Ansible

Ad Hoc Ansible Commands

Ad hoc commands are Ansible tasks you can run against managed hosts without the need for a playbook or script. They are used for bringing nodes to their desired state, verifying playbook results, and verifying that nodes meet any needed criteria/prerequisites. They must be run as the Ansible user (whatever your remote_user directive is set to under [defaults] in ansible.cfg).

Run the user module with the argument name=lisa on all hosts to make sure the user “lisa” exists. If the user doesn’t exist, it will be created on the remote system: ansible all -m user -a "name=lisa"

{command} {host} -m {module} -a {"argument1 argument2 argument3"}

In our lab:

ansible all -m user -a "name=lisa"

This ad hoc command created user “lisa” on ansible1 and ansible2. If we run the command again, we get “SUCCESS” on the first line instead of “CHANGED”, which means the hosts already meet the requirements:

[ansible@control base]$ ansible all -m user -a "name=lisa"

idempotent Regardless of the current condition, the host is brought to the desired state, even if you run the command multiple times.

Run the command id lisa on all managed hosts:

[ansible@control base]$ ansible all -m command -a "id lisa"

Here, the command module is used to run a command on the specified hosts, and the output is displayed on screen. Note that this does not show up in our ansible user’s command history on the host:

[ansible@ansible1 ~]$ history

Remove the user lisa from all managed hosts:

[ansible@control base]$ ansible all -m user -a "name=lisa state=absent"

You can also use the -u option to specify the user that Ansible will use to run the command. Remember, with no module specified, Ansible uses the command module: ansible all -a "free -m" -u david

Ansible Builder

Builds portable control nodes packaged as containers (execution environments).

  • Works with AWX and Ansible Navigator for playbook development and testing.
  • Lets you choose specific Python and ansible-core versions.
  • Can also package Python packages, system packages, and Ansible collections.

Steps needed:

  1. Install ansible-builder
  2. Make sure podman is installed
  3. Make an execution-environment.yml file (a sample is sketched after this list) that includes:
    1. Base container image
    2. Python version
    3. Ansible-core version
    4. ansible-runner version
    5. collections with version restrictions
    6. system packages with version restrictions
    7. Python packages with version restrictions
    8. other items to download, install, or configure
  • If the base image includes Python, you can omit that.
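
A minimal sketch of such a definition file, assuming the version 3 schema; the base image, versions, and collection are illustrative:

---
version: 3

images:
  base_image:
    name: quay.io/centos/centos:stream9

dependencies:
  ansible_core:
    package_pip: ansible-core==2.15.0
  ansible_runner:
    package_pip: ansible-runner
  galaxy:
    collections:
      - name: ansible.posix
        version: ">=1.5.0"
  python:
    - requests>=2.28
  system:
    - git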

Ansible Builder executes two steps:

  1. Create a Containerfile for Podman or a Dockerfile for Docker, based on the definition file
  2. Run the containerization tool to build an image based on the build instruction file and the build context

ansible-builder build

  • runs both steps

ansible-builder create

  • runs first step only

Building images with ansible-builder

Four stages to build a container image:

  1. Base: pull the base image; install the Python version, pip, ansible-runner, and ansible-core
  2. Galaxy: download collections and store them locally as files
  3. Builder: download Python/system packages and store them locally as files
  4. Final: install the downloaded files on the output of the base stage, generating a new image that includes all the content.

Ansible Builder injects hooks at each stage of the container build process so you can add custom steps before and after every build stage.

You may need to install certain packages or utilities before the Galaxy and Builder stages. For example, if you need to install a collection from GitHub, you must install git after the Base stage to make it available during the Galaxy stage.

To add custom build steps, add an additional_build_steps section to your execution environment definition.
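
A sketch, assuming the version 3 step names and a dnf-based base image; it installs git after the Base stage so it is available during the Galaxy stage:

additional_build_steps:
  append_base:
    - RUN dnf install -y git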

Install: pip3 install ansible-builder

Ansible Facts

An Ansible fact is a variable that contains information about a target system. This information can be used in conditional statements to tailor playbooks to that system. System facts are system property values. Custom facts are user-defined variables stored on managed hosts.

Facts are collected when Ansible executes on the remote system. You’ll see a “Gathering Facts” task every time you run a playbook. These facts are then stored in the variable ansible_facts.

Use the debug module to check the value of variables. This module requires variables to be enclosed in curly brackets. This example shows a large list of facts from managed nodes:

---
- name: show facts
  hosts: all
  tasks:
  - name: show facts
    debug:
      var: ansible_facts

There are two supported formats for using Ansible fact variables:

It’s recommended to use square brackets: ansible_facts['default_ipv4']['address'] but dotted notation is also supported for now: ansible_facts.default_ipv4.address

Commonly used ansible_facts include the hostname, distribution, IP addresses, network interfaces, and devices; see the comparison table below.

There are additional Ansible modules for gathering more information. See ansible-doc -l | grep fact


The package_facts module collects information about software packages installed on managed hosts.
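
A short sketch of using package_facts; the httpd package name is just an example:

---
- name: inspect installed packages
  hosts: all
  tasks:
  - name: gather package facts
    package_facts:
  - name: show httpd package info if installed
    debug:
      var: ansible_facts.packages['httpd']
    when: "'httpd' in ansible_facts.packages"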

Two ways facts are displayed

ansible_facts variable (current way)

  • All facts are stored in a dictionary with the name ansible_facts, and items in this dictionary are addressed using the notation with square brackets
  • ie: ansible_facts['distribution_version']
  • Recommended to use this.

injected variables (old way)

  • Variables are prefixed with the string ansible_

  • Will lose support eventually

  • Both the old approach and the new approach still occur.

    • Running the ansible ansible1 -m setup command shows that Ansible facts are injected as variables:
    ansible1 | SUCCESS => {
        "ansible_facts": {
            "ansible_all_ipv4_addresses": [
                "192.168.122.1",
                "192.168.4.201"
            ],
            "ansible_all_ipv6_addresses": [
                "fe80::e564:5033:5dec:aead"
            ],
            "ansible_apparmor": {

Comparing ansible_facts Versus Injected Facts as Variables

ansible_facts                                                   Injected Variable
---------------------------------------------------------------------------------
ansible_facts['hostname']                                       ansible_hostname
ansible_facts['distribution']                                   ansible_distribution
ansible_facts['default_ipv4']['address']                        ansible_default_ipv4['address']
ansible_facts['interfaces']                                     ansible_interfaces
ansible_facts['devices']                                        ansible_devices
ansible_facts['devices']['sda']['partitions']['sda1']['size']   ansible_devices['sda']['partitions']['sda1']['size']
ansible_facts['distribution_version']                           ansible_distribution_version

Either notation can be used with either method; the listings below address the facts in dotted notation, not in the notation with square brackets.

Addressing Facts with Injected Variables:

    - hosts: all
      tasks:
      - name: show IP address
        debug:
          msg: >
            This host uses IP address {{ ansible_default_ipv4.address }}

Addressing Facts Using the ansible_facts Variable

    ---
    - hosts: all
      tasks:
      - name: show IP address
        debug:
          msg: >
            This host uses IP address {{ ansible_facts.default_ipv4.address }}

If, for some reason, you want the method where facts are injected into variables to be the default method, you can set inject_facts_as_vars=true in the [defaults] section of the ansible.cfg file.

• In Ansible versions since 2.5, all facts are stored in one variable: ansible_facts. This method is used while gathering facts from a playbook.

• Before Ansible version 2.5, facts were injected into variables such as ansible_hostname. This method is used by the setup module. (Note that this may change in future versions of Ansible.)

• Facts can be addressed in dotted notation: {{ansible_facts.default_ipv4.address }}

• Alternatively, facts can be addressed in square brackets notation: {{ ansible_facts['default_ipv4']['address'] }}. (preferred)

Managing Fact Gathering

By default, upon execution of each playbook, facts are gathered. This does slow down playbooks, and for that reason, it is possible to disable fact gathering completely. To do so, you can use the gather_facts: no parameter in the play header. If later in the same playbook it is necessary to gather facts, you can do this by running the setup module in a task.
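
For example, a play can skip automatic fact gathering and run the setup module only when facts are actually needed:

---
- name: play without automatic fact gathering
  hosts: all
  gather_facts: no
  tasks:
  - name: gather facts on demand
    setup:
  - name: use a gathered fact
    debug:
      var: ansible_facts['distribution']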

Even if it is possible to disable fact gathering for all of your Ansible configuration, this practice is not recommended. Too many playbooks use conditionals that are based on the current value of facts, and all of these conditionals would stop working if fact gathering were disabled altogether.

As an alternative to make working with facts more efficient, you can enable a fact cache. To do so, you need to install an external plug-in. Currently, two plug-ins are available for this purpose: jsonfile and redis. To configure fact caching using the redis plug-in, you need to install it first. Next, you can enable fact caching through ansible.cfg.

The following procedure describes how to do this:

1. Use yum install redis.

2. Use service redis start.

3. Use pip install redis.

4. Edit /etc/ansible/ansible.cfg and ensure it contains the following parameters:

[defaults]
gathering = smart
fact_caching = redis
fact_caching_timeout = 86400

Note

Fact caching can be convenient but should be used with caution. If, for instance, a playbook installs a certain package only if a sufficient amount of disk space is available, it should not do this based on information that may be up to 24 hours old. For that reason, using a fact cache is not recommended in many situations.

Custom Facts

  • Used to provide a host with arbitrary values that Ansible can use to change the behavior of plays.

  • can be provided as static files.

  • files must

    • be in either INI or JSON format,
    • have the extension .fact, and
    • be stored on the managed hosts in the /etc/ansible/facts.d directory.
  • can be generated by a script, and

    • in that case the only requirement is that the script must generate its output in JSON format.

Dynamic custom facts are useful because they allow the facts to be determined at the moment the script runs. The following listing provides an example of a static custom fact file.

Custom Facts Sample File:

    [packages]
    web_package = httpd
    ftp_package = vsftpd
    
    [services]
    web_service = httpd
    ftp_service = vsftpd

To get the custom facts files on the managed hosts, you can use a playbook that copies a local custom fact file (existing in the current Ansible project directory) to the appropriate location on the managed hosts. Notice that this playbook uses variables, which are explained in more detail in the section titled “Working with Variables.”

    ---
    - name: Install custom facts
      hosts: all
      vars:
        remote_dir: /etc/ansible/facts.d
        facts_file: listing68.fact
      tasks:
      - name: create remote directory
        file:
          state: directory
          recurse: yes
          path: "{{ remote_dir }}"
      - name: install new facts
        copy:
          src: "{{ facts_file }}"
          dest: "{{ remote_dir }}"

Custom facts are stored in the variable ansible_facts.ansible_local. In this variable, you use the filename of the custom fact file and the label in the custom fact file. For instance, after you run the playbook above, the web_package fact that was defined in listing68.fact is accessible as

{{ ansible_facts['ansible_local']['listing68']['packages']['web_package'] }}

To verify, you can use the setup module with the filter argument. Notice that because the setup module produces injected variables as a result, the ad hoc command to use is ansible all -m setup -a "filter=ansible_local" . The command ansible all -m setup -a "filter=ansible_facts\['ansible_local'\]" does not work.

Lab Working with Ansible Facts

1. Create a custom fact file with the name custom.fact and the following contents:

[software]
package = httpd
service = httpd
state = started
enabled = true

2. Write a playbook with the name copy_facts.yaml and the following contents:

---
- name: copy custom facts
  become: yes
  hosts: ansible1
  tasks:
  - name: create the custom facts directory
    file:
      state: directory
      recurse: yes
      path: /etc/ansible/facts.d
  - name: copy the custom facts
    copy:
      src: custom.fact
      dest: /etc/ansible/facts.d

3. Apply the playbook using ansible-playbook copy_facts.yaml -i inventory

4. Check the availability of the custom facts by using ansible all -m setup -a "filter=ansible_local" -i inventory

5. Use an ad hoc command to ensure that the httpd service is not installed on any of the managed servers: ansible all -m yum -a "name=httpd state=absent" -i inventory -b

6. Create a playbook with the name setup_with_facts.yaml that installs and enables the httpd service, using the custom facts:

---
- name: install and start the web service
  hosts: ansible1
  tasks:
  - name: install the package
    yum:
      name: "{{ ansible_facts['ansible_local']['custom']['software']['package'] }}"
      state: latest
  - name: start the service
    service:
      name: "{{ ansible_facts['ansible_local']['custom']['software']['service'] }}"
      state: "{{ ansible_facts['ansible_local']['custom']['software']['state'] }}"
      enabled: "{{ ansible_facts['ansible_local']['custom']['software']['enabled'] }}"

7. Run the playbook to install and set up the service by using ansible-playbook setup_with_facts.yaml -i inventory -b

8. Use an ad hoc command to verify the service is running: ansible ansible1 -a "systemctl status httpd" -i inventory -b

Ansible Galaxy Roles

Using Ansible Galaxy Roles

  • Ansible Galaxy is a public library of Ansible content and contains thousands of roles that have been provided by community members.

Working with Galaxy

The easiest way to work with Ansible Galaxy is to use the website at https://galaxy.ansible.com:

  • Use the Search Feature to Search for Specific Packages

  • In the result of any Search action, you see a list of collections as well as a list of roles.

  • An Ansible Galaxy collection is a distribution format for Ansible content.

  • It can contain roles, but also playbooks, modules, and plug-ins.

  • In most cases you just need the roles, not the collection: roles contain all that you include in the playbooks you’re working with.

  • Some important indicators are the number of times the role has been downloaded and the score of the role.

  • This information enables you to easily distinguish between commonly used roles and roles that are not used that often.

  • Also, you can use tags to make identifying Galaxy roles easier.

  • These tags provide more information about a role and make it possible to search for roles in a more efficient way.

  • You can download roles directly from the Ansible Galaxy website
  • You can also use the ansible-galaxy command

Using the ansible-galaxy Command

ansible-galaxy search

  • Find roles based on many different keywords and manage them.
  • Must provide a string as an argument.
  • Ansible searches for this string in the name and description of the roles.

Useful command-line options:

  • --platforms: operating system platform to search for
  • --author: GitHub username of the author
  • --galaxy-tags: additional tags to filter by

ansible-galaxy info

  • Get more information about a role.
[ansible@control ansible-lab]$ ansible-galaxy info geerlingguy.docker

Role: geerlingguy.docker
        description: Docker for Linux.
        commit: 9115e969c1e57a1639160d9af3477f09734c94ac
        commit_message: Merge pull request #501 from adamus1red/adamus1red/alpine-compose

add compose package to Alpine specific variables
        created: 2023-05-08T20:49:45.679874Z
        download_count: 23592264
        github_branch: master
        github_repo: ansible-role-docker
        github_user: geerlingguy
        id: 10923
        imported: 2025-03-24T00:01:45.901567
        modified: 2025-03-24T00:01:47.840887Z
        path: ('/home/ansible/.ansible/roles', '/usr/share/ansible/roles', '/etc/ansible/roles')
        upstream_id: None
        username: geerlingguy

Managing Ansible Galaxy Roles

ansible-galaxy install

  • Install a role.
  • Normally installs the role into the ~/.ansible/roles directory, because this directory is listed in the roles_path setting in ansible.cfg.
  • If you want roles to be installed in another directory, consider changing this parameter.
  • Use the -p option to install the role to a different roles path directory.

Requirements file

  • A YAML file that you can include when using the ansible-galaxy command:
    - src: geerlingguy.nginx
      version: "2.7.0"
  • possible to add roles from sources other than Ansible Galaxy, such as a Git repository or a tarball.
  • In that case, you must specify the exact URL to the role using the src option.
  • When you are installing roles from a Git repository, the scm keyword is also required and must be set to git; see the sketch below.

To install a role using the requirements file, you can use the -r option with the ansible-galaxy install command: ansible-galaxy install -r roles/requirements.yml
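
A hedged example of a requirements entry that installs a role from a Git repository; the URL, version, and role name are illustrative:

- src: https://github.com/example/ansible-role-demo.git
  scm: git
  version: main
  name: demo_role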

ansible-galaxy list

  • Get a list of currently installed roles

ansible-galaxy remove

  • Remove roles from your system.

LAB: Using ansible-galaxy to Manage Roles

  • Type ansible-galaxy search --author geerlingguy --platforms EL to see a list of roles that geerlingguy has created.
  • Make the command more specific and type ansible-galaxy search nginx --author geerlingguy --platforms EL to find the geerlingguy.nginx role.
  • Request more information about this role by using ansible-galaxy info geerlingguy.nginx.
  • Create a requirements file with the name listing96.yaml and give this file the following contents:
- src: geerlingguy.nginx
  version: "2.7.0"
  • Add the line roles_path = /home/ansible/roles to the ansible.cfg file.

  • Use the command ansible-galaxy install -r listing96.yaml to install the role from the requirements file. It is possible that by the time you run this exercise, the specified version 2.7.0 is no longer available. If that is the case, use ansible-galaxy info again to find a version that still is available, and change the requirements file accordingly.

  • Type ansible-galaxy list to verify that the new role was successfully installed on your system.

  • Write a playbook with the name exercise92.yaml that uses the role and has the following contents:

---
- name: install nginx using Galaxy role
  hosts: ansible2
  roles:
  - geerlingguy.nginx
  • Run the playbook using ansible-playbook exercise92.yaml and observe that the new role is installed from the custom roles path.

Ansible Inventory and Ansible.cfg

Ansible projects

For small companies, you can use a single Ansible configuration. But for larger ones, it’s a good idea to use different project directories. A project directory contains everything you need to work on a single project. Including:

  • playbooks
  • variable files
  • task files
  • inventory files
  • ansible.cfg

playbook An Ansible script written in YAML that enforces the desired configuration on managed hosts.

Inventory

A file that identifies the hosts that Ansible has to manage. You can also use it to list and group hosts and to specify host variables. Each project should have its own inventory file.

/etc/ansible/hosts

  • can be used for system-wide inventory.
  • default if no inventory file is specified.
  • has some basic inventory formatting info if you forget.
  • Ansible will also target localhost if no hosts are found in the inventory file.
  • In large environments, it’s a good idea to store inventory files in their own project folders.

localhost is not defined in inventory. It is an implicit host that is usable and refers to the Ansible control machine. Using localhost can be a good way to verify the accessibility of services on managed hosts.

Listing hosts

List hosts by IP address or hostname. You can also list a range of hosts in an inventory file, such as web-server[1:10].example.com

ansible1:2222 <-- specify SSH port if the host is not using the default port 22
ansible2
10.0.10.55
web-server[1:10].example.com

Listing groups

You can list groups and groups of groups. Note how the groups web and db are included in the group servers via the children parameter:

ansible1
ansible2
10.0.10.55
web-server[1:10].example.com

[web]
web-server[1:10].example.com

[db]
db1
db2

[servers:children] <-- servers is the group of groups and children is the parameter that specifies child groups
web
db

There are 3 general approaches to using groups:

Functional groups Address a specific group of hosts according to use. Such as web servers or database servers.

Regional host groups Used when working with region oriented infrastructure. Such as USA, Canada.

Staging host groups Used to address different hosts according to the staging phase that the current environment is in. Such as testing, development, production.

Host groups that you do not define yourself are called implicit host groups. These are all, ungrouped, and localhost; the names make their meaning obvious.

Host variables

In older versions of Ansible you could define variables for hosts and host groups in the inventory file. This approach is deprecated. Example:

[groupname:vars]
ansible_user=ansible

Variables are now set using host_vars and group_vars directories instead.

Multiple inventory files

Put all inventory files in a directory and specify the directory as the inventory to be used. For dynamic inventories you also need to set the execution bit on the inventory script.
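
For example, assuming a directory inventories/ containing the static inventory files production and staging (illustrative names), you can point commands at the whole directory:

ansible-inventory -i inventories/ --graph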

Ansible Playbooks

  • Exploring playbooks
  • YAML
  • Managing Multiplay Playbooks

Let’s create our first playbook:

[ansible@control base]$ vim playbook.yaml

---
- name: install start and enable httpd <-- play is at the highest level
  hosts: all
  tasks: <-- play has a list of tasks
  - name: install package <-- name of task 1
    yum: <-- module
      name: httpd <-- argument 1
      state: installed <-- argument 2
  - name: start and enable service <-- task 2
    service:
      name: httpd
      state: started
      enabled: yes

There are three dashes at the top of the playbook, and sometimes you’ll find three dots at the end of a playbook. These make it easy to isolate the playbook and embed the playbook code into other projects.

Playbooks are written in YAML format and saved as either .yml or .yaml. YAML specifies objects as key-value pairs (dictionaries). Key-value pairs can be listed in either key: value (preferred) or key=value format. Dashes specify lists of embedded objects.

A playbook contains a collection of one or more plays. Each play targets specific hosts and lists tasks to perform on those hosts. There is one play here, with the name “install start and enable httpd”. You target hosts at the top of the play, not in the individual tasks performed.

Each task is identified by “- name” (not required, but recommended for troubleshooting and identifying tasks). The module is then listed with its arguments and their values under it.

Indentation is important here. It identifies the relationships between different elements. Data elements at the same level must have the same indentation. And items that are children or properties of another element must be indented more than their parent elements.

Indentation is created using spaces. Usually two spaces is used, but not required. You cannot use tabs for indentation.

You can also edit your .vimrc file to help with indentation when it detects that you are working with a YAML file: vim ~/.vimrc

autocmd FileType yaml setlocal ai ts=2 sw=2 et

Required elements:

  • hosts - name of host(s) to perform play on
  • name - name of the play
  • tasks - one or more tasks to execute for this play

To run a playbook:

[ansible@control base]$ ansible-playbook playbook.yaml 

# Name of the play
PLAY [install start and enable httpd] ***********************************************

# Overview of tasks and the hosts it was successful on
TASK [Gathering Facts] **************************************************************
fatal: [web1]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: ssh: Could not resolve hostname web1: Name or service not known", "unreachable": true}
fatal: [web2]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: ssh: Could not resolve hostname web2: Name or service not known", "unreachable": true}
ok: [ansible1]
ok: [ansible2]

TASK [install package] **************************************************************
ok: [ansible1]
ok: [ansible2]

TASK [start and enable service] *****************************************************
ok: [ansible2]
ok: [ansible1]

# Overview of the status of each host. ok counts tasks that completed successfully;
# changed counts tasks that actually modified the target node (changed=0 means no changes were required).
PLAY RECAP **************************************************************************
ansible1                   : ok=3    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
ansible2                   : ok=3    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
web1                       : ok=0    changed=0    unreachable=1    failed=0    skipped=0    rescued=0    ignored=0   
web2                       : ok=0    changed=0    unreachable=1    failed=0    skipped=0    rescued=0    ignored=0   

Before running tasks, the ansible-playbook command gathers facts (current configuration and settings) about managed nodes.

How to undo playbook modifications

Ansible does not have a built-in feature to undo a playbook that you ran. To undo changes, you need to make another playbook that defines the new desired state of the host.
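
For example, a minimal sketch that reverses the httpd play shown earlier by describing the opposite desired state (assuming httpd was installed by that play):

---
- name: undo httpd installation
  hosts: all
  tasks:
  - name: stop and disable the service
    service:
      name: httpd
      state: stopped
      enabled: no
  - name: remove the package
    yum:
      name: httpd
      state: absent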

Working with YAML

Key value pairs can also be listed as:

tasks:
 - name: install vsftpd
   yum: name=vsftpd
 - name: enable vsftpd
   service: name=vsftpd enabled=true
 - name: create readme file

But it is better to list them as follows for readability (this is the “create readme file” task completed in key: value format):

    copy:
      content: "welcome to the FTP server\n"
      dest: /var/ftp/pub/README
      force: no
      mode: 0444

Some modules support multiple values for a single key:

---
- name: install multiple packages
  hosts: all
  tasks:
  - name: install packages
    yum:
      name: <-- key with multiple values
      - nmap 
      - httpd
      - vsftpd
      state: latest <-- will install and/or update to latest version

YAML Strings

Valid formats for a string in YAML:

  • super string
  • "super string"
  • 'super string'

When inserting text into a file, you may have to deal with spacing. You can either preserve newline characters with a pipe | such as:

    - name: Using | to preserve newlines
      copy:
        dest: /tmp/rendezvous-with-death.txt
        content: |
          I have a rendezvous with Death
          At some disputed barricade,
          When Spring comes back with rustling shade
          And apple-blossoms fill the air—

Output:

I have a rendezvous with Death
At some disputed barricade,
When Spring comes back with rustling shade
And apple-blossoms fill the air—

Or choose not to, folding lines into one with a greater-than sign >:

    - name: Using > to fold lines into one
      copy:
        dest: /tmp/rendezvous-with-death.txt
        content: >
          I have a rendezvous with Death
          At some disputed barricade,
          When Spring comes back with rustling shade
          And apple-blossoms fill the air—

Output:

I have a rendezvous with Death At some disputed barricade, When Spring comes back with rustling shade And apple-blossoms fill the air—

Checking syntax with --syntax-check

You can use the --syntax-check flag to check a playbook for errors. The ansible-playbook command checks syntax by default, though, and throws the same error messages. The syntax check stops after detecting a single error, so you need to fix earlier errors in order to see errors further down the file. I’ve added extra indentation in front of the hosts key to demonstrate:

[ansible@control base]$ cat playbook.yaml 
---
- name: install start and enable httpd
    hosts: all
  tasks:
  - name: install package
    yum:
      name: httpd
      state: installed
  - name: start and enable service
    service:
      name: httpd
      state: started
      enabled: yes
      
[ansible@control base]$ ansible-playbook --syntax-check playbook.yaml 
ERROR! We were unable to read either as JSON nor YAML, these are the errors we got from each:
JSON: Expecting value: line 1 column 1 (char 0)

Syntax Error while loading YAML.
  mapping values are not allowed in this context

The error appears to be in '/home/ansible/base/playbook.yaml': line 3, column 10, but may
be elsewhere in the file depending on the exact syntax problem.

The offending line appears to be:

- name: install start and enable httpd
    hosts: all
         ^ here

And here it is again, after fixing the syntax error:

[ansible@control base]$ vim playbook.yaml 
[ansible@control base]$ cat playbook.yaml 
---
- name: install start and enable httpd
  hosts: all
  tasks:
  - name: install package
    yum:
      name: httpd
      state: installed
  - name: start and enable service
    service:
      name: httpd
      state: started
      enabled: yes
[ansible@control base]$ ansible-playbook --syntax-check playbook.yaml 

playbook: playbook.yaml

Doing a dry run

Use the -C flag to perform a dry run. This will check the success status of all of the tasks without actually making any changes. ansible-playbook -C playbook.yaml

Multiple play playbooks

Using multiple plays in a playbook lets you set up one group of servers with one configuration and another group with a different configuration. Each play has its own list of hosts to address.

You can also specify different parameters in each play such as become: or the remote_user: parameters.

Try to keep playbooks small, as bigger playbooks are harder to troubleshoot. You can use include: to include other playbooks (see the sketch below). Besides easier troubleshooting, smaller playbooks can be reused flexibly to perform a wider range of tasks.
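
A sketch of a top-level playbook that pulls in smaller ones; the file names are illustrative, and import_playbook is the current form of the older include:

---
- import_playbook: web.yaml
- import_playbook: db.yaml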

Here is an example of a playbook with two plays:

---
- name: install start and enable httpd   <--- play 1
  hosts: all
  tasks:
  - name: install package
    yum:
      name: httpd
      state: installed
  - name: start and enable service
    service:
      name: httpd
      state: started
      enabled: yes

- name: test httpd accessibility <-- play 2
  hosts: localhost
  tasks:
  - name: test httpd access
    uri:
      url: http://ansible1

Verbose output options

You can increase the output of verbosity to an amount hitherto undreamt of. This can be useful for troubleshooting.

Verbose output of the playbook above showing task results: [ansible@control base]$ ansible-playbook -v playbook.yaml

Verbose output of the playbook above showing task results and task configuration: [ansible@control base]$ ansible-playbook -vv playbook.yaml

Verbose output of the playbook above showing task results, task configuration, and info about connections to managed hosts: [ansible@control base]$ ansible-playbook -vvv playbook.yaml

Verbose output of the playbook above showing task results, task configuration, and info about connections to managed hosts, plug-ins, user accounts, and executed scripts: [ansible@control base]$ ansible-playbook -vvvv playbook.yaml

Lab playbook

Now we know enough to create and enable a simple webserver. Here is a playbook example. Just make sure to download the posix collection or you won’t be able to use the firewalld module: [ansible@control base]$ ansible-galaxy collection install ansible.posix

[ansible@control base]$ cat playbook.yaml 
---
- name: Enable web server 
  hosts: ansible1
  tasks:
  - name: install package
    yum:
      name: 
        - httpd
        - firewalld
      state: installed
  - name: Create welcome page
    copy:
      content: "Welcome to the webserver!\n"
      dest: /var/www/html/index.html
  - name: start and enable service
    service:
      name: httpd
      state: started
      enabled: yes
  - name: enable firewall
    service: 
      name: firewalld
      state: started
      enabled: true
  - name: Open service in firewall
    firewalld:
      service: http
      permanent: true
      state: enabled
      immediate: yes

- name: test webserver accessibility
  hosts: localhost
  become: no
  tasks:
  - name: test webserver access
    uri:
      url: http://ansible1
      return_content: yes <-- Return the body of the response as a content key in the dictionary result
      status_code: 200 <-- Fail unless this HTTP status code is returned

After running this playbook, you should be able to reach the webserver at http://ansible1

With return_content and status_code set, the verbose output looks like this:

ok: [localhost] => {"accept_ranges": "bytes", "changed": false, "connection": "close", "content": "Welcome to the webserver!\n", "content_length": "26", "content_type": "text/html; charset=UTF-8", "cookies": {}, "cookies_string": "", "date": "Thu, 10 Apr 2025 12:12:37 GMT", "elapsed": 0, "etag": "\"1a-6326b4cfb4042\"", "last_modified": "Thu, 10 Apr 2025 11:58:14 GMT", "msg": "OK (26 bytes)", "redirected": false, "server": "Apache/2.4.62 (Red Hat Enterprise Linux)", "status": 200, "url": "http://ansible1"}

Adds this: "content": "Welcome to the webserver!\n" and this: "status": 200, "url": "http://ansible1"} to verbose output for that task.

Ansible Roles

Work with roles and Create roles

Using Ansible Roles

  • Ready-to-use playbook-based Ansible solutions that you can easily include in your own playbooks.
  • Community roles are provided through Ansible Galaxy
  • Also possible to create your own roles.
  • Red Hat provides RHEL System Roles.
  • Roles make it possible to provide Ansible code in a reusable way.
  • You can easily define a specific task in a role, and after defining it in a role, you can easily redistribute that and ensure that tasks are handled the same way, no matter where they are executed.
  • Roles can be custom-made for specific environments, or default roles provided from Ansible Galaxy can be used.

Understanding Ansible Roles

  • work with include files.
  • All the different components that you may use in a playbook are used in roles and stored in separate directories.
  • While defining the role, you don’t need to tell the role that it should look in some of these specific directories; it does that automatically.
  • The only thing you need to do is tell your Ansible playbook that it should include a role.
  • Different components of the role are stored in different subdirectories.

Roles Sample Directory Structure:

    [ansible@control roles]$ tree testrole/
    testrole/
    |-- defaults
    |   `-- main.yml
    |-- files
    |-- handlers
    |   `-- main.yml
    |-- meta
    |   `-- main.yml
    |-- README.md
    |-- tasks
    |   `-- main.yml
    |-- templates
    |-- tests
    |   |-- inventory
    |   `-- test.yml
    `-- vars
        `-- main.yml

Role Directory Structure

defaults

  • Default variables that may be overwritten by other variables

files

  • Static files that are needed by role tasks

handlers

  • Handlers for use in this role

meta

  • Metadata, such as dependencies, plus license and maintainer information

tasks

  • Role task definitions

templates

  • Jinja2 templates

tests

  • Optional inventory and a test.yml file to test the role

vars

  • Variables that are not meant to be overwritten

  • Most of the role directories have a main.yml file.

  • This is the entry-point YAML file that is used to define components in the role.

Understanding Role Location

Roles can be stored in different locations:

./roles

  • store roles in the current project directory.
  • highest precedence.

~/.ansible/roles

  • exists in the current user home directory and makes the role available to the current user only.
  • second-highest precedence.

/etc/ansible/roles

  • Where roles are stored to make them accessible to any user.

/usr/share/ansible/roles

  • Where roles are stored after they are installed from RPM files.
  • lowest precedence
  • should not be used for storing custom-made roles.

ansible-galaxy init { newrolename }

  • create a custom role
  • creates the default role directory structure with a main.yml file
  • includes sample files

Using Roles from Playbooks

  • Call roles in a playbook the same way you call a task
  • Roles are included as a list.
    ---
    - name: include some roles
      hosts: all
      roles:
      - role1
      - role2
  • Roles are executed before the tasks.
  • In specific cases you might have to execute tasks before the roles. To do so, you can specify these tasks in a pre_tasks section.
  • Also, it’s possible to use the post_tasks section to include tasks that will be executed after the roles, but also after tasks specified in the playbook as well as the handlers they call.

Creating Custom Roles

  • Use mkdir roles to create a roles subdirectory in the current directory, and use cd roles to get into that subdirectory.
  • Use ansible-galaxy init motd to create the motd role structure.
  • Add contents to motd/tasks/main.yml
  • Add contents to motd/templates/motd.j2
  • Add contents to motd/defaults/main.yml
  • Add contents to motd/meta/main.yml
  • Create the playbook exercise91.yaml to run the role
  • Run the playbook by using ansible-playbook exercise91.yaml
  • Verify that modifications have been applied correctly by using the ad hoc command ansible ansible2 -a "cat /etc/motd"

Sample role all under roles/motd/:

defaults/main.yml

---
# defaults file for motd
system_manager: anna@example.com

meta/main.yml

galaxy_info:
  author: Sander van V
  description: your description
  company: your company (optional)
  license: license (GPLv2, CC-BY, etc)
  min_ansible_version: 2.5

tasks/main.yml

---
# tasks file for motd
- name: copy motd file
  template:
    src: templates/motd.j2
    dest: /etc/motd
    owner: root
    group: root
    mode: 0444

templates/motd.j2

Welcome to {{ ansible_hostname }}
    
This file was created on {{ ansible_date_time.date }}
Disconnect if you have no business being here
    
Contact {{ system_manager }} if anything is wrong

Playbook motd.yml:

---
- name: use the motd role playbook
  hosts: ansible2
  roles:
  - role: motd
    system_manager: bob@example.com

handlers/main.yml example:

---
# handlers file for base-config
  - name: source profile
    shell: source /etc/profile   # source is a shell builtin; the command module cannot run it

  - name: source bash
    shell: source /etc/bash.bashrc

Managing Role Dependencies

  • Roles may use other roles as a dependency.
  • You can put role dependencies in meta/main.yml
  • Dependent roles are always executed before the roles that depend on them.
  • Dependent roles are executed once.
  • When two roles that are used in a playbook call the same dependency, the dependent role is executed once only.
  • When calling dependent roles, it is possible to pass variables to the dependent role.
  • You can define a when statement to ensure that the dependent role is executed only in specific situations.

Defining dependencies in meta/main.yml

    dependencies:
    - role: apache
      port: 8080
    - role: mariadb
      when: environment == 'production'

Understanding File Organization Best Practices

  • Working with roles splits the contents of the role off from the tasks that are run through the playbook.

  • Splitting files to store them in a location that makes sense is common in Ansible

  • When you’re working with Ansible, it’s a good idea to work with project directories in bigger environments.

  • Working with project directories makes it easier to delegate tasks and have the right people responsible for the right things.

  • Each project directory may have its own ansible.cfg file, inventory file, and playbooks.

  • If the project grows bigger, variable files and other include files may be used, and they are normally stored in subdirectories.

  • At the top-level directory, create the main playbook from which other playbooks are included. The suggested name for the main playbook is site.yml.

  • Use group_vars/ and host_vars/ to set host-related variables and do not define them in inventory.

  • Consider using different inventory files to differentiate between production and staging phases.

  • Use roles to standardize common tasks.

When you are working with roles, some additional recommendations apply:

  • Use a version control repository to maintain roles in a consistent way. Git is commonly used for this purpose.

  • Sensitive information should never be included in roles. Use Ansible Vault to store sensitive information in an encrypted way.

  • Use ansible-galaxy init to create the role base structure. Remove files and directories you don’t use.

  • Don’t forget to provide additional information in the role’s README.md and meta/main.yml files.

  • Keep roles focused on a specific function. It is better to use multiple roles to perform multiple tasks.

  • Try to develop roles in a generic way, such that they can be used for multiple purposes.

Lab 9-1

Create a playbook that starts the Nginx web server on ansible1, according to the following requirements:

  • A requirements file must be used to install the Nginx web server. Do NOT use the latest version of the Galaxy role, but instead use the version before that.
  • The same requirements file must also be used to install the latest version of postgresql.
  • The playbook needs to ensure that neither httpd nor mysql is currently installed.

Lab 9-2

Use the RHEL SELinux System Role to manage SELinux properties according to the following requirements:

  • A Boolean is set to allow SELinux relabeling to be automated using cron.
  • The directory /var/ftp/uploads is created, permissions are set to 777, and the context label is set to public_content_rw_t.
  • SELinux should allow web servers to use port 82 instead of port 80.
  • SELinux is in enforcing state.

Lab 9-1

Create a playbook that starts the Nginx web server on ansible1, according to the following requirements:

  • A requirements file must be used to install the Nginx web server. Do NOT use the latest version of the Galaxy role, but instead use the version before that.
  • The same requirements file must also be used to install the latest version of postgresql.

ansible-galaxy install -r roles/requirements.yml

cat roles/requirements.yml

- src: geerlingguy.nginx
  version: "3.1.4"

- src: geerlingguy.postgresql

• The playbook needs to ensure that neither httpd nor mysql is currently installed.

---
- name: ensure conflicting packages are not installed
  hosts: web1
  tasks:
  - name: remove packages
    yum:
      name: 
      - mysql
      - httpd
      state: absent

- name: nginx web server
  hosts: web1
  roles: 
  - geerlingguy.nginx
  - geerlingguy.postgresql

(I had to add a variable file for RHEL 10 to the role.)

Lab 9-2

Use the RHEL SELinux System Role to manage SELinux properties according to the following requirements:

  • A Boolean is set to allow SELinux relabeling to be automated using cron.
  • The directory /var/ftp/uploads is created, permissions are set to 777, and the context label is set to public_content_rw_t.
  • SELinux should allow web servers to use port 82 instead of port 80.
  • SELinux is in enforcing state.

vim lab92.yml

---
- name: manage ftp selinux properties
  hosts: ftp1
  vars:
    selinux_booleans: 
      - name: cron_can_relabel
        state: true
        persistent: true
    selinux_state: enforcing
    selinux_ports:
    - ports: 82
      proto: tcp
      setype: http_port_t
      state: present
      local: true
 
  tasks:

  - name: create /var/ftp/uploads/
    file:
      path: /var/ftp/uploads
      state: directory
      mode: '0777'
  
  - name: set selinux context
    sefcontext:
      target: '/var/ftp/uploads(/.*)?'
      setype: public_content_rw_t
      ftype: d
      state: present
    notify: run restorecon

  - name: Execute the role and reboot in a rescue block
    block:
      - name: Include selinux role
        include_role:
          name: rhel-system-roles.selinux
    rescue:
      - name: >-
          Fail if failed for a different reason than selinux_reboot_required
        fail:
          msg: "role failed"
        when: not selinux_reboot_required

      - name: Restart managed host
        reboot:

      - name: Wait for managed host to come back
        wait_for_connection:
          delay: 10
          timeout: 300

      - name: Reapply the role
        include_role:
          name: rhel-system-roles.selinux
  
  handlers:
    - name:  run restorecon
      command: restorecon -v /var/ftp/uploads

Ansible Vault

  • For web keys, passwords, and other types of sensitive data that you really shouldn’t store as plain text in a playbook.
  • You can use Ansible Vault to encrypt and decrypt sensitive data to make it unreadable; a password is asked for only when the data is accessed, so that it can be decrypted.

  1. Sensitive data is stored as values of variables in a separate variable file.
  2. The variable file is encrypted, using the ansible-vault command.
  3. While accessing the variable file from a playbook, you enter a password to decrypt.

Managing Encrypted Files

ansible-vault create secret.yaml

  • Ansible Vault prompts for a password and then opens the file using the default editor.
  • The password can be provided in a password file, which must be really well protected (for example, by putting it in the root user’s home directory).
  • If a password file is used, the encrypted variable file can be created using ansible-vault create --vault-password-file=passfile secret.yaml

ansible-vault encrypt

  • encrypt one or more existing files.
  • The encrypted file can next be used from a playbook, where a password needs to be entered to decrypt.

ansible-vault decrypt

  • used to decrypt the file.

Commonly used ansible-vault commands:

  • create: creates a new encrypted file
  • encrypt: encrypts an existing file
  • encrypt_string: encrypts a string
  • decrypt: decrypts an existing file
  • rekey: changes the password on an existing file
  • view: shows the contents of an existing file
  • edit: edits an existing encrypted file

Using Vault in Playbooks

--vault-id @prompt

  • When a Vault-encrypted file is accessed from a playbook, a password must be entered.
  • Has the ansible-playbook command prompt for a password for each of the Vault-encrypted files that may be used
  • Enables a playbook to work with multiple Vault-encrypted files where these files are allowed to have different passwords set.

ansible-playbook --ask-vault-pass

  • Used if all Vault-encrypted files a playbook refers to have the same password set.

ansible-playbook --vault-password-file=secret

  • Obtain the Vault password from a password file.
  • Password file should contain a string that is stored as a single line in the file.
  • Make sure the vault password file is protected through file permissions, such that it is not accessible by unauthorized users!

Managing Files with Sensitive Variables

  • You should separate files containing unencrypted variables from files that contain encrypted variables.

  • Use group_vars and host_vars variable inclusion for this.

  • You may create a directory (instead of a file) with the name of the host or host group.

  • Within that directory you can create a file with the name vars, which contains unencrypted variables, and a file with the name vault, which contains Vault-encrypted variables.

  • Vault-encrypted variables can be included from a file using the vars_files parameter; an example layout is sketched below.
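
An illustrative layout (the group name webservers is an assumption):

group_vars/
  webservers/
    vars    <-- unencrypted variables
    vault   <-- Vault-encrypted variables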

Lab: Working with Ansible Vault

1. Create a secret file containing encrypted values for the variables username and pwhash by using ansible-vault create secrets.yaml

Set the password to password and enter the following lines:

username: bob
pwhash: password

When creating users, you cannot provide the password in plain text; it needs to be provided as a hashed value. Because this exercise focuses on the use of Vault, the password is not provided as a hashed value, and as a result, a warning is displayed. You may ignore this warning.

2. Create the file create-users.yaml and provide the following contents:

---
- name: create a user with vaulted variables
  hosts: ansible1
  vars_files:
    - secrets.yaml
  tasks:
  - name: creating user
    user:
      name: "{{ username }}"
      password: "{{ pwhash }}"

3. Run the playbook by using ansible-playbook --ask-vault-pass create-users.yaml

4. Change the current password on secrets.yaml by using ansible-vault rekey secrets.yaml and set the new password to secretpassword.

5. To automate the process of entering the password, use echo secretpassword > vault-pass

6. Use chmod 400 vault-pass to ensure the file is readable for the ansible user only; this is about as much as you can do to secure the file.

7. Verify that it’s working by using ansible-playbook --vault-password-file=vault-pass create-users.yaml

Ansible-inventory command

Inventory commands:

To view the inventory, specify the inventory file such as ~/base/inventory in the command line. You can name the inventory file anything you want. You can also set the default in the ansible.cfg file.

View the current inventory: ansible -i inventory <pattern> --list-hosts

List inventory hosts in JSON format: ansible-inventory -i inventory --list

Display overview of hosts as a graph: ansible-inventory -i inventory --graph

In our lab example:

[ansible@control base]$ pwd
/home/ansible/base

[ansible@control base]$ ls
inventory

[ansible@control base]$ cat inventory
ansible1
ansible2

[web]
web1
web2

[ansible@control base]$ ansible-inventory -i inventory --graph
@all:
  |--@ungrouped:
  |  |--ansible1
  |  |--ansible2
  |--@web:
  |  |--web1
  |  |--web2

[ansible@control base]$ ansible-inventory -i inventory --list
{
    "_meta": {
        "hostvars": {}
    },
    "all": {
        "children": [
            "ungrouped",
            "web"
        ]
    },
    "ungrouped": {
        "hosts": [
            "ansible1",
            "ansible2"
        ]
    },
    "web": {
        "hosts": [
            "web1",
            "web2"
        ]
    }
}

[ansible@control base]$ ansible -i inventory all --list-hosts
  hosts (4):
    ansible1
    ansible2
    web1
    web2
    
[ansible@control base]$ ansible -i inventory ungrouped --list-hosts
  hosts (2):
    ansible1
    ansible2

Using the ansible-inventory Command

  • default output of a dynamic inventory script is unformatted.
  • To show formatted JSON output of the scripts, you can use the ansible-inventory command.
  • Apart from the --list and --host options, this command also uses the --graph option to show a list of hosts, including the host groups they are a member of.
    [ansible@control rhce8-book]$ ansible-inventory -i listing101.py --graph
    [WARNING]: A duplicate localhost-like entry was found (localhost). First found
    localhost was 127.0.0.1
    @all:
      |--@ungrouped:
      |  |--127.0.0.1
      |  |--192.168.4.200
      |  |--192.168.4.201
      |  |--192.168.4.202
      |  |--ansible1
      |  |--ansible1.example.com
      |  |--ansible2
      |  |--ansible2.example.com
      |  |--control
      |  |--control.example.com
      |  |--localhost
      |  |--localhost.localdomain
      |  |--localhost4
      |  |--localhost4.localdomain4
      |  |--localhost6
      |  |--localhost6.localdomain6

Ansible.cfg

ansible.cfg

You can store this in a project’s directory, or in a user’s home directory in case multiple users want their own Ansible configuration, or in /etc/ansible if the configuration will be the same for every user and every project. You can also specify these settings in Ansible playbooks; the settings in a playbook take precedence over the .cfg file.

ansible.cfg precedence (Ansible uses the first one it finds and ignores the rest.)

  1. ANSIBLE_CONFIG environment variable
  2. ansible.cfg in current directory
  3. ~/.ansible.cfg
  4. /etc/ansible/ansible.cfg

Generate an example config file in the current directory. All directives are commented out by default: [ansible@control base]$ ansible-config init --disabled > ansible.cfg

Include existing plugins in the file: ansible-config init --disabled -t all > ansible.cfg

This generates an extremely large file. So I’ll just show Van Vugt’s example in .ini format:

[defaults] <-- General information
remote_user = ansible <-- Required
host_key_checking = false <-- Disable SSH host key validity check
inventory = inventory

[privilege_escalation] <-- Define how ansible user requires admin rights to connect to hosts
become = True <-- Escalation required
become_method = sudo
become_user = root <-- Escalated user
become_ask_pass = False <-- Do not ask for escalation password

Privilege escalation parameters can be specified in ansible.cfg, playbooks, and on the command line.
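
For example, the equivalent escalation can be requested for a single run on the command line; -b enables become, and -K prompts for the escalation password (only needed when become_ask_pass is not disabled):

[ansible@control base]$ ansible-playbook playbook.yaml -b -K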

Boot Process

Managing the Boot Process

There are no dedicated modules for managing the boot process.

file module

  • manage the systemd boot targets

lineinfile module

  • manage the GRUB configuration.

reboot module

  • enables you to reboot a host and pick up after the reboot at the exact same location.

Managing Systemd Targets

To manage the default systemd target:

  • /etc/systemd/system/default.target file must exist as a symbolic link to the desired default target.
ls -l /etc/systemd/system/default.target
    lrwxrwxrwx. 1 root root 37 Mar 23 05:33 /etc/systemd/system/default.target -> /lib/systemd/system/multi-user.target
---
- name: set default boot target
  hosts: ansible2
  tasks:
  - name: set boot target to graphical
    file:
      src: /usr/lib/systemd/system/graphical.target
      dest: /etc/systemd/system/default.target
      state: link

Rebooting Managed Hosts

reboot module.

  • Restart managed nodes.

test_command argument

  • Verify the renewed availability of the managed hosts
  • Specifies an arbitrary command that Ansible should run successfully on the managed hosts after the reboot. The success of this command indicates that the rebooted host is available again.

Equally useful while using the reboot module are the arguments that relate to timeouts. The reboot module uses no fewer than four of them:

connect_timeout: The maximum seconds to wait for a successful connection before trying again

post_reboot_delay: The number of seconds to wait after the reboot command before trying to validate the managed host is available again

pre_reboot_delay: The number of seconds to wait before actually issuing the reboot

reboot_timeout: The maximum seconds to wait for the rebooted machine to respond to the test command

When the rebooted host is back, the current playbook continues its tasks. This scenario is shown in the example in Listing 14-7, where first all managed hosts are rebooted, and after a successful reboot is issued, the message “successfully rebooted” is shown. Listing 14-8 shows the result of running this playbook. In Exercise 14-2 you can practice rebooting hosts using the reboot module.

Listing 14-7 Rebooting Managed Hosts

---
- name: reboot all hosts
  hosts: all
  gather_facts: no
  tasks:
  - name: reboot hosts
    reboot:
      msg: reboot initiated by Ansible
      test_command: whoami
  - name: print message to show host is back
    debug:
      msg: successfully rebooted

Listing 14-8 Verifying the Success of the reboot Module

[ansible@control rhce8-book]$ ansible-playbook listing147.yaml

PLAY [reboot all hosts] *************************************************************************************************

TASK [reboot hosts] *****************************************************************************************************
changed: [ansible2]
changed: [ansible1]
changed: [ansible3]
changed: [ansible4]
changed: [ansible5]

TASK [print message to show host is back] *******************************************************************************
ok: [ansible1] => {
    "msg": "successfully rebooted"
}
ok: [ansible2] => {
    "msg": "successfully rebooted"
}
ok: [ansible3] => {
    "msg": "successfully rebooted"
}
ok: [ansible4] => {
    "msg": "successfully rebooted"
}
ok: [ansible5] => {
    "msg": "successfully rebooted"
}

PLAY RECAP **************************************************************************************************************
ansible1                   : ok=2    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
ansible2                   : ok=2    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
ansible3                   : ok=2    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
ansible4                   : ok=2    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
ansible5                   : ok=2    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

:::

::: box Exercise 14-2 Managing Boot State

1. As a preparation for this playbook, so that it actually changes the default boot target on the managed host, use ansible ansible2 -m file -a "state=link src=/usr/lib/systemd/system/graphical.target dest=/etc/systemd/system/default.target".

2. Use your editor to create the file exercise142.yaml and write the following playbook header:

---
- name: set default boot target and reboot
  hosts: ansible2
  tasks:

3. Now you set the default boot target to multi-user.target. Add the following task to do so:

- name: set default boot target
  file:
    src: /usr/lib/systemd/system/multi-user.target
    dest: /etc/systemd/system/default.target
    state: link

4. Complete the playbook to reboot the managed hosts by including the following tasks:

- name: reboot hosts
  reboot:
    msg: reboot initiated by Ansible
    test_command: whoami
- name: print message to show host is back
  debug:
    msg: successfully rebooted

5. Run the playbook by using ansible-playbook exercise142.yaml.

6. Test that the reboot was issued successfully by using ansible ansible2 -a "systemctl get-default". :::

Building an Ansible lab with Ansible

When I started studying for the RHCE, the study guide had me manually set up virtual machines for the Ansible lab environment. I thought: why not start my automation journey right and automate them using Vagrant?

I use libvirt to manage KVM/QEMU virtual machines and the Virt-Manager app to set them up. I figured I could use Vagrant to build this lab automatically from a file, and I got part of the way there. I ended up with this Vagrantfile:

Vagrant.configure("2") do |config|
  config.vm.box = "almalinux/9"

  config.vm.provider :libvirt do |libvirt|
    libvirt.uri = "qemu:///system"
    libvirt.cpus = 2
    libvirt.memory = 2048
  end

  config.vm.define "control" do |control|
    control.vm.network "private_network", ip: "192.168.124.200"
    control.vm.hostname = "control.example.com"
  end

  config.vm.define "ansible1" do |ansible1|
    ansible1.vm.network "private_network", ip: "192.168.124.201"
    ansible1.vm.hostname = "ansible1.example.com"

  end

  config.vm.define "ansible2" do |ansible2|
    ansible2.vm.network "private_network", ip: "192.168.124.202"
    ansible2.vm.hostname = "ansible2.example.com"
  end

end

I could run this Vagrantfile to build and destroy the lab in seconds. But there was a problem: the libvirt plugin, or Vagrant itself (I'm not sure which), kept me from doing a couple of important things.

First, I could not specify the initial disk creation size. I could add additional disks of varying sizes but, if I wanted to change the size of the first disk, I would have to go back in after the fact and expand it manually…

Second, the Libvirt plugin networking settings were a bit confusing. When you add the private network option as seen in the Vagrant file, it would add this as a secondary connection, and route everything through a different public connection.

I couldn't get the VMs to run using the public connection for whatever reason, and it seems the only workaround was to make DHCP reservations for the guests' MAC addresses, which gave me even more problems to solve. But I won't go there.

So why not get my feet wet and learn how to deploy VMs with Ansible? This way, I would get the granularity and control that Ansible gives me, get some extra practice with Ansible, and avoid software with just enough abstraction to get in the way.

The guide I followed to set this up can be found on Red Hat's blog here. And it was pretty easy to set up, all things considered.

I’ll rehash the steps here:

  1. Download a cloud image
  2. Customize the image
  3. Install and start a VM
  4. Access the VM

Creating the role

Move to roles directory cd roles

Initialize the role ansible-galaxy role init kvm_provision

Switch into the role directory cd kvm_provision/

Remove unused directories rm -r files handlers vars

Define variables

Add default variables to main.yml cd defaults/ && vim main.yml

---
# defaults file for kvm_provision
base_image_name: AlmaLinux-9-GenericCloud-9.5-20241120.x86_64.qcow2
base_image_url: https://repo.almalinux.org/almalinux/9/cloud/x86_64/images/{{ base_image_name }}
base_image_sha: abddf01589d46c841f718cec239392924a03b34c4fe84929af5d543c50e37e37
libvirt_pool_dir: "/var/lib/libvirt/images"
vm_name: f34-dev
vm_vcpus: 2
vm_ram_mb: 2048
vm_net: default
vm_root_pass: test123
cleanup_tmp: no
ssh_key: /root/.ssh/id_rsa.pub
# Added option to configure ip address
ip_addr: 192.168.124.250
gw_addr: 192.168.124.1
# Added option to configure disk size
vm_disksize: 20

Defining a VM template

The community.libvirt.virt module is used to provision a KVM VM. This module uses a VM definition in XML format with libvirt syntax. You can dump a VM definition of a current VM and then convert it to a template from there. Or you can just use this:

cd templates/ && vim vm-template.xml.j2

<domain type='kvm'>
  <name>{{ vm_name }}</name>
  <memory unit='MiB'>{{ vm_ram_mb }}</memory>
  <vcpu placement='static'>{{ vm_vcpus }}</vcpu>
  <os>
    <type arch='x86_64' machine='pc-q35-5.2'>hvm</type>
    <boot dev='hd'/>
  </os>
  <cpu mode='host-model' check='none'/>
  <devices>
    <emulator>/usr/bin/qemu-system-x86_64</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='{{ libvirt_pool_dir }}/{{ vm_name }}.qcow2'/>
      <target dev='vda' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
      <!-- Added: Specify the disk size using a variable -->
      <size unit='GiB'>{{ vm_disksize }}</size>
    </disk>
    <interface type='network'>
      <source network='{{ vm_net }}'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
    </interface>
    <channel type='unix'>
      <target type='virtio' name='org.qemu.guest_agent.0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <channel type='spicevmc'>
      <target type='virtio' name='com.redhat.spice.0'/>
      <address type='virtio-serial' controller='0' bus='0' port='2'/>
    </channel>
    <input type='tablet' bus='usb'>
      <address type='usb' bus='0' port='1'/>
    </input>
    <input type='mouse' bus='ps2'/>
    <input type='keyboard' bus='ps2'/>
    <graphics type='spice' autoport='yes'>
      <listen type='address'/>
      <image compression='off'/>
    </graphics>
    <video>
      <model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='1' primary='yes'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
    </video>
    <memballoon model='virtio'>
      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
    </memballoon>
    <rng model='virtio'>
      <backend model='random'>/dev/urandom</backend>
      <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
    </rng>
  </devices>
</domain>

The template uses some of the variables from earlier. This allows the flexibility to change things by simply changing the variables.

Define tasks for the role to perform

cd ../tasks/ && vim main.yml

---
# tasks file for kvm_provision

# Ensure the required package dependencies `guestfs-tools` and `python3-libvirt`
# are installed. This role requires these packages to connect to `libvirt` and to
# customize the virtual image in a later step. These package names work on Fedora
# Linux. If you're using RHEL 8 or CentOS, use `libguestfs-tools` instead of
# `guestfs-tools`. For other distributions, adjust accordingly.

- name: Ensure requirements in place
  package:
    name:
      - guestfs-tools
      - python3-libvirt
    state: present
  become: yes

# Obtain a list of existing VMs so that you don't overwrite an existing VM by
# accident. Uses the `virt` module from the `community.libvirt` collection, which
# interacts with a running instance of KVM with `libvirt`. It obtains the list of
# VMs by specifying the parameter `command: list_vms` and saves the results in the
# variable `existing_vms`. `changed_when: no` ensures the task is not marked as
# changed in the playbook results: it doesn't change anything on the machine, it
# only checks the existing VMs. This is good practice when developing Ansible
# automation, to prevent false reports of changes.
- name: Get VMs list
  community.libvirt.virt:
    command: list_vms
  register: existing_vms
  changed_when: no

# Execute only when the VM name the user provides doesn't exist, and use the
# `get_url` module to download the base cloud image into the `/tmp` directory.
- name: Create VM if not exists
  block:
  - name: Download base image
    get_url:
      url: "{{ base_image_url }}"
      dest: "/tmp/{{ base_image_name }}"
      checksum: "sha256:{{ base_image_sha }}"
      
# Copy the file to libvirt's pool directory so we don't edit the original, which
# can be used to provision other VMs later.
  - name: Copy base image to libvirt directory
    copy:
      dest: "{{ libvirt_pool_dir }}/{{ vm_name }}.qcow2"
      src: "/tmp/{{ base_image_name }}"
      force: no
      remote_src: yes 
      mode: 0660
    register: copy_results
# Resize the VM disk
  - name: Resize VM disk
    command: qemu-img resize "{{ libvirt_pool_dir }}/{{ vm_name }}.qcow2" "{{ vm_disksize }}G"
    when: copy_results is changed

# Use the command module to run virt-customize to customize the image.
# (Added: a --firstboot-command option to configure an IP address.)
  - name: Configure the image
    command: |
      virt-customize -a {{ libvirt_pool_dir }}/{{ vm_name }}.qcow2 \
      --hostname {{ vm_name }} \
      --root-password password:{{ vm_root_pass }} \
      --ssh-inject 'root:file:{{ ssh_key }}' \
      --uninstall cloud-init --selinux-relabel \
      --firstboot-command "nmcli c m eth0 con-name eth0 ip4 {{ ip_addr }}/24 gw4 {{ gw_addr }} ipv4.method manual && nmcli c d eth0 && nmcli c u eth0"
    when: copy_results is changed

  - name: Define vm
    community.libvirt.virt:
      command: define
      xml: "{{ lookup('template', 'vm-template.xml.j2') }}"
    when: "vm_name not in existing_vms.list_vms"

- name: Ensure VM is started
  community.libvirt.virt:
    name: "{{ vm_name }}"
    state: running
  register: vm_start_results
  until: "vm_start_results is success"
  retries: 15
  delay: 2

- name: Ensure temporary file is deleted
  file:
    path: "/tmp/{{ base_image_name }}"
    state: absent
  when: cleanup_tmp | bool

Changed my user to own the libvirt directory: chown -R david:david /var/lib/libvirt/images

Create playbook kvm_provision.yaml

---
- name: Deploys VM based on cloud image
  hosts: localhost
  gather_facts: yes
  become: yes
  vars:
    pool_dir: "/var/lib/libvirt/images"
    vm: control
    vcpus: 2
    ram_mb: 2048
    cleanup: no
    net: default
    ssh_pub_key: "/home/davidt/.ssh/id_ed25519.pub"
    disksize: 20

  tasks:
    - name: KVM Provision role
      include_role:
        name: kvm_provision
      vars:
        libvirt_pool_dir: "{{ pool_dir }}"
        vm_name: "{{ vm }}"
        vm_vcpus: "{{ vcpus }}"
        vm_ram_mb: "{{ ram_mb }}"
        vm_net: "{{ net }}"
        cleanup_tmp: "{{ cleanup }}"
        ssh_key: "{{ ssh_pub_key }}"
        vm_disksize: "{{ disksize }}"

Add the libvirt collection ansible-galaxy collection install community.libvirt

Create a VM with a new name ansible-playbook -K kvm_provision.yaml -e vm=ansible1

Additional customizations I experimented with:

Configure the network connection with a --run-command option:

--run-command 'nmcli c a type Ethernet ifname eth0 con-name eth0 ip4 192.168.124.200 gw4 192.168.124.1'

Resize the root partition inside the guest:

parted /dev/vda resizepart 4 100%
Warning: Partition /dev/vda4 is being used. Are you sure you want to continue? Yes/No? y
Information: You may need to update /etc/fstab.

lsblk
NAME   MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
vda    252:0    0  20G  0 disk
├─vda2 252:2    0 200M  0 part /boot/efi
├─vda3 252:3    0   1G  0 part /boot
└─vda4 252:4    0 8.8G  0 part /

Variables added for user creation and networking: {{ ansible_user }}, {{ ansible_password }}, {{ gw_addr }}, {{ ip_addr }}

Draft of the first-boot user creation command that ended up in the task below:

useradd -m {{ ansible_user }} ; chage -d 0 {{ ansible_user }} ; echo {{ ansible_password }} | passwd --stdin {{ ansible_user }}

  - name: Configure the image
    command: |
      virt-customize -a {{ libvirt_pool_dir }}/{{ vm_name }}.qcow2 \
      --hostname {{ vm_name }} \
      --root-password password:{{ vm_root_pass }} \
      --uninstall cloud-init --selinux-relabel \
      --firstboot-command "nmcli c m eth0 con-name eth0 ip4 \
                           {{ ip_addr }}/24 gw4 {{ gw_addr }} \
                           ipv4.method manual && nmcli c d eth0 \
                           && nmcli c u eth0 && adduser \
                           {{ ansible_user }} && echo \
                           {{ ansible_password }} | passwd \
                           --stdin {{ ansible_user }}"
    when: copy_results is changed

  - name: Add ssh keys
    command: |
      virt-customize -a {{ libvirt_pool_dir }}/{{ vm_name }}.qcow2 \
      --ssh-inject '{{ ansible_user }}:file:{{ ssh_key }}'

Common modules with examples

uri: Interacts with basic HTTP and HTTPS web services. (Verify connectivity to a web server.)

Test httpd accessibility:

uri:
  url: http://ansible1

Show the result of the request while running the playbook:

uri:
  url: http://ansible1
  return_content: yes

Show the status code that signifies the success of the request:

uri:
  url: http://ansible1
  status_code: 200

debug: Prints statements during execution. Used for debugging variables or expressions without stopping a playbook.

Print out the value of the ansible_facts variable:

debug:
  var: ansible_facts

Deploying files

This chapter covers the following subjects:

• Using Modules to Manipulate Files
• Managing SELinux Properties
• Using Jinja2 Templates

RHCE exam topics

• Use Ansible modules for system administration tasks that work with:
  • File contents
• Use advanced Ansible features:
  • Create and use templates to create customized configuration files

Using Modules to Manipulate Files

File Module Manipulation Overview

Common modules to manipulate files:

copy

  • Copies files to remote locations

fetch

  • Fetches files from remote locations

file

  • Manage files and file properties
  • Create new files or directories
  • Create links
  • Remove files
  • Set permissions and ownership

acl

  • Work with file system ACLs

find

  • Find files based on properties

lineinfile

  • Manage lines in text files

blockinfile

  • Manage blocks in text files

replace

  • Replace strings in text files based on regex

synchronize

  • Perform rsync-based synchronization tasks

stat

  • Retrieves file or file system status
  • Gets status information and is not used to change anything
  • Use it to check a specific file and perform an action if the properties are not set as expected. Shows:
    • which permission mode is set
    • whether it is a link
    • which checksum is set on the file
    • etc.
  • See ansible-doc stat for the full list of output values

Lab: View information about /etc/hosts file

- name: stat module tests
  hosts: ansible1
  tasks:
  - stat:
      path: /etc/hosts
    register: st
  - name: show current values
    debug:
      msg: current value of the st variable is {{ st }}

Lab: write a message if the expected permission mode is not set.

---
- name: stat module test
  hosts: ansible1
  tasks:
  - command: touch /tmp/statfile
  - stat:
      path: /tmp/statfile
    register: st
  - name: show current values
    debug:
      msg: current value of the st variable is {{ st }}
  - fail:
      msg: "unexpected file mode, should be set to 0640"
    when: st.stat.mode != '0640'                               

Lab: Use the file Module to Correct File Properties Discovered with stat

---
- name: stat module tests
  hosts: ansible1
  tasks:
  - command: touch /tmp/statfile
  - stat:
      path: /tmp/statfile
    register: st
  - name: show current values
    debug:
      msg: current value of the st variable is {{ st }}
  - name: changing file permissions if that's needed
    file:
      path: /tmp/statfile
      mode: 0640
    when: st.stat.mode != '0640'

Managing File Contents

Use lineinfile or blockinfile instead of copy to manage text in a file

Lab: Change a string, based on a regular expression.

---
- name: configuring SSH
  hosts: all
  tasks:
  - name: disable root SSH login
    lineinfile:
      dest: /etc/ssh/sshd_config
      regexp: "^PermitRootLogin"
      line: "PermitRootLogin no"
    notify: restart sshd

  handlers:
  - name: restart sshd
    service:
      name: sshd
      state: restarted

Lab: Manipulate multiple lines

---
- name: modifying file
  hosts: all
  tasks:
  - name: ensure /tmp/hosts exists
    file:
      path: /tmp/hosts
      state: touch
  - name: add some lines to /tmp/hosts
    blockinfile:
      path: /tmp/hosts
      block: |
        192.168.4.110 host1.example.com
        192.168.4.120 host2.example.com
      state: present

When blockinfile is used, the text specified in the block is copied with a start and end indicator.

[ansible@ansible1 ~]$ cat /tmp/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.122.201 ansible1
192.168.122.202 ansible2
192.168.122.203 ansible3
# BEGIN ANSIBLE MANAGED BLOCK
192.168.4.110 host1.example.com
192.168.4.120 host2.example.com
# END ANSIBLE MANAGED BLOCK

Lab: Creating and Removing Files

Use the file module to create a new directory and in that directory create an empty file, then remove the directory recursively.

---
- name: using the file module
  hosts: ansible1
  tasks:
  - name: create directory
    file:
      path: /newdir
      owner: ansible
      group: ansible
      mode: 770
      state: directory
  - name: create file in that directory
    file:
      path: /newdir/newfile
      state: touch
  - name: show the new file
    stat:
      path: /newdir/newfile
    register: result
  - debug:
      msg: |
        This shows that newfile was created
        "{{ result }}"
  - name: removing everything again
    file:
      path: /newdir
      state: absent               
  • state: absent recursively removes the directory.

Moving Files Around

copy module copies a file from the Ansible control host to a managed machine.

fetch module enables you to do the opposite

synchronize module performs Linux rsync-like tasks, ensuring that a file from the control host is synchronized to a file with that name on the managed host.

The copy module transfers the entire file whenever the content differs, whereas the synchronize module uses rsync to transfer only the differences, which makes it more efficient for large files and directory trees (a sketch follows).
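
A minimal synchronize sketch (assuming rsync is installed on both the control and managed hosts; in newer Ansible releases the module lives in the ansible.posix collection):

- name: synchronize web content to the managed hosts
  synchronize:
    src: /srv/web/        # directory on the control host
    dest: /var/www/html/  # target directory on the managed host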

Lab: Moving a File Around with Ansible

---
- name: file copy modules
  hosts: all
  tasks:
  - name: copy file demo
    copy:
      src: /etc/hosts
      dest: /tmp/
  - name: add some lines to /tmp/hosts
    blockinfile:
      path: /tmp/hosts
      block: |
        192.168.4.110 host1.example.com
        192.168.4.120 host2.example.com
      state: present
  - name: verify file checksum
    stat:
      path: /tmp/hosts
      checksum_algorithm: md5
    register: result
  - debug:
      msg: "The checksum of /tmp/hosts is {{ result.stat.checksum }}"
  - name: fetch a file
    fetch:
      src: /tmp/hosts
      dest: /tmp/
  • Ansible creates a subdirectory on the control node for each managed host in the dest directory and puts the file that fetch has copied from the remote host in that subdirectory:
/tmp/ansible1/tmp/hosts
/tmp/ansible2/tmp/hosts

Lab: Managing Files with Ansible

1. Create a file with the name exercise81.yaml and give it the following play header:

2. Add a task that creates a new empty file:

3. Use the stat module to check on the status of the new file:

4. To see what the stat module is doing, add a line that uses the debug module:

5. Now that you understand which values are stored in newfile, you can add a conditional play that changes the current owner if not set correctly:

6. Add a second play to the playbook that fetches a remote file:

7. Now that you have fetched the file so that it is on the Ansible control machine, use blockinfile to edit it:

8. In the final step, copy the modified file to ansible2 by including the following play:

9. At this point you’re ready to run the playbook. Type ansible-playbook exercise81.yaml to run it and observe the results.

10. Type ansible ansible2 -a "cat /tmp/motd" to verify that the modified motd file was successfully copied to ansible2.

---
- name: testing file manipulation skills
  hosts: ansible1
  tasks:
    - name: create new file
      file:
        name: /tmp/newfile
        state: touch
    - name: check the status of the new file
      stat:
        path: /tmp/newfile
      register: newfile
    - name: for debugging only
      debug:
        msg: the current values for newfile are {{ newfile }}
    - name: change file owner if needed
      file:
        path: /tmp/newfile
        owner: ansible
      when: newfile.stat.pw_name != 'ansible'

- name: fetching a remote file
  hosts: ansible1
  tasks:
    - name: fetch file from remote machine
      fetch:
        src: /etc/motd
        dest: /tmp
        
- name: adding text to the text file that is now on localhost
  hosts: localhost
  tasks:
  - name: add a message
    blockinfile:
      path: /tmp/ansible1/etc/motd
      block: |
        welcome to this server
        for authorized users only
      state: present

- name: copy the modified file to ansible2
  hosts: ansible2
  tasks:
  - name: copy motd file
    copy:
      src: /tmp/ansible1/etc/motd
      dest: /tmp

Discovering storage related facts

Table 15-2 Modules for Managing Storage


To make sure that your playbook is applied to the right devices, you first need to find which devices are available on your managed system.

After you find them, you can use conditionals to make sure that tasks are executed on the right devices.

Ansible facts related to storage:

ansible_devices

  • Available storage and device info

ansible_device_links

  • Info on how to access storage and other device info

ansible_mounts

  • Mount point info

ansible ansible1 -m setup -a 'filter=ansible_devices'

  • Find generic information about storage devices.

  • The filter argument to the setup module uses a shell-style wildcard to search for matching items and for that reason can search in the highest level facts, such as ansible_devices, but it is incapable of further specifying what is searched for. For that reason, in the filter argument to the setup module, you cannot use a construction like ansible ansible1 -m setup -a "filter=ansible_devices.sda" which is common when looking up the variable in conditional statements.
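
Inside a playbook, after facts have been gathered, you can reference the nested key directly; a minimal sketch:

- name: show details for sda if it exists
  debug:
    var: ansible_facts['devices']['sda']
  when: ansible_facts['devices']['sda'] is defined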

Assert module

  • show an error message if a device does not exist and to perform a task if the device exists.
  • For an easier solution, you can also use a when statement to look for the existence of a device (see the sketch below).
  • The advantage of using the assert module is that an error message can be printed if the condition is not met.
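
For comparison, the when-based alternative as a minimal sketch (the task is simply skipped if the device is missing, with no custom failure message):

- name: continue only if the second disk exists
  debug:
    msg: second hard disk found
  when: ansible_facts['devices']['sdb'] is defined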

Listing 15-2 Using assert to Run a Task Only If a Device Exists

    ---
    - name: search for /dev/sdb continue only if it is found
      hosts: all
      vars:
        disk_name: sdb
      tasks:
      - name: abort if second disk does not exist
        assert:
          that:
            - "ansible_facts['devices']['{{ disk_name }}'] is defined"
          fail_msg: second hard disk not found
      - debug:
          msg: "{{ disk_name }} was found, lets continue"

Write a playbook that finds out the name of the disk device and puts that in a variable that you can work with further on in the playbook.

The set_fact argument comes in handy to do so.

You can use it in combination with a when conditional statement to store a detected device name in a variable.

Storing the Detected Disk Device Name in a Variable

    ---
    - name: define variable according to diskname detected
      hosts: all
      tasks:
      - name: Detect secondary disk name (sdb)
        ignore_errors: yes
        set_fact:
          disk2name: sdb
        when: ansible_facts['devices']['sdb'] is defined

      - name: Detect secondary disk name (vda)
        ignore_errors: yes
        set_fact:
          disk2name: vda
        when: ansible_facts['devices']['vda'] is defined

      - name: Search for second disk, continue only if it is found
        assert:
          that:
            - "ansible_facts['devices'][disk2name] is defined"
          fail_msg: second hard disk not found

      - name: Debug detected disk
        debug:
          msg: "{{ disk2name }} was found. Moving forward."

Next, see Managing Partitions and LVM

Dynamic Inventory

Dynamic inventory scripts

A script is used to detect inventory hosts so that you do not have to manually enter them. This is good for larger environments. You can find community provided dynamic inventory scripts that come with an .ini file that provides information on how to connect to a resource.

Inventory scripts must support --list and --host options, and their output must be JSON formatted. Here is an example from Sander van Vugt that generates an inventory from /etc/hosts:

[ansible@control base]$ cat inventory-helper.py

#!/usr/bin/python

from subprocess import Popen,PIPE
import sys

try:
     import json
except ImportError:
     import simplejson as json

result = {}
result['all'] = {}

pipe = Popen(['getent', 'hosts'], stdout=PIPE, universal_newlines=True)

result['all']['hosts'] = []

for line in pipe.stdout.readlines():
    s = line.split()
    result['all']['hosts']=result['all']['hosts']+s

result['all']['vars'] = {}

if len(sys.argv) == 2 and sys.argv[1] == '--list':
    print(json.dumps(result))

elif len(sys.argv) == 3 and sys.argv[1] == '--host':
    print(json.dumps({}))

else:
    print("Requires an argument, please use --list or --host <host>")

When ran on our sample lab:

[ansible@control base]$ sudo python3 ./inventory-helper.py
Requires an argument, please use --list or --host <host>

[ansible@control base]$ sudo python3 ./inventory-helper.py --list
{"all": {"hosts": ["127.0.0.1", "localhost", "localhost.localdomain", "localhost4", "localhost4.localdomain4", "127.0.0.1", "localhost", "localhost.localdomain", "localhost6", "localhost6.localdomain6", "192.168.124.201", "ansible1", "192.168.124.202", "ansible2"], "vars": {}}}

To use a dynamic inventory script:

[ansible@control base]$ chmod u+x inventory-helper.py 
[ansible@control base]$ sudo ansible -i inventory-helper.py all --list-hosts
[WARNING]: A duplicate localhost-like entry was found (localhost). First found localhost was 127.0.0.1
  hosts (11):
    127.0.0.1
    localhost
    localhost.localdomain
    localhost4
    localhost4.localdomain4
    localhost6
    localhost6.localdomain6
    192.168.124.201
    ansible1
    192.168.124.202
    ansible2

Configuring Dynamic Inventory

dynamic inventory

  • script that can be used to detect whether new hosts have been added to the managed environment.

  • Dynamic inventory scripts are provided by the community and exist for many different environments.

  • easy to write your own dynamic inventory script.

  • The main requirement is that the dynamic inventory script works with a --list and a --host <hostname> option and produces its output in JSON format.

  • Script must have the Linux execute permission set.

  • Many dynamic inventory scripts are written in Python, but this is not a requirement.

  • Writing dynamic inventory scripts is not an exam requirement

#!/usr/bin/python
    
from subprocess import Popen,PIPE
import sys
    
try:
     import json
except ImportError:
     import simplejson as json
    
result = {}
result['all'] = {}
    
pipe = Popen(['getent', 'hosts'], stdout=PIPE, universal_newlines=True)
    
result['all']['hosts'] = []
    
for line in pipe.stdout.readlines():
    s = line.split()
    result['all']['hosts']=result['all']['hosts']+s
    
result['all']['vars'] = {}
    
if len(sys.argv) == 2 and sys.argv[1] == '--list':
    print(json.dumps(result))
    
elif len(sys.argv) == 3 and sys.argv[1] == '--host':
    print(json.dumps({}))
    
else:
    print("Requires an argument, please use --list or --host <host>")

pipe = Popen(['getent', 'hosts'], stdout=PIPE, universal_newlines=True)

  • gets a list of hosts using the getent function.
  • This queries all hosts in /etc/hosts and other mechanisms where host name resolving is enabled.
  • To show the resulting host list, you can use the --list command
  • To show details for a specific host, you can use the option --host hostname.
    [ansible@control rhce8-book]$ ./listing101.py --list
    {"all": {"hosts": ["127.0.0.1", "localhost", "localhost.localdomain", "localhost4", "localhost4.localdomain4", "127.0.0.1", "localhost", "localhost.localdomain", "localhost6", "localhost6.localdomain6", "192.168.4.200", "control.example.com", "control", "192.168.4.201", "ansible1.example.com", "ansible1", "192.168.4.202", "ansible2.example.com", "ansible2"], "vars": {}}}
  • Dynamic inventory scripts are activated in the same way as regular inventory scripts: you use the -i option to either the ansible or the ansible-playbook command to pass the name of the inventory script as an argument.

External directory service can be based on a wide range of solutions:

  • FreeIPA

  • Active Directory

  • Red Hat Satellite

  • etc.

  • Scripts are also available for virtual machine-based infrastructures such as VMware or Red Hat Virtualization, where virtual machines can be discovered dynamically.

  • Can be found in cloud environments, where scripts are available for many solutions, including AWS, GCE, Azure, and OpenStack.

When you are working with dynamic inventory, additional parameters are normally required:

  • To get an inventory from an EC2 cloud environment, you need to enter your web keys.
  • To pass these parameters, many inventory scripts come with an additional configuration file that is formatted in .ini style.
  • The community-provided ec2.py script, for instance, comes with an ec2.ini parameter file.

Another feature that is seen in many inventory scripts is cache management:

  • Can use a cache to store names and parameters of recently discovered hosts.
  • If a cache is provided, options exist to manage the cache, allowing you, for instance, to make sure that the inventory information really is recently discovered.

Encrypted passwords

Managing Encrypted Passwords

When managing users in Ansible, you probably want to set user passwords as well. The challenge is that you cannot just enter a password as the value to the password: argument in the user module because the user module expects you to use an encrypted string.

Understanding Encrypted Passwords

When a user creates a password, it is encrypted. The hash of the encrypted password is stored in the /etc/shadow file, a file that is strictly secured and accessible only with root privileges. The string looks like $6$237687687/$9809erhb8oyw48oih290u09. In this string are three elements, which are separated by $ signs:

• The hashing algorithm that was used

• The random salt that was used to encrypt the password

• The encrypted hash of the user password

When a user sets a password, a random salt is used to prevent two users who have identical passwords from having identical entries in /etc/shadow. The salt and the unencrypted password are combined and encrypted, which generates the encrypted hash that is stored in /etc/shadow. Based on this string, the password that the user enters can be verified against the password field in /etc/shadow, and if it matches, the user is authenticated.

Generating Encrypted Passwords

When you’re creating users with the Ansible user module, there is a password option. This option is not capable of generating an encrypted password. It expects an encrypted password string as its input. That means an external utility must be used to generate an encrypted string. This encrypted string must be stored in a variable to create the password. Because the variable is basically the user password, the variable should be stored securely in, for example, an Ansible Vault secured file.

To generate the encrypted variable, you can choose to create the variable before creating the user account. Alternatively, you can run the command to create the variable in the playbook, use register to write the result to a variable, and use that to create the encrypted user. If you want to generate the variable beforehand, you can use the following ad hoc command:

ansible localhost -m debug -a "msg={{ 'password' | password_hash('sha512','myrandomsalt') }}"

This command generates the encrypted string as shown in Listing 13-11, and this string can next be used in a playbook. An example of such a playbook is shown in Listing 13-12.

Listing 13-11 Generating the Encrypted Password String

::: pre_1
[ansible@control ~]$ ansible localhost -m debug -a "msg={{ 'password' | password_hash('sha512','myrandomsalt') }}"
localhost | SUCCESS => {
    "msg": "$6$myrandomsalt$McEB.xAVUWe0./6XqZ8n/7k9VV/Gxndy9nIMLyQAiPnhyBoToMWbxX2vA4f.Uv9PKnPRaYUUc76AjLWVAX6U10"
}
:::

Listing 13-12 Sample Playbook That Creates an Encrypted User Password

    ---
    - name: create user with encrypted pass
      hosts: ansible2.example.com
      vars:
        password: "$6$myrandomsalt$McEB.xAVUWe0./6XqZ8n/7k9VV/Gxndy9nIMLyQAiPnhyBoToMWbxX2vA4f.Uv9PKnPRaYUUc76AjLWVAX6U10"
      tasks:
      - name: create the user
        user:
          name: anna
          password: "{{ password }}"

The method that is used here works but is not elegant. First, you need to generate the encrypted password manually beforehand. Also, the encrypted password string is used in a readable way in the playbook. By seeing the encrypted password and salt, it’s possible to get to the original password, which is why the password should not be visible in the playbook in a secure environment.

In Exercise 13-3 you create a playbook that prompts for the user password and that uses the debug module, which was used in Listing 13-11 inside the playbook, together with register, so that the password no longer is readable in clear text. Before looking at Exercise 13-3, though, let’s first look at an alternative approach that also works.

The procedure to use encrypted passwords while creating user accounts is documented in the Frequently Asked Questions from the Ansible documentation. Because the documentation is available on the exam, make sure you know where to find this information! Search for the item “How do I generate encrypted passwords for the user module?”

Using an Alternative Approach

As has been mentioned on multiple occasions, in Ansible often different solutions exist for the same problem. And sometimes, apart from the most elegant solution, there’s also a quick-and-dirty solution, and that counts for setting a user-encrypted password as well. Instead of using the solution described in the previous section, “Generating Encrypted Passwords,” you can use the Linux command echo password | passwd --stdin to set the user password. Listing 13-13 shows how to do this. Notice this example focuses on how to do it, not on security. If you want to make the playbook more secure, it would be nice to store the password in Ansible Vault.

Listing 13-13 Setting the User Password: Alternative Solution

    ---
    - name: create user with encrypted password
      hosts: ansible3
      vars:
        password: mypassword
        user: anna
      tasks:
      - name: configure user {{ user }}
        user:
          name: "{{ user }}"
          groups: wheel
          append: yes
          state: present
      - name: set a password for {{ user }}
        shell: 'echo {{ password }} | passwd --stdin {{ user }}'

::: box Exercise 13-3 Creating Users with Encrypted Passwords

1. Use your editor to create the file exercise133.yaml.

2. Write the play header as follows:

---
- name: create user with encrypted password
  hosts: ansible3
  vars_prompt:
  - name: passw
    prompt: which password do you want to use
  vars:
    user: sharon
  tasks:

3. Add the first task that uses the debug module to generate the encrypted password string and register to store the string in the variable mypass:

- debug:
    msg: "{{ passw | password_hash('sha512','myrandomsalt') }}"
  register: mypass

4. Add a debug module to analyze the exact format of the registered variable:

- debug:
    var: mypass

5. Use ansible-playbook exercise133.yaml to run the playbook the first time so that you can see the exact name of the variable that you have to use. This code shows that the mypass.msg variable contains the encrypted password string (see Listing 13-14).

Listing 13-14 Finding the Variable Name Using debug

::: pre_1

TASK [debug] *******************************************************************
ok: [ansible2] => {
    "mypass": {
        "changed": false,
        "failed": false,
        "msg": "$6$myrandomsalt$Jesm4QGoCGAny9ebP85apmh0/uUXrj0louYb03leLoOWSDy/imjVGmcODhrpIJZt0rz.GBp9pZYpfm0SU2/PO."
    }
}

:::

6. Based on the output that you saw with the previous command, you can now use the user module to refer to the password in the right way. Add the following task to do so:

- name: create the user
  user:
    name: "{{ user }}"
    password: "{{ mypass.msg }}"

7. Use ansible-playbook exercise133.yaml to run the playbook and verify its output. :::

Execution Environments

Why use EEs?

  • Portable Ansible environments
    • includes Ansible core version
    • All desired collections
    • Python dependencies
    • Bindep dependencies
    • Anything you need to run a playbook

A container that has a specific version of Ansible. Can test execution in a specific Ansible environment to make sure it will work with that version.

EEs are built leveraging ansible-builder. They can be pushed to a private automation hub or any other container registry. Run EEs from the CLI using ansible-navigator, or run them in your production environment using automation controller as part of the Ansible Automation Platform. If you want runs to occur automatically, schedule them as a job inside AAP.

Handlers

Using Handlers

  • A task that runs only when it is triggered (notified) by another task, typically one that has applied a change.

Working with Handlers

  • Define a notify statement at the level where the task is defined.
  • The notify statement should list the name of the handler that is to be executed
  • Handlers are listed at the end of the play.
  • Make sure the name of the handler matches the name of the item that is called in the notify statement, because that is what the handler is looking for.
  • Handlers can be specified as a list, so one task can call multiple handlers.

Lab

  • Define the file index.html on localhost. Use this file in the second play to set up the web server.
  • The handler is triggered from the task where the copy module is used to copy the index.html file.
  • If this task is successful, the notify statement calls the handler.
  • A second task is defined, which is intended to fail.
    ---
    - name: create file on localhost
      hosts: localhost
      tasks:
      - name: create index.html on localhost
        copy:
          content: "welcome to the webserver"
          dest: /tmp/index.html
    
    - name: set up web server
      hosts: all
      tasks:
        - name: install httpd
          yum:
            name: httpd
            state: latest
        - name: copy index.html
          copy:
            src: /tmp/index.html
            dest: /var/www/html/index.html
          notify:
            - restart_web
        - name: copy nothing - intended to fail
          copy:
            src: /tmp/nothing
            dest: /var/www/html/nothing.html
      handlers:
        - name: restart_web
          service:
            name: httpd
            state: restarted
  • All tasks up to copy index.html run successfully. However, the task copy nothing fails, which is why the handler does not run. The solution seems easy: the handler doesn’t run because the task that copies the file /tmp/nothing fails as the source file doesn’t exist.

  • Create the source file using touch /tmp/nothing on the control host and run the task again.

  • After creating the source file and running the playbook again, the handler still doesn’t run.

  • Handlers run only if the task that triggers them gives a changed status.

Run an ad hoc command to remove the /var/www/html/index.html file on the managed hosts and run the playbook again: ansible ansible2 -m file -a "name=/var/www/html/index.html state=absent"

Run the playbook again and you’ll see the handler runs.

Understanding Handler Execution and Exceptions

When a task fails, none of the following tasks run. How does that make handlers different? A handler runs only on the success of a task, but the next task in the list also runs only if the previous task was successful. What, then, is so special about handlers?

The difference is in the nature of the handler.

  • Handlers are meant to perform an extra action when a task makes a change to a host.
  • Handler should be considered an extension to the regular task.
  • A conditional task that runs only upon the success of a previous task.

Two methods to get Handlers to run even if a subsequent task fails:

force_handlers: true (More specific and preferred)

  • Used in the play header to ensure that the handler will run even if a task fails.

ignore_errors: true

  • Used in the play header to accomplish the same thing.

• Handlers are specified in a handlers section at the end of the play.
• Handlers run in the order they occur in the handlers section, not in the order in which they are triggered.
• Handlers run only if the task calling them generates a changed status.
• Handlers by default will not run if any task in the same play fails, unless force_handlers or ignore_errors is used.
• Handlers run only after all tasks in the play where the handler is activated have been processed. You might want to define multiple plays to avoid this behavior (see the sketch below).
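
As an alternative to splitting the play, Ansible core's meta module can flush pending handlers mid-play. A sketch (restart_web is assumed to be defined in the handlers section):

- name: copy index.html
  copy:
    src: /tmp/index.html
    dest: /var/www/html/index.html
  notify: restart_web
- name: run any pending handlers right now
  meta: flush_handlers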

Lab: Working with Handlers

1. Open a playbook with the name exercise73.yaml.

2. Define the play header:

---
- name: update the kernel
  hosts: all
  force_handlers: true
  tasks:

3. Add a task that updates the current kernel:

---
- name: update the kernel
  hosts: all
  force_handlers: true
  tasks:
  - name: update kernel
    yum:
      name: kernel
      state: latest
    notify: reboot_server

4. Add a handler that reboots the server in case the kernel was successfully updated:

---
- name: update the kernel
  hosts: all
  force_handlers: true
  tasks:
  - name: update kernel
    yum:
      name: kernel
      state: latest
    notify: reboot_server
  handlers:
  - name: reboot_server
    command: reboot

5. Run the playbook using ansible-playbook exercise73.yaml and observe its result. Notice that the handler runs only if the kernel was updated. If the kernel already was at the latest version, nothing has changed and the handler does not run. Also notice that it wasn’t really necessary to use force_handlers in the play header, but by using it anyway, at least you now know where to use it.

Dealing with Failures

Understanding Task Execution

  • Tasks in Ansible playbooks are executed in the order they are specified.
  • If a task in the playbook fails to execute on a host, the task generates an error and the play does not further execute on that specific host.
  • This also goes for handlers: if any task that follows the task that triggers a handler fails, the handlers do not run.
  • In both of these cases, it is important to know that the tasks that have run successfully still generate their result. Because this can give an unexpected result, it is important to always restore the original situation if that happens.

any_errors_fatal

  • Used in the play header or on a block.
  • Stop executing on all hosts when a failing task is encountered

Managing Task Errors

Generically, tasks can generate three different types of results:

ok

  • The task has run successfully but no changes were applied

changed

  • The task has run successfully and changes have been applied

failed

  • While running the task, a failure condition was encountered

ignore_errors: yes

  • Keep running the playbook even if a task fails

force_handlers

  • Can be used to ensure that handlers will be executed, even if a failing task was encountered.

Lab: ignore_errors

    ---
    - name: restart sshd only if crond is running
      hosts: all
      tasks:
        - name: get the crond server status
          command: /usr/bin/systemctl is-active crond
          ignore_errors: yes
          register: result
        - name: restart sshd based on crond status
          service:
            name: sshd
            state: restarted
          when: result.rc == 0

Lab: Forcing Handlers to Run

    ---
    - name: create file on localhost
      hosts: localhost
      tasks:
      - name: create index.html on localhost
        copy:
          content: "welcome to the webserver"
          dest: /tmp/index.html
    
    - name: set up web server
      hosts: all
      force_handlers: yes
      tasks:
        - name: install httpd
          yum:
            name: httpd
            state: latest
        - name: copy index.html
          copy:
            src: /tmp/index.html
            dest: /var/www/html/index.html
          notify:
            - restart_web
        - name: copy nothing - intended to fail
          copy:
            src: /tmp/nothing
            dest: /var/www/html/nothing.html
      handlers:
        - name: restart_web
          service:
            name: httpd
            state: restarted

Specifying Task Failure Conditions

failed_when

  • conditional used to evaluate some expression.
  • Set a failure condition on a task

Lab: failed_when

    ---
    - name: demonstrating failed_when
      hosts: all
      tasks:
      - name: run a script
        command: echo hello world
        ignore_errors: yes
        register: command_result
        failed_when: "'world' in command_result.stdout"
      - name: see if we get here
        debug:
          msg: second task executed

fail module

  • specify when a task fails.
  • Using this module makes sense only if when is used to define the exact condition when a failure should occur.

Lab: Using the fail Module

    ---
    - name: demonstrating the fail module
      hosts: all
      ignore_errors: yes
      tasks:
      - name: run a script
        command: echo hello world
        register: command_result
      - name: report a failure
        fail:
          msg: the command has failed
        when: "'world' in command_result.stdout"
      - name: see if we get here
        debug:
          msg: second task executed
  • The ignore_errors statement has moved from the task definition to the play header.
  • Without this move, the message “second task executed” would never be shown because the fail module always generates a failure message.
  • The main advantage of using the fail module instead of using failed_when is that the fail module can easily be used to set a clear failure message, which is not possible when using failed_when.

Managing Changed Status

In Ansible, there are commands that change something and commands that don’t. Some commands, however, are not very obvious in reporting their status.

Lab: Change status

    ---
    - name: demonstrate changed status
      hosts: all
      tasks:
      - name: check local time
        command: date
        register: command_result
    
      - name: print local time
        debug:
          var: command_result.stdout
  • Reports a changed status, even if nothing really was changed!

  • Managing the changed status can be useful in avoiding unexpected results while running a playbook.

changed_when

  • If you set changed_when to false, the playbook reports only an ok or failed status and never reports a changed status.

Lab: Using changed_when

---
- name: demonstrate changed status
  hosts: all
  tasks:
  - name: check local time
    command: date
    register: command_result
    changed_when: false
    
  - name: print local time
    debug:
      var: command_result.stdout

Using Blocks

  • Useful when working with conditional statements.
  • A group of tasks to which a when statement can be applied.
  • As a result, if a single condition is true, multiple tasks can be executed.
  • To do so, between the tasks: statement in the play header and the actual tasks that run the specific modules, you can insert a block: statement.

Lab: Using Blocks

---
- name: simple block example
  hosts: all
  tasks:
  - name: setting up http
    block:
    - name: installing http
      yum:
        name: httpd
        state: present
    - name: restart httpd
      service:
        name: httpd
        state: started
    when: ansible_distribution == "CentOS"
  • The when statement is applied at the same level as the block definition.
  • When you define it this way, the tasks in the block are executed only if the when statement is true.

Using Blocks with rescue and always Statements

  • Blocks can be used for simple error handling as well, in such a way that if any task that is defined in the block statement fails, the tasks that are defined in the rescue section are executed.
  • Besides that, an always section can be used to define tasks that should always run, regardless of the success or failure of the tasks in the block.

Lab: Using Blocks, rescue, and always

- name: using blocks
  hosts: all
  tasks: 
  - name: intended to be successful
    block:
    - name: remove a file
      shell:
        cmd: rm /var/www/html/index.html
    - name: printing status
      debug:
        msg: block task was operated
    rescue:
    - name: create a file
      shell:
        cmd: touch /tmp/rescuefile
    - name: printing rescue status
      debug:
        msg: rescue task was operated
    always:
    - name: always write a message to logs
      shell:
        cmd: logger hello
    - name: always printing this message
      debug:
        msg: this message is always printed
  • Run this play twice to see the rescue tasks: on the second run the file has already been removed, so the rm task in the block fails.

command_warnings=False

  • Setting in ansible.cfg to avoid seeing the command module warning message.

A note on blocks and loops:

  • You cannot use a block in a loop.
  • If you need to iterate over a list of values, use a different solution (see the sketch below).
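
One common workaround is to move the tasks of the would-be block into a separate task file and loop over include_tasks instead. A sketch (tasks/peritem.yaml is a hypothetical task file that can refer to {{ item }}):

- name: run a group of tasks once per item
  include_tasks: tasks/peritem.yaml
  loop:
  - one
  - two
  - three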

Host Name Patterns

Working with Host Name Patterns

  • If you want to use an IP address in a playbook, the IP address must be specified as such in the inventory.

  • You cannot use IP addresses that are based only on DNS name resolving.

  • So specifying an IP address in the playbook but not in the inventory file—assuming DNS name resolution is going to take care of the IP address resolving—doesn’t work.

  • apart from the specified groups, there are the implicit host groups all and ungrouped.

  • host name wildcards may be used.

    • ansible -m ping 'ansible*'
      • match all hosts that have a name starting with ansible.
      • Must put the pattern between single quotes or it will fail with a no matching hosts error.
    • Can be used at any place in the host name.
      • ansible -m ping '*ble1'
  • When you use wildcards to match host names, Ansible doesn’t distinguish between IP addresses, host names, or host groups; it just matches anything.

    • 'web*'
      • Matches all servers that are members of the group ‘webservers’, but also hosts ‘web1’ and ‘web2’.

To address multiple hosts:

  • You specify a comma-separated list of targets to address multiple hosts:
    • ansible -m ping ansible1,192.168.4.202
    • Can be a mix of host names, IP addresses, and host group names.

Operators:

  • Can specify a logical AND condition by including an ampersand (&), and a logical NOT by using an exclamation point (!).
    • web,&file applies to hosts only if they are members of the web and file groups
    • web,!webserver1 applies to all hosts in the web group, except host webserver1.
    • When you use the logical AND operator, the position of the ampersand doesn’t matter.
      • web,&file works the same as &web,file.
    • You can use a colon (:) instead of a comma (,), but using a comma is better to avoid confusion when using IPv6 addresses.
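
These operators also work in a play header. A sketch (the group names are placeholders):

---
- name: target hosts that are in both the web and file groups
  hosts: web,&file
  tasks:
  - name: check reachability
    ping: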

Including and importing Files

  • When content is included, it is dynamically processed at the moment that Ansible reaches that content.
  • If content is imported, Ansible performs the import operation before starting to work on the tasks in the playbook.

Files can be included and imported at different levels:

Roles: Roles are typically used to process a complete set of instructions provided by the role. Roles have a specific structure as well.

Playbooks: Playbooks can be imported as a complete playbook. You cannot do this from within a play. Playbooks can be imported only at the top level of the playbook.

Tasks: A task file is just a list of tasks and can be imported or included in another task.

Variables: As discussed in Chapter 6, “Working with Variables and Facts,” variables can be maintained in external files and included in a playbook. This makes managing generic multipurpose variables easier.
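
For example, variables kept in an external file can be pulled into a play with vars_files. A sketch (vars/web.yaml is a hypothetical file that defines the package variable):

---
- name: use variables from an external file
  hosts: all
  vars_files:
  - vars/web.yaml
  tasks:
  - debug:
      msg: installing {{ package }}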

Importing Playbooks

Importing playbooks is common in a setup where one master playbook is used, from which different additional playbooks are included. According to the Ansible Best Practices Guide (which is a part of the Ansible documentation), the master playbook could have the name site.yaml, and it can be used to include playbooks for each specific set of servers, for instance. When a playbook is imported, this replaces the entire play. So, you cannot import a playbook at a task level; it needs to happen at a play level. Listing 10-4 gives an example of the playbook imported in Listing 10-5. In Listing 10-6, you can see the result of running the ansible-playbook listing105.yaml command.

Listing 10-4 Sample Playbook to Be Imported

::: pre_1
- hosts: all
  tasks:
  - debug:
      msg: running the imported play
:::

Listing 10-5 Importing a Playbook

::: pre_1
---
- name: run a task
  hosts: all
  tasks:
  - debug:
      msg: running task1

- name: importing a playbook
  import_playbook: listing104.yaml

:::

Listing 10-6 Running ansible-playbook listing105.yaml Result

::: pre_1 [ansible@control rhce8-book]$ ansible-playbook listing105.yaml

PLAY [run a task] **************************************************************

TASK [Gathering Facts] *********************************************************
ok: [ansible2]
ok: [ansible1]
ok: [ansible3]
ok: [ansible4]

TASK [debug] *******************************************************************
ok: [ansible1] => {
    "msg": "running task1"
}
ok: [ansible2] => {
    "msg": "running task1"
}
ok: [ansible3] => {
    "msg": "running task1"
}
ok: [ansible4] => {
    "msg": "running task1"
}

PLAY [all] *********************************************************************

TASK [Gathering Facts] *********************************************************
ok: [ansible2]
ok: [ansible1]
ok: [ansible3]
ok: [ansible4]

TASK [debug] *******************************************************************
ok: [ansible1] => {
    "msg": "running the imported play"
}
ok: [ansible2] => {
    "msg": "running the imported play"
}
ok: [ansible3] => {
    "msg": "running the imported play"
}
ok: [ansible4] => {
    "msg": "running the imported play"
}

PLAY RECAP *********************************************************************
ansible1                   : ok=4    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
ansible2                   : ok=4    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
ansible3                   : ok=4    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
ansible4                   : ok=4    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

:::

Importing and Including Task Files

import_tasks

  • tasks are statically imported while executing the playbook.

include_tasks

  • tasks are dynamically included at the moment they are needed.
  • Dynamically including task files is recommended when the task file is used in a conditional statement.
  • If task files are mainly used to make development easier by working with separate task files, they can be statically imported.

There are a few considerations when working with import_tasks to statically import tasks:

• Loops cannot be used with import_tasks.

• If a variable is used to specify the name of the file to import, this cannot be a host or group inventory variable.

• When you use a when statement on an import_tasks task, the conditional is applied to each task that is imported.

As an alternative, include_tasks can be used to dynamically include a task file. This approach also comes with some considerations:

• When you use the ansible-playbook --list-tasks command, tasks in the included task file are not displayed.

• You cannot use ansible-playbook --start-at-task to start a playbook on a task that comes from an included task file.

• You cannot use a notify statement in the main playbook to trigger a handler that is in the included tasks file.
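In contrast to import_tasks, include_tasks does support loops. A minimal sketch (the task file names here are hypothetical):

    ---
    - name: loop over task files
      hosts: all
      tasks:
      - name: dynamically include each task file
        include_tasks: "{{ item }}"
        loop:
        - tasks/install.yaml
        - tasks/configure.yaml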

::: note


Tip

When you use includes and imports to work with task files, the recommendation is to store the task files in a separate directory. Doing so makes it easier to delegate task management to specific users.


:::

Using Variables When Importing and Including Files

The main goal of working with imported and included files is to make reusable code easy to manage. To make sure you reach this goal, the imported and included files should be as generic as possible. That means it's a bad idea to include names of specific items that may change when used in a different context. Think, for instance, of the names of packages, users, services, and more.

To deal with include files in a flexible way, you should define specific items as variables. Within the include_tasks file, for instance, you refer to {{ package }}, and in the main playbook from which the include files are called, you can define the variables. Obviously, you can use this approach with a straight variable definition or by using host variable or group variable include files.

::: note


Exam tip

It’s always possible to configure items in a way that is brilliant but quite complex. On the exam it’s not a smart idea to go for complex. Just keep your solution as easy as possible. The only requirement on the exam is to get things working, and it doesn’t matter exactly how you do that.


:::

In Listings 10-7 through 10-10, you can see how include and import files are used to work on one project. The main playbook, shown in Listing 10-9, defines the variables to be used, as well as the names of the include and import files. Listings 10-7 and 10-8 show the code from the include files, which use the variables that are defined in Listing 10-9. The result of running the playbook in Listing 10-9 can be seen in Listing 10-10.

Listing 10-7 The Include Tasks File tasks/service.yaml Used for Services Definition

::: pre_1
- name: install {{ package }}
  yum:
    name: "{{ package }}"
    state: latest
- name: start {{ service }}
  service:
    name: "{{ service }}"
    enabled: true
    state: started
:::

The sample tasks file in Listing 10-7 is straightforward; it uses the yum module to install a package and the service module to start and enable the corresponding service. The variables this file refers to are defined in the main playbook in Listing 10-9.

Listing 10-8 The Import Tasks File tasks/firewall.yaml Used for Firewall Definition

::: pre_1
- name: install the firewall
  package:
    name: "{{ firewall_package }}"
    state: latest
- name: start the firewall
  service:
    name: "{{ firewall_service }}"
    enabled: true
    state: started
- name: open the port for the service
  firewalld:
    service: "{{ item }}"
    immediate: true
    permanent: true
    state: enabled
  loop: "{{ firewall_rules }}"
:::

In the sample firewall file in Listing 10-8, the firewall service is installed, started, and configured. In the configuration of the firewalld service, a loop is used on the variable firewall_rules. This variable is defined in Listing 10-9, which is the file where site-specific contents such as variables are defined.

Listing 10-9 Main Playbook Example

::: pre_1
---
- name: setup a service
  hosts: ansible2
  tasks:
  - name: include the services task file
    include_tasks: tasks/service.yaml
    vars:
      package: httpd
      service: httpd
    when: ansible_facts['os_family'] == 'RedHat'
  - name: import the firewall file
    import_tasks: tasks/firewall.yaml
    vars:
      firewall_package: firewalld
      firewall_service: firewalld
      firewall_rules:
      - http
      - https
:::

The main playbook in Listing 10-9 shows the site-specific configuration. It performs two main tasks: it defines variables, and it calls an include file and an import file. The variables that are defined are used by the include and import files. The include_tasks task is executed with a when statement as its condition. Notice that the firewall_rules variable contains a list as its value, which is used by the loop that is defined in the import file.

Listing 10-10 Running ansible-playbook listing109.yaml

::: pre_1 [ansible@control rhce8-book]$ ansible-playbook listing109.yaml

PLAY [setup a service] *********************************************************

TASK [Gathering Facts] *********************************************************
ok: [ansible2]

TASK [include the services task file] ******************************************
included: /home/ansible/rhce8-book/tasks/service.yaml for ansible2

TASK [install httpd] ***********************************************************
ok: [ansible2]

TASK [start httpd] *************************************************************
changed: [ansible2]

TASK [install the firewall] ****************************************************
changed: [ansible2]

TASK [start the firewall] ******************************************************
ok: [ansible2]

TASK [open the port for the service] *******************************************
changed: [ansible2] => (item=http)
changed: [ansible2] => (item=https)

PLAY RECAP *********************************************************************
ansible2                   : ok=7    changed=3    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

:::

The interesting thing in the Listing 10-10 output is that the include file is dynamically included while running the playbook. This is not the case for the statically imported file. In Exercise 10-3 you practice working with include files.

::: box Exercise 10-3 Using Includes and Imports

In this exercise you create a simple master playbook that installs a service. The name of the service is defined in a variable file, and the specific tasks are included through task files.

1. Open the file exercise103-vars.yaml and define three variables as follows:

packagename: vsftpd
servicename: vsftpd
firewalld_servicename: ftp

2. Create the exercise103-ftp.yaml file and give it the following contents to install, enable, and start the vsftpd service and also to make it accessible in the firewall:

- name: install {{ packagename }}
  yum:
    name: "{{ packagename }}"
    state: latest
- name: enable and start {{ servicename }}
  service:
    name: "{{ servicename }}"
    state: started
    enabled: true
- name: open the service in the firewall
  firewalld:
    service: "{{ firewalld_servicename }}"
    permanent: yes
    state: enabled

3. Create the exercise103-copy.yaml file that manages the /var/ftp/pub/README file and make sure it has the following contents:

- name: copy a file
  copy:
    content: "welcome to this server"
    dest: /var/ftp/pub/README

4. Create the master playbook exercise103.yaml that includes all of them and give it the following contents:

---
- name: install vsftpd on ansible2
  vars_files: exercise103-vars.yaml
  hosts: ansible2
  tasks:
  - name: install and enable vsftpd
    import_tasks: exercise103-ftp.yaml
  - name: copy the README file
    import_tasks: exercise103-copy.yaml

5. Run the playbook and verify its output.

6. Run an ad hoc command to verify the /var/ftp/pub/README file has been created: ansible ansible2 -a "cat /var/ftp/pub/README". :::

End-of-Chapter Lab

In the end-of-chapter lab with this chapter, you reorganize a playbook to work with several different files instead of one big file. Do this according to the instructions in Lab 10-1.

Lab 10-1

The lab82.yaml file, which you can find in the GitHub repository that goes with this course, is an optimal candidate for optimization. Optimize this playbook according to the following requirements:

• Use includes and imports to make this a modular playbook where different files are used to distinguish between the different tasks.

• Optimize this playbook such that it will run on no more than two hosts at the same time and completes the entire playbook on these two hosts before continuing with the next host.

Jinja2 templates

Using Jinja2 Templates

  • A template is a configuration file that contains variables and, based on the variables, is generated on the managed hosts according to host-specific requirements.
  • Using templates allows for a structural way to generate configuration files, which is much more powerful than changing specific lines from specific files.
  • Ansible uses Jinja2 to generate templates.
  • Jinja2 is a generic templating language for Python developers.
  • It is used in Ansible templates, but Jinja2-based approaches are also found in other parts of Ansible. For instance, the way variables are referred to is based on Jinja2.

In a Jinja2 template, three main elements can be used, as well as comments:

  • data: plain text, such as sample text, which is copied literally
  • comment: {# sample text #}
  • variable: {{ ansible_facts['default_ipv4']['address'] }}
  • expression, for example:

{% for myhost in groups['web'] %}
{{ myhost }}
{% endfor %}
  • To work with a template, you must create a template file, written in Jinja2.
  • The template file must be called from an Ansible playbook that uses the template module.

Sample Template:

# {{ ansible_managed }}

<VirtualHost *:80>
        ServerAdmin webmaster@{{ ansible_facts['fqdn'] }}
        ServerName {{ ansible_facts['fqdn'] }}
        ErrorLog logs/{{ ansible_facts['hostname'] }}-error.log
        CustomLog       logs/{{ ansible_facts['hostname'] }}-common.log common
        DocumentRoot /var/www/vhosts/{{ ansible_facts['hostname'] }}/

        <Directory /var/www/vhosts/{{ ansible_facts['hostname'] }}>
                Options +Indexes +FollowSymlinks +Includes
                Require all granted
        </Directory>
</VirtualHost>
  • starts with # {{ ansible_managed }}.

  • This string is commonly used to identify that a file is managed by Ansible so that administrators are not going to change file contents by accident.

  • While processing the template, this string is replaced with the value of the ansible_managed variable.

  • This variable can be set in ansible.cfg.

  • For instance, you can use ansible_managed = This file is managed by Ansible to substitute the variable with its value while generating the template (see the sketch after this list).

  • A template file is just a text file in which the variables it contains are replaced with their values when the template is processed.
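For example, a minimal ansible.cfg sketch:

    [defaults]
    ansible_managed = This file is managed by Ansible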

Calling a template:

    ---
    - name: installing a template file
      hosts: ansible1
      tasks:
      - name: install httpd
        yum:
          name: httpd
          state: latest
      - name: start and enable httpd
        service:
          name: httpd
          state: started
          enabled: true
      - name: install vhost config file
        template:
          src: listing813.j2
          dest: /etc/httpd/conf.d/vhost.conf
          owner: root
          group: root
          mode: 0644
      - name: restart httpd
        service:
          name: httpd
          state: restarted

Applying Control Structures in Jinja2 Using for

  • Control structures can be used to dynamically generate contents.
  • A for statement can be used to iterate over all elements that exist as the value of a variable.
{% for node in groups['all'] %}
host_port={{ node }}:8080
{% endfor %}
  • A line defining host_port is generated from the second line (which is the line that will be written to the target file).
  • To produce its value, the host group all is processed in the for statement on the first line.
  • While processing the host group, a temporary variable with the name node is defined.
  • The value of the node variable is replaced with the name of each host while it is processed, and after the host name, the string :8080 is appended, which results in a separate line for each host that was found.
  • As the last element, {% endfor %} is used to close the for loop.
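Assuming an inventory that contains just the hosts ansible1 and ansible2, the generated file would look like this:

    host_port=ansible1:8080
    host_port=ansible2:8080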

LAB: Generating a Template with a Conditional Statement

    ---
    - name: generate host list
      hosts: ansible2
      tasks:
      - name: template loop
        template:
          src: listing815.j2
          dest: /tmp/hostports.txt

To verify, you can use the ad hoc command ansible ansible2 -a "cat /tmp/hostports.txt"
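The template listing815.j2 is not shown here; based on the preceding section, it presumably contains the host_port loop, along these lines:

    {% for node in groups['all'] %}
    host_port={{ node }}:8080
    {% endfor %}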

Using Conditional Statements with if

  • The for statement can be used in templates to iterate over a series of values.
  • The if statement can be used to include text only if a variable contains a specific value or evaluates to a Boolean true.

Template Example with if if.j2

    {% if apache_package == 'apache2' %}
      Welcome to Apache2
    {% else %}
      Welcome to httpd
    {% endif %}
---
- name: work with template file
  vars:
    apache_package: 'httpd'
  hosts: ansible2
  tasks:
  - template:
      src: if.j2
      dest: /tmp/httpd.conf
[ansible@control ~]$ ansible ansible2 -a "cat /tmp/httpd.conf"
ansible2 | CHANGED | rc=0 >>
  Welcome to httpd

Using Filters

  • In Jinja2 templates, you can use filters.
  • Filters are a way to perform an operation on the value of a template expression, such as a variable.
  • The filter is included in the variable definition itself, and the result of the variable and its filter is used in the file that is generated.

Common filters:

{{ myvar | to_json }}
  • writes the contents of myvar in JSON format

{{ myvar | to_yaml }}
  • writes the contents of myvar in YAML format

{{ myvar | ipaddr }}
  • tests whether myvar contains an IP address
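A minimal sketch of applying a filter in a playbook (the variable name and contents are made up for illustration):

    ---
    - name: demonstrate a filter
      hosts: localhost
      vars:
        myvar:
          color: blue
          size: 10
      tasks:
      - debug:
          msg: "{{ myvar | to_json }}"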

From https://docs.ansible.com:

How do I loop over a list of hosts in a group, inside of a template?

A pretty common pattern is to iterate over a list of hosts inside of a host group, perhaps to populate a template configuration file with a list of servers. To do this, you can just access the “$groups” dictionary in your template, like this:

{% for host in groups['db_servers'] %}
    {{ host }}
{% endfor %}

If you need to access facts about these hosts, for example, the IP address of each hostname, you need to make sure that the facts have been populated. For example, make sure you have a play that talks to db_servers:

- hosts:  db_servers
  tasks:
    - debug: msg="doesn't matter what you do, just that they were talked to previously."

Then you can use the facts inside your template, like this:

{% for host in groups['db_servers'] %}
   {{ hostvars[host]['ansible_eth0']['ipv4']['address'] }}
{% endfor %}

Lab: Working with Conditional Statements in Templates

1. Use your editor to create the file exercise83.j2. Include the following line to open the Jinja2 for statement:

{% for host in groups['all'] %}

2. This statement defines a variable with the name host. This variable iterates over the magic variable groups, which holds all Ansible host groups as defined in inventory. Of these groups, the all group (which holds all inventory host names) is processed.

3. Add the following line (write it as one line; it will wrap over two lines, but do not press Enter to insert a newline character):

{{ hostvars[host]['ansible_default_ipv4']['address'] }} {{ hostvars[host]['ansible_fqdn'] }} {{ hostvars[host]['ansible_hostname'] }}
  • This line writes a single line for each inventory host, containing three items.
  • To do so, you use the magic variable hostvars, which can be used to identify Ansible facts that were discovered on the inventory host.
  • The [host] part is replaced with the name of the current host, and after that, the specific facts are referred to. As a result, for each host a line is produced that holds the IP address, the FQDN, and next the host name.

4. Add the following line to close the for loop:

{% endfor %}

5. Verify that the complete file contents look like the following and write and quit the file:

{% for host in groups['all'] %}
{{ hostvars[host]['ansible_default_ipv4']['address'] }} {{ hostvars[host]['ansible_fqdn'] }} {{ hostvars[host]['ansible_hostname'] }}
{% endfor %}

6. Use your editor to create the file exercise83.yaml. It should contain the following lines:

---
- name: generate /etc/hosts file
  hosts: all
  tasks:
  - name: generate the hosts file
    template:
      src: exercise83.j2
      dest: /tmp/hosts

7. Run the playbook by using ansible-playbook exercise83.yaml

8. Verify the /tmp/hosts file was generated by using ansible all -a "cat /tmp/hosts"

This lab only works if every host in the inventory is reachable, because the template relies on facts gathered from each host.
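If some inventory hosts may be unreachable, one defensive variant of the template (a sketch, not part of the original lab) skips hosts for which no facts were gathered:

    {% for host in groups['all'] %}
    {% if hostvars[host]['ansible_default_ipv4'] is defined %}
    {{ hostvars[host]['ansible_default_ipv4']['address'] }} {{ hostvars[host]['ansible_fqdn'] }} {{ hostvars[host]['ansible_hostname'] }}
    {% endif %}
    {% endfor %}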

Lab: Generate an /etc/hosts File

Write a playbook that generates an /etc/hosts file on all managed hosts. Apply the following requirements:

• All hosts that are defined in inventory should be added to the /etc/hosts file.

[ansible@control ~]$ cat hostfile.yaml
---
- name: generate /etc/hosts
  hosts: all
  gather_facts: yes
  tasks: 
  - name: Generate hosts file with template
    template: 
      src: hosts.j2
      dest: /etc/hosts
[ansible@control ~]$ cat hosts.j2
{% for host in groups['all'] %}
{{ hostvars[host]['ansible_default_ipv4']['address'] }} {{ hostvars[host]['ansible_fqdn'] }} {{ hostvars[host]['ansible_hostname'] }}
{% endfor %}}

Lab: Manage a vsftpd Service

  • Write a playbook that uses at least two plays to install a vsftpd service
  • configure the vsftpd service using templates
  • configure permissions as well as SELinux.
  • Install, start, and enable the vsftpd service.
  • open a port in the firewall to make it accessible.
  • Use the /etc/vsftpd/vsftpd.conf file to generate a template.
  • In this template, you should use the following variables to configure specific settings.
  • Replace these settings with the variables and leave all else unmodified:
anonymous_enable: yes
local_enable: yes
write_enable: yes
anon_upload_enable: yes
  • Set permissions on the /var/ftp/pub directory to mode 0777.
  • Configure the ftpd_anon_write Boolean to allow anonymous user writes.
  • Set the public_content_rw_t SELinux context type to the /var/ftp/pub directory.
  • If any additional tasks are required to get this done, take care of them.

vim vsftpd.yaml

---
- name: manage vsftpd
  hosts: ansible1
  vars:
    # quoted strings so the template renders YES (not the YAML boolean True)
    anonymous_enable: "YES"
    local_enable: "YES"
    write_enable: "YES"
    anon_upload_enable: "YES"
  tasks:
    - name: install vsftpd
      dnf:
        name: vsftpd
        state: latest

    - name: configure vsftpd configuration file
      template:
        src: vsftpd.j2
        dest: /etc/vsftpd/vsftpd.conf

- name: apply permissions
  hosts: ansible1
  tasks:
    - name: set folder permissions to /var/ftp/pub
      file:
        path: /var/ftp/pub
        mode: 0777

    - name: set ftpd_anon_write boolean
      seboolean:
        name: ftpd_anon_write
        state: yes
        persistent: yes

    - name: set public_content_rw_t SELinux context type to /var/ftp/pub directory
      sefcontext:
        target: '/var/ftp/pub(/.*)?'
        setype: public_content_rw_t
        state: present
      notify: restore selinux contexts

    - name: firewall stuff
      firewalld:
        service: ftp
        state: enabled
        permanent: true
        immediate: true

    - name: start and enable vsftpd
      service: 
        name: vsftpd
        state: started
        enabled: yes

  handlers:
    - name: restore selinux contexts
      command: restorecon -v /var/ftp/pub

vsftpd.j2

# {{ ansible_managed }}

anonymous_enable={{ anonymous_enable }}
local_enable={{ local_enable }}
write_enable={{ write_enable }}
anon_upload_enable={{ anon_upload_enable }}
local_umask=022
dirmessage_enable=YES
xferlog_enable=YES
connect_from_port_20=YES
xferlog_std_format=YES
listen=NO
listen_ipv6=YES
pam_service_name=vsftpd
userlist_enable=YES

Managing Ansible Errors and Logs

Managing Ansible Errors and Logs

Using Check Mode

Before actually running a playbook in a way that all changes are implemented, you can run the playbook in check mode. To do this, you use the --check or -C command-line argument to the ansible or ansible-playbook command. The effect of using check mode is that changes that would have been made are shown but not executed. You should realize, though, that check mode is not supported in all cases. You will, for instance, have problems with check mode if it is applied to conditionals, where a specific task can do its work only after a preceding task has made some changes. Also, to successfully use check mode, the modules need to support it, but some don't. Modules that don't support check mode don't show any result while running in check mode, but they also don't make any changes.

Apart from the command-line argument, you can use check_mode: yes or check_mode: no with any task in a playbook. If check_mode: yes is used, the task always runs in check mode (and does not implement any changes), regardless of the use of the --check option. If a task has check_mode: no set, it never runs in check mode and just does its work, even if the ansible-playbook command is used with the --check option. Using check mode on individual tasks might be a good idea if using check mode on the entire playbook gives unpredicted results: you can enable it on just a couple of tasks to ensure that they run successfully before proceeding to the next set of tasks. Notice that using check_mode: no for specific tasks can be dangerous; these tasks will make changes, even if the entire playbook was started with the --check option!
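For example, a minimal sketch of pinning check mode on a single task:

    - name: always runs in check mode and never makes changes
      yum:
        name: httpd
        state: latest
      check_mode: yes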

::: note


Note

The check_mode argument is a replacement for the always_run option that was used in Ansible 2.5 and earlier. In current Ansible versions, you should not use always_run anymore.

:::

Another option that is commonly used with the --check option is --diff. This option reports changes to template files without actually applying them. Listing 11-1 shows a sample playbook, Listing 11-2 shows the template that it is processing, and Listing 11-3 shows the result of running this playbook with the ansible-playbook listing111.yaml --check --diff command.

**Listing 11-1** Sample Playbook Using the template Module

::: pre_1
    ---
    - name: simple template example
      hosts: ansible2
      tasks:
      - template:
          src: listing112.j2
          dest: /etc/issue
:::

**Listing 11-2** Sample Template File

::: pre_1
    {# /etc/issue #}
    Welcome to {{ ansible_facts['hostname'] }}
:::

**Listing 11-3** Running the listing111.yaml Sample Playbook

::: pre_1
    [ansible@control rhce8-book]$ ansible-playbook listing111.yaml --check --diff
    
    PLAY [simple template example] *************************************************
    
    TASK [Gathering Facts] *********************************************************
    ok: [ansible2]
    
    TASK [template] ****************************************************************
    --- before
    +++ after: /home/ansible/.ansible/tmp/ansible-local-4493uxbpju1e/tmpm5gn7crg/listing112.j2
    @@ -0,0 +1,3 @@
    +Welcome to ansible2
    +
    +
    
    changed: [ansible2]
    
    PLAY RECAP *********************************************************************
    ansible2                   : ok=2    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

Understanding Output

When you run the ansible-playbook command, output is generated. You’ve probably had a glimpse of it before, but let’s look at the output in a more structured way now. Listing 11-4 shows some typical sample output generated by running the ansible-playbook command.

Listing 11-4 ansible-playbook Command Output

::: pre_1 [ansible@control rhce8-book]$ ansible-playbook listing52.yaml

PLAY [install start and enable httpd] ******************************************

TASK [Gathering Facts] *********************************************************
ok: [ansible2]
ok: [ansible1]
ok: [ansible3]
ok: [ansible4]

TASK [install package] *********************************************************
changed: [ansible2]
changed: [ansible1]
changed: [ansible3]
changed: [ansible4]

TASK [start and enable service] ************************************************
changed: [ansible2]
changed: [ansible1]
changed: [ansible3]
changed: [ansible4]

PLAY RECAP *********************************************************************
ansible1                   : ok=3    changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
ansible2                   : ok=3    changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
ansible3                   : ok=3    changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
ansible4                   : ok=3    changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

:::

In the output of any ansible-playbook command, you can see different items:


• An indicator of the play that is started

• If not disabled, the Gathering Facts task that is executed for each play

• Each individual task, including the task name if that was specified

• The Play Recap, which summarizes the play results

In the Play Recap, different results can be shown. Table 11-2 gives an overview.

::: group Table 11-2 Playbook Recap Overview

(table image not included; the recap fields ok, changed, unreachable, failed, skipped, rescued, and ignored appear in the PLAY RECAP lines shown in this chapter) :::

As discussed before, when you use the ansible-playbook command, you can increase the output verbosity level using one or more -v options. Table 11-3 lists what these options accomplish. For generic troubleshooting, you might want to consider using -vv, which shows output as well as input data. In particular cases using the -vvv option can be useful because it adds connection information as well.

The -vvvv option just brings too much information in many cases but can be useful if you need to analyze which exact scripts are executed or whether any problems were encountered in privilege escalation. Make sure to capture the output of any command that runs with -vvvv to a text file, though, so that you can read it in an easy way. Even for a simple playbook, it can easily generate more than 10 screens of output.

::: group Table 11-3 Verbosity Options Overview

(table image not included; the -v through -vvvv verbosity levels are described in the surrounding text) :::

In Listing 11-5 you can see partial output of a small playbook that runs different tasks on the managed hosts. The listing shows details about execution of one task on host ansible4, and as you can see, it goes into great detail. One component is worth looking at, and that is the Escalation succeeded message in the output. This means that privilege escalation was successful and tasks were executed with the become_user that was defined in ansible.cfg. Failed privilege escalation is one of the common reasons why playbook execution may go wrong, which is why it's worth keeping an eye on this indicator.

Listing 11-5 Analyzing Partial -vvvv Output

    <ansible4> ESTABLISH SSH CONNECTION FOR USER: ansible
    <ansible4> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ansible"' -o ConnectTimeout=10 -o ControlPath=/home/ansible/.ansible/cp/859d5267e3 ansible4 '/bin/sh -c '"'"'chmod u+x /home/ansible/.ansible/tmp/ansible-tmp-1587544652.4716983-118789810824208/ /home/ansible/.ansible/tmp/ansible-tmp-1587544652.4716983-118789810824208/AnsiballZ_systemd.py && sleep 0'"'"''
    Escalation succeeded
    <ansible4> (0, b'', b"OpenSSH_8.0p1, OpenSSL 1.1.1c FIPS  28 May 2019\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug3: /etc/ssh/ssh_config line 51: Including file /etc/ssh/ssh_config.d/05-redhat.conf depth 0\r\ndebug1: Reading configuration data /etc/ssh/ssh_config.d/05-redhat.conf\r\ndebug2: checking match for 'final all' host ansible4 originally ansible4\r\ndebug3: /etc/ssh/ssh_config.d/05-redhat.conf line 3: not matched 'final'\r\ndebug2: match not found\r\ndebug3: /etc/ssh/ssh_config.d/05-redhat.conf line 5: Including file /etc/crypto-policies/back-ends/openssh.config depth 1 (parse only)\r\ndebug1: Reading configuration data /etc/crypto-policies/back-ends/openssh.config\r\ndebug3: gss kex names ok: [gss-gex-sha1-,gss-group14-sha1-]\r\ndebug3: kex names ok: [curve25519-sha256,curve25519-sha256@libssh.org,ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group14-sha256,diffie-hellman-group16-sha512,diffie-hellman-group18-sha512,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1]\r\ndebug1: configuration requests final Match pass\r\ndebug1: re-parsing configuration\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug3: /etc/ssh/ssh_config line 51: Including file /etc/ssh/ssh_config.d/05-redhat.conf depth 0\r\ndebug1: Reading configuration data /etc/ssh/ssh_config.d/05-redhat.conf\r\ndebug2: checking match for 'final all' host ansible4 originally ansible4\r\ndebug3: /etc/ssh/ssh_config.d/05-redhat.conf line 3: matched 'final'\r\ndebug2: match found\r\ndebug3: /etc/ssh/ssh_config.d/05-redhat.conf line 5: Including file /etc/crypto-policies/back-ends/openssh.config depth 1\r\ndebug1: Reading configuration data /etc/crypto-policies/back-ends/openssh.config\r\ndebug3: gss kex names ok: [gss-gex-sha1-,gss-group14-sha1-]\r\ndebug3: kex names ok: [curve25519-sha256,curve25519-sha256@libssh.org,ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group14-sha256,diffie-hellman-group16-sha512,diffie-hellman-group18-sha512,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1]\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 4 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 4764\r\ndebug3: mux_client_request_session: session request sent\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Received exit status from master 0\r\n")
    <ansible4> ESTABLISH SSH CONNECTION FOR USER: ansible
    <ansible4> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ansible"' -o ConnectTimeout=10 -o ControlPath=/home/ansible/.ansible/cp/859d5267e3 -tt ansible4 '/bin/sh -c '"'"'sudo -H -S -n  -u root /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-muvtpdvqkslnlegyhoibfcrilvlyjcqp ; /usr/libexec/platform-python /home/ansible/.ansible/tmp/ansible-tmp-1587544652.4716983-118789810824208/AnsiballZ_systemd.py'"'"'"'"'"'"'"'"' && sleep 0'"'"''
    Escalation succeeded

Optimizing Command Output Error Formatting

You might have noticed that the formatting of error messages in Ansible command output can be a bit hard to read. Fortunately, there's an easy way to make it a little more readable by setting the option stdout_callback = debug in the ansible.cfg file. After including this option, you'll notice it's a lot easier to read error output and distinguish between its different components!

Logging to Files

By default, Ansible does not write anything to log files. The reason is that the Ansible commands have all the options that may be needed to write output to STDOUT. If so required, it's always possible to use shell redirection to write the command output to a file.

If you do need Ansible to write log files, you can set the log_path parameter in ansible.cfg. Alternatively, Ansible can log to the file name that is specified in the ANSIBLE_LOG_PATH environment variable. Notice that Ansible logs can grow big very fast, so if logging to files is enabled, make sure that Linux log rotation is configured to ensure that files cannot grow beyond a specific maximum size.
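A minimal ansible.cfg sketch that enables logging to a file:

    [defaults]
    log_path = /var/log/ansible.log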

Running Task by Task

When you analyze playbook behavior, it's possible to run playbook tasks one by one or to start running a playbook at a specific task. The ansible-playbook --step command runs playbooks task by task and prompts for confirmation before running the next task. Alternatively, you can use the ansible-playbook --start-at-task="task name" command to start playbook execution at a specific task. Before using this command, you might want to use ansible-playbook --list-tasks for a list of all tasks that have been configured. To use these options in an efficient way, you should configure each task with its own name. In Listing 11-6 you can see what running playbooks this way looks like. This listing first shows how to list tasks in a playbook and next how the --start-at-task and --step options are used.

Listing 11-6 Running Tasks One by One

::: pre_1 [ansible@control rhce8-book]$ ansible-playbook --list-tasks exercise81.yaml

playbook: exercise81.yaml

  play #1 (ansible1): testing file manipulation skills.    TAGS: []
    tasks:
      create a new file              TAGS: []
      check status of the new file   TAGS: []
      for debugging purposes only    TAGS: []
      change file owner if needed    TAGS: []

  play #2 (ansible1): fetching a remote file.    TAGS: []
    tasks:
      fetch file from remote machine.    TAGS: []

  play #3 (localhost): adding text to the file that is now on localhost TAGS: []
    tasks:
      add a message.    TAGS: []

  play #4 (ansible2): copy the modified file to ansible2.    TAGS: []
    tasks:
      copy motd file.    TAGS: []
[ansible@control rhce8-book]$ ansible-playbook --start-at-task "add a message"  --step exercise81.yaml

PLAY [testing file manipulation skills] ****************************************

PLAY [fetching a remote file] **************************************************

PLAY [adding text to the file that is now on localhost] ************************
Perform task: TASK: Gathering Facts (N)o/(y)es/(c)ontinue:

:::

In Exercise 11-1 you learn how to apply check mode while working with templates.

::: box Exercise 11-1 Using Templates in Check Mode

1. Locate the file httpd.conf; you can find it in the rhce8-book directory, which you can download from the GitHub repository at https://github.com/sandervanvugt/rhce8-book. Use mv httpd.conf exercise111-httpd.j2 to rename it to a Jinja2 template file.

2. Open the exercise111-httpd.j2 file with an editor, and apply modifications to existing parameters so that they look like the following:

ServerRoot "{{ apache_root }}"
User {{ apache_user }}
Group {{ apache_group }}

3. Write a playbook that takes care of the complete Apache web server setup and installation, starts and enables the service, opens a port in the firewall, and uses the template module to create the /etc/httpd/conf/httpd.conf file based on the template that you created in step 2 of this exercise. The complete playbook with the name exercise111.yaml looks like the following (make sure you have the exact contents shown below and do not correct any typos):

---
- name: perform basic apache setup
  hosts: ansible2
  vars:
    apache_root: /etc/httpd
    apache_user: httpd
    apache_group: httpd
  tasks:
  - name: install RPM package
    yum:
      name: httpd
      state: latest
  - name: copy template file
    template:
      src: exercise111-httpd.j2
      dest: /etc/httpd/httpd.conf
  - name: start and enable service
    service:
      name: httpd
      state: started
      enabled: yes
  - name: open port in firewall
    firewalld:
      service: http
      permanent: yes
      state: enabled
      immediate: yes

4. Run the command ansible-playbook --syntax-check exercise111.yaml. If no errors are found in the playbook syntax, you should just see the name of the playbook.

5. Run the command ansible-playbook --check --diff exercise111.yaml. In the output of the command, pay attention to the task copy template file. After the line that starts with +++ after, you should see the lines in the template that were configured to use a variable, using the right variables.

6. Run the playbook to perform all its tasks step by step, using the command ansible-playbook --step exercise111.yaml. Press y to confirm the first step. Next, press c to automatically continue. The playbook will fail on the copy template file task because the target directory does not exist. Notice that the --syntax-check and the --check options do not check for any logical errors in the playbook and for that reason have not detected this problem.

7. Edit the exercise111.yaml file and ensure the template task contains the following corrected line: (replace the old line starting with dest:):

dest: /etc/httpd/conf/httpd.conf

8. Type ansible-playbook --list-tasks exercise111.yaml to list all the tasks in the playbook.

9. To avoid running the entire playbook again, use ansible-playbook --start-at-task="copy template file" exercise111.yaml to run the playbook to completion. :::

Managing Packages

Using Modules to Manage Packages

Modules used to manage packages are discussed below; the overview table image is not included. This section covers the yum, yum_repository, and package_facts modules.

Configuring Repository Access

The yum_repository module lets you work with yum repository files in the /etc/yum.repos.d/ directory.

    ---
    - name: setting up repository access
      hosts: all
      tasks:
      - name: connect to example repo
        yum_repository:
          name: examplerepo
          description: RHCE8 example repo
          file: examplerepo
          baseurl: ftp://control.example.com/repo/
          gpgcheck: no

yum_repository Key Arguments

(table image not included; key arguments include name, description, file, baseurl, and gpgcheck, as shown in the example above)

Notice that use of the gpgcheck argument is recommended but not mandatory. Most repositories are provided with a GPG key to verify that packages in the repository have not been tampered with. However, if no GPG key is set up for the repository, the gpgcheck parameter can be set to no to skip checking the GPG key.

Managing Software with yum

The yum module can be used to manage software packages. You use it to install and remove packages or to update packages. This can be done for individual packages, as well as package groups and modules. Let’s look at some examples that go beyond the mere installation or removal of packages, which was covered sufficiently in earlier chapters.

Listing 12-2 shows a playbook that updates all packages on the managed system.

Listing 12-2 Using yum to Perform a System Update

::: pre_1
---
- name: updating all packages
  hosts: ansible2
  tasks:
  - name: system update
    yum:
      name: '*'
      state: latest
:::

Notice the use of the name argument to the yum module. It has '*' as its argument. To prevent the wildcard from being interpreted as YAML syntax, you must make sure it is placed between single quotes.

Listing 12-3 shows an example where yum package groups are used to install the Virtualization Host package group.

Listing 12-3 Installing Package Groups

::: pre_1
---
- name: install or update a package group
  hosts: ansible2
  tasks:
  - name: install or update a package group
    yum:
      name: '@Virtualization Host'
      state: latest
:::

When a yum package group instead of an individual package needs to be installed, the name of the package group needs to start with an at sign (@), and the entire package group name needs to be put between single quotes. Also notice the use of state: latest in Listing 12-3. This line ensures that the packages in the package group are installed if they have not been installed yet. If they have already been installed, they are updated to the latest version.

A new feature in RHEL 8 is the yum AppStream module. Modules as listed by the Linux yum module list command can also be managed with the Ansible yum module. Working with yum modules is similar to working with yum package groups. In the example in Listing 12-4, the main difference is that a version number and the installation profile are included in the module name.

Listing 12-4 Installing AppStream Modules with the yum Module

::: pre_1
---
- name: installing an AppStream module
  hosts: ansible2
  tasks:
  - name: install or update an AppStream module
    yum:
      name: '@php:7.3/devel'
      state: present
:::

::: note


Note

When using the yum module to install multiple packages, you can provide the name argument with a list of multiple packages. Alternatively, you can provide multiple packages in a loop. Of these solutions, using a list of multiple packages as the argument to name is always preferred. If multiple package names are provided in a loop, the module must execute a task for every single package. If multiple package names are provided as the argument to name, yum can install all these packages in one single task.


:::
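For example, a minimal sketch of providing a list to the name argument so that all packages are handled in a single task:

    - name: install multiple packages in one task
      yum:
        name:
        - httpd
        - mod_ssl
        - php
        state: present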

Managing Package Facts

When Ansible is gathering facts, package facts are not included. To include package facts as well, you need to run a separate task; that is, you need to use the package_facts module. Facts that have been gathered about packages are stored to the ansible_facts.packages variable. The sample playbook in Listing 12-5 shows how to use the package_facts module.

Listing 12-5 Using the package_facts Module to Show Package Details

::: pre_1
---
- name: using package facts
  hosts: ansible2
  vars:
    my_package: nmap
  tasks:
  - name: install package
    yum:
      name: "{{ my_package }}"
      state: present
  - name: update package facts
    package_facts:
      manager: auto
  - name: show package facts for {{ my_package }}
    debug:
      var: ansible_facts.packages[my_package]
    when: my_package in ansible_facts.packages
:::

As you can see, the package_facts module does not need much to do its work. The only argument used here is the manager argument, which specifies which package manager to communicate with. Its default value of auto automatically detects the appropriate package manager and uses that. If you want, you can specify the package manager manually, using any supported package manager such as yum or dnf. Listing 12-6 shows the output of running the Listing 12-5 playbook, where you can see details that are collected by the package_facts module.

Listing 12-6 Running ansible-playbook listing125.yaml Results

::: pre_1 [ansible@control rhce8-book]$ ansible-playbook listing125.yaml

PLAY [using package facts] **************************************************************

TASK [Gathering Facts] ******************************************************************
ok: [ansible2]

TASK [install package] ******************************************************************
ok: [ansible2]

TASK [update package facts] *************************************************************
ok: [ansible2]

TASK [show package facts for my_package] ************************************************
ok: [ansible2] => {
    "ansible_facts.packages[my_package]": [
        {
            "arch": "x86_64",
            "epoch": 2,
            "name": "nmap",
            "release": "5.el8",
            "source": "rpm",
            "version": "7.70"
        }
    ]
}

PLAY RECAP ******************************************************************************
ansible2                   : ok=4    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

:::

In Exercise 12-1 you can practice working with the different tools Ansible provides for package management.

::: box Exercise 12-1 Managing Software Packages

1. Use your editor to create a new file with the name exercise121.yaml.

2. Write a play header that defines the variable my_package and sets its value to virt-manager:

---
- name: exercise121
  hosts: ansible2
  vars:
    my_package: virt-manager
  tasks:

3. Add a task that installs the package based on the name of the variable that was provided:

- name: install package
  yum:
    name: "{{ my_package }}"
    state: present

4. Add a task that gathers facts about installed packages:

- name: update package facts
  package_facts:
    manager: auto

5. As the last part of this exercise, add a task that shows facts about the package that you have just installed:

- name: show package facts for {{ my_package }}
  debug:
    var: ansible_facts.packages[my_package]
  when: my_package in ansible_facts.packages

6. Run the playbook using ansible-playbook exercise121.yaml and verify its output. :::

Managing Partitions and LVM

Managing Partitions and LVM

After detecting the disk device that needs to be used, you can move on and start creating partitions and logical volumes.

  • partition a disk using the parted module,
  • work with the lvg and lvol modules to manage LVM logical volumes,
  • create file systems using the filesystem module and mount them using the mount module
  • manage swap storage.

Creating Partitions

parted module options:

name:
  • Assign a unique name; required for GPT partitions

label:
  • Type of partition table; msdos is the default, use gpt for GPT

device:
  • Device on which you are creating the partition

number:
  • Partition number

state:
  • present or absent to add/remove the partition

part_start:
  • Starting position, expressed as an offset from the beginning of the disk

part_end:
  • Where to end the partition
  • If part_start and part_end are not used, the partition starts at 0% and ends at 100% of the available disk space

flags:
  • Set specific partition properties, such as the LVM partition type
  • Required for the LVM partition type

    - name: create new partition
      parted:
        name: files
        label: gpt
        device: /dev/sdb
        number: 1
        state: present
        part_start: 1MiB
        part_end: 2GiB
    - name: create another new partition
      parted:
        name: swap
        label: gpt
        device: /dev/sdb
        number: 2
        state: present
        part_start: 2GiB
        part_end: 4GiB
        flags: [ lvm ]

Managing Volume Groups and LVM Logical Volumes

lvg module

  • manages LVM volume groups

lvol module

  • manages LVM logical volumes

Creating an LVM volume group

  • vg argument to set the name of the volume group
  • pvs argument to identify the physical volume (which is often a partition or a disk device) on which the volume group needs to be created.
  • May need to specify the pesize to refer to the size of the physical extents.
- name: create a volume group
  lvg:
    vg: vgdata
    pesize: "8"
    pvs: /dev/sdb1

After you create an LVM volume group, you can create LVM logical volumes.

lvol common options:

lv
  • Name of the logical volume

pvs
  • Comma-separated list of physical volumes; if a PV is a partition, it should have the lvm flag set

resizefs
  • Indicates whether to resize the filesystem when the LV is expanded

size
  • Size of the logical volume

snapshot
  • Specify a name if this LV is a snapshot

vg
  • VG in which the LV should be created

Creating an LVM Logical Volume

- name: create a logical volume
  lvol:
    lv: lvdata
    size: 100%FREE
    vg: vgdata

Creating and Mounting File Systems

filesystem module

  • Supports creating as well as resizing file systems.

Options:

dev
  • Block device name

fstype
  • Filesystem type

opts
  • Options passed to the mkfs command

resizefs
  • Extends the filesystem if set to yes; the filesystem is extended to the size of the underlying block device

Creating an XFS File System

- name: create an XFS filesystem
  filesystem:
    dev: /dev/vgdata/lvdata
    fstype: xfs

Mounting a filesystem

mount module.

  • Used to mount a filesystem

Options:

fstype
  • The filesystem type is not automatically detected, so use this option to specify it

path
  • Directory to mount the filesystem on

src
  • Device to be mounted

state
  • Desired mount state
  • mounted: add to /etc/fstab and mount the device now
  • present: add to /etc/fstab but do not mount it now
      - name: mount the filesystem
        mount:
          src: /dev/vgdata/lvdata
          fstype: xfs
          state: mounted
          path: /mydir

Configuring Swap Space

  • To set up swap space, you first must format a device as swap space and next mount the swap space.

  • To format a device as swap space, you use the filesystem module.

  • There is no specific Ansible module to activate the swap space, so you use the command module to run the Linux swapon command.

  • Because adding swap space is not always required, it can be done in a conditional statement.

  • In the statement, use the ansible_swaptotal_mb fact to discover how much swap is actually available.

  • If that amount falls below a specific threshold, the swap space can be created and activated.

A conditional check is performed, and additional swap space is configured if the current amount of swap space is lower than 256 MiB.

    ---
    - name: configure swap storage
      hosts: ansible2
      tasks:
      - name: setup swap
        block:
        - name: make the swap filesystem
          filesystem:
            fstype: swap
            dev: /dev/sdb1
        - name: activate swap space
          command: swapon /dev/sdb1
        when: ansible_swaptotal_mb < 256

Run an ad hoc command to ensure that /dev/sdb on the target host is empty:

ansible ansible2 -a "dd if=/dev/zero of=/dev/sdb bs=1M count=10"

To make sure that you don’t get any errors about partitions that are in use, also reboot the target host:

ansible ansible2 -m reboot
A note on idempotency when using lvol:

  • There is a lack of idempotency if the size is specified as 100%FREE, because that is a relative value, not an absolute value.
  • This value works the first time you run the playbook, but it does not work the second time you run the playbook.
  • Because no free space is available anymore, the LVM layer interprets the task as if you wanted to create a logical volume with a size of 0 MiB and will complain about that. To ensure that plays are written in an idempotent way, make sure that you use absolute values, not relative values, as shown in the sketch below.
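A minimal sketch using an absolute size instead:

    - name: create a logical volume with an absolute size
      lvol:
        lv: lvdata
        size: 2g
        vg: vgdata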

Managing Services

Managing Services

Services can be managed in many ways. You can manage systemd services, but Ansible also allows for management of tasks using Linux cron and at. Apart from that, you can use Ansible to manage the desired systemd target that a managed system should be started in, and it can reboot running machines. Table 14-2 gives an overview of the most significant modules for managing services.

Table 14-2 Modules Related to Service Management

(table image not included; this section covers the service, systemd, cron, and at modules)

Managing Systemd Services

Throughout this book you have used the service module a lot. This module enables you to manage services, regardless of the init system that is used, so it works with System-V init, with Upstart, as well as systemd. In many cases, you can use the service module for any service-related task.

If systemd specifics need to be addressed, you must use the systemd module instead of the service module. Such systemd-specific features include daemon_reload and mask. The daemon_reload feature forces the systemd daemon to reread its configuration files, which is useful after applying changes (for instance, after editing the service files directly, without using the Linux systemctl command). The mask feature marks a systemd service in such a way that it cannot be started, not even by accident. Listing 14-1 shows an example where the systemd module is used to manage services.

Listing 14-1 Using systemd Module Features

::: pre_1
---
- name: using systemd module to manage services
  hosts: ansible2
  tasks:
  - name: enable service httpd and ensure it is not masked
    systemd:
      name: httpd
      enabled: yes
      masked: no
      daemon_reload: yes
:::

Given the large amount of functionality that is available in systemd, the functions that are offered by the systemd module are a bit limited, and for many specific features, you must use generic modules such as file and command instead. An example is setting the default target, which is done by creating a symbolic link using the file module, as sketched below.
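A minimal sketch of setting the default target this way (the target chosen here is just an example):

    - name: set the default systemd target to multi-user
      file:
        src: /usr/lib/systemd/system/multi-user.target
        dest: /etc/systemd/system/default.target
        state: link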

Managing cron Jobs

The cron module can be used to manage cron jobs. A Linux cron job is one that is periodically executed by the Linux crond daemon at a specific time. The cron module can manage jobs in different ways:

• Write the job directly to a user’s crontab

• Write the job to /etc/crontab or under the /etc/cron.d directory

• Pass the job to anacron so that it will be run once an hour, day, week, month, or year without specifically defining when exactly

If you are familiar with Linux cron, using the Ansible cron module is straightforward. Listing 14-2 shows an example that runs the fstrim command every day at 4:05 and at 19:05.

Listing 14-2 Running a cron Job

---
- name: run a cron job
  hosts: ansible2
  tasks:
  - name: run a periodic job
    cron:
      name: "run fstrim"
      minute: "5"
      hour: "4,19"
      job: "fstrim"

As a result of this playbook, a crontab file is created for user root. To create a crontab file for another user, you can use the user attribute. Notice that while managing cron jobs using the cron module, a name attribute is specified. This attribute is required for Ansible to manage the cron jobs and has no meaning for Linux crontab itself. If, for instance, you later want to remove a cron job, you must use the name of the job as an identifier.
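For example, a minimal sketch of managing a job in another user's crontab (the user name is hypothetical):

    - name: run a daily fstrim as user lisa
      cron:
        name: "daily fstrim"
        special_time: daily
        user: lisa
        job: "fstrim"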

Listing 14-3 shows a sample playbook that removes the job that was created in Listing 14-2. Notice that it just specifies state: absent as well as the name of the job that was previously created; no other parameters are required.

Listing 14-3 Removing a cron Job Using the name Attribute

::: pre_1
---
- name: run a cron job
  hosts: ansible2
  tasks:
  - name: run a periodic job
    cron:
      name: "run fstrim"
      state: absent
:::

Managing at Jobs

Whereas you use Linux cron to schedule tasks at a regular interval, you use Linux at to manage tasks that need to run once only. To interface with Linux at, the Ansible at module is provided. Table 14-3 gives an overview of the arguments it takes to specify how the task should be executed.

::: group Table 14-3 at Module Arguments Overview

(table image not included; the command, count, units, state, and unique arguments are described below) :::

The most important point to understand when working with at is that it is used to define how far from now a task has to be executed. This is done using count and units. If, for example, you want to run a task five minutes from now, you specify the job with the arguments count: 5 and units: minutes. Also notice the use of the unique argument. If it is set to yes, the task is ignored if a similar job is scheduled to run already. Listing 14-4 shows an example.

Listing 14-4 Running Commands in the Future with at

::: pre_1
---
- name: run an at task
  hosts: ansible2
  tasks:
  - name: run command and write output to file
    at:
      command: "date > /tmp/my-at-file"
      count: 5
      units: minutes
      unique: yes
      state: present
:::

In Exercise 14-1 you practice your skills working with the cron module.

::: box Exercise 14-1 Managing cron Jobs

1. Use your editor to create the playbook exercise141-1.yaml and give it the following contents:

---
- name: run a cron job
  hosts: ansible2
  tasks:
  - name: run a periodic job
    cron:
      name: "run logger"
      minute: "0"
      hour: "5"
      job: "logger IT IS 5 AM"

2. Use ansible-playbook exercise141-1.yaml to run the job.

3. Use the command ansible ansible2 -a "crontab -l" to verify the cron job has been added. The output should look as follows:

ansible2 | CHANGED | rc=0 >>
#Ansible: run logger
0 5 * * * logger IT IS 5 AM

4. Create a new playbook with the name exercise141-2 that runs a new cron job but uses the same name:

---
- name: run a cron job
  hosts: ansible2
  tasks:
  - name: run a periodic job
    cron:
      name: "run logger"
      minute: "0"
      hour: "6"
      job: "logger IT IS 6 AM"

5. Run this new playbook by using ansible-playbook exercise141-2.yaml. Notice that the job runs with a changed status.

6. Repeat the command ansible ansible2 -a "crontab -l". This shows you that the new cron job has overwritten the old job because it was using the same name. Here is something important to remember: all cron jobs should have a unique name!

7. Write the playbook exercise141-3.yaml to remove the cron job that you just created:

---
- name: run a cron job
  hosts: ansible2
  tasks:
  - name: run logger
    cron:
      name: "run logger"
      state: absent

8. Use ansible-playbook exercise141-3.yaml to run the last playbook. Next, use ansible ansible2 -a "crontab -l" to verify that the cron job was indeed removed. :::

Networking with Ansible

3 modules for managing the networking on nodes:

  • service
  • daemon
  • system settings

NFS Setup

Server hosting the storage:

--- 
  - name: Install Packages
    package:
      name:
        - nfs-utils
      state: present

  - name: Ensure directories to export exist
    file:  # noqa 208
      path: "{{ item }}"
      state: directory
    with_items: "{{ nfs_exports | map('split') | map('first') | unique }}"
  
  - name: Copy exports file
    template:
      src: exports.j2
      dest: /etc/exports
      owner: root
      group: root
      mode: 0644
    notify: reload nfs

  - name: Add firewall rule to enable NFS service
    ansible.posix.firewalld:
      immediate: true
      state: enabled
      permanent: true
      service: nfs
    notify: reload firewalld

  - name: Start and enable NFS service
    service:
      name: nfs-server
      state: started
      enabled: yes
    when: nfs_exports|length > 0

  - name: Set SELinux boolean for NFS
    ansible.posix.seboolean:
      name: nfs_export_all_rw
      state: yes
      persistent: yes

  - name: install required package for sefcontext module
    yum:
      name: policycoreutils-python-utils
      state: present

  - name: Set proper SELinux context on export dir
    sefcontext:
      target: '{{ item }}(/.*)?'
      setype: nfs_t
      state: present
    notify: run restorecon
    with_items: "{{ nfs_exports | map('split') | map('first') | unique }}"

The exports.j2 template:

{% for host in nfs_hosts %}
/data {{ host }}(rw,wdelay,root_squash,no_subtree_check,sec=sys,no_all_squash)
{% endfor %}

Variables: nfs_exports:

  • /data server(rw,wdelay,root_squash,no_subtree_check,sec=sys,no_all_squash)

Handlers

---
- name: reload nfs
  command: 'exportfs -ra'
  
- name: reload firewalld
  command: firewall-cmd --reload

- name: run restorecon
  command: restorecon -Rv /data

storage:

  - name: Detect secondary disk name
    ignore_errors: yes
    set_fact:
      disk2name: vda
    when: ansible_facts['devices']['vda'] is defined

  - name: Search for second disk, continue only if it is found
    assert:
      that:
        - disk2name is defined
      fail_msg: second hard disk not found

  - name: Debug detected disk
    debug:
      msg: "{{ disk2name }} was found. Moving forward."  

  - name: Create LVM and partitions
    block:
    - name: Create LVM Partition on second disk
      parted: 
        name: data
        label: gpt
        device: /dev/{{ disk2name }}
        number: 1
        state: present
        flags: [ lvm ]

    - name: Create an LVM volume group
      lvg:
        vg: vgdata
        pvs: /dev/{{ disk2name }}1

    - name: Create lv
      lvol:
        lv: lvdata
        size: 100%FREE
        vg: vgdata

    - name: create filesystem
      filesystem:
        dev: /dev/vgdata/lvdata
        fstype: xfs

    when: ansible_facts['devices']['vda']['partitions'] is not defined
    
  - name: Create data directory
    file:
      dest: /data
      mode: '0777'
      state: directory
    
  - name: Mount the filesystem
    mount:
      src: /dev/vgdata/lvdata
      fstype: xfs
      state: mounted
      path: /data
  
  - name: Set permissions on mounted filesystem
    file:
      path: /data
      state: directory
      mode: '0777'

Optimizing Ansible Processing

Parallel task execution

  • manages the number of hosts on which tasks are executed simultaneously.

Serial task execution

  • tasks are executed on a host or group of hosts before proceeding to the next host or group of hosts.

Parallel Task Execution

  • Ansible can run tasks on all hosts at the same time, and in many cases that would not be a problem because processing is executed on the managed host anyway.
  • If, however, network devices or other nodes that do not have their own Python stack are involved, processing needs to be done on the control host.
  • To prevent the control host from being overloaded in that case, the maximum number of simultaneous connections by default is set to 5.
  • You can manage this setting by using the forks parameter in ansible.cfg.
  • Alternatively, you can use the -f option with the ansible and ansible-playbook commands.
  • If only Linux hosts are managed, there is no reason to keep the maximum number of simultaneous tasks much lower than 100.
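
For example, you can raise the limit in ansible.cfg or per command with the -f option (a minimal sketch; the value 50 and the playbook name are only illustrations):

    [defaults]
    inventory = inventory
    forks = 50

    ansible-playbook site.yaml -f 50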

Managing Serial Task Execution

  • While executing tasks, Ansible processes tasks in a playbook one by one.
  • This means that, by default, the first task is executed on all managed hosts. Once that is done, the next task is processed, until all tasks have been executed.
  • There is no specific order in the execution of tasks, so you may see that in one run ansible1 is processed before ansible2, while on another run they might be processed in the opposite order.
  • In some cases, this is undesired behavior.
  • If, for instance, a playbook is used to update a cluster of hosts this way, this would create a situation where the old software has been updated, but the new version has not been started yet and the entire cluster would be down.
  • Use the serial keyword in the play header to configure
    • serial: 3
      • all tasks are executed on three hosts, and after completely running all tasks on three hosts, the next group of three hosts is handled.
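
A play header using serial could look like this (a sketch; the webservers host group is hypothetical):

    ---
    - name: rolling update
      hosts: webservers
      serial: 3
      tasks:
      - name: update the web server package
        yum:
          name: httpd
          state: latest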

Lab: Managing Parallelism

  • Add two more managed nodes with the names ansible3.example.com and ansible4.example.com.
  • Open the inventory file with an editor and add the following lines:
ansible3
ansible4
  • Open the ansible.cfg file and add the line forks = 4 to the [defaults] section.
  • Write a playbook with the name exercise102-install that installs and enables the Apache web server and another playbook with the name exercise102-remove that disables and removes the Apache web server.
  • Run ansible-playbook exercise102-remove.yaml to remove and disable the Apache web server on all hosts. This is just to make sure you start with a clean configuration.
  • Run the playbook to install and run the web server, using time ansible-playbook exercise102-install.yaml, and notice the time it takes to run the playbook.
  • Run ansible-playbook exercise102-remove.yaml again to get back to a clean state.
  • Edit ansible.cfg and change the forks parameter to forks = 2.
  • Run the time ansible-playbook exercise102-install.yaml command again to see how much time it takes now.
  • Edit the exercise102-install.yaml playbook and include the line serial: 2 in the play header.
  • Run the ansible-playbook exercise102-remove.yaml command again to get back to a clean state.
  • Run the ansible-playbook exercise102-install.yaml command again and observe that the entire play is executed on two hosts only before the next group of two hosts is taken care of.

Repositories and subscriptions

Using Modules to Manage Repositories and Subscriptions

To work with software packages, you need to make sure that repositories are accessible and subscriptions are available. In the previous section you learned how to write a playbook that enables you to access an existing repository. In this section you learn how to set up the server part of a repository if that still needs to be done. Also, you learn how to manage RHEL subscriptions using Ansible.

Setting Up Repositories

Most managed systems access the default distributions that are provided while installing the operating system. In some cases external repositories might not be accessible. If that happens, you need to set up a repository yourself. Before you can do that, however, it’s important to know what a repository is. A repository is a directory that contains RPM files, as well as the repository metadata, which is an index that allows the repository client to figure out which packages are available in the repository.

Ansible does not provide a specific module to set up a repository. You must use a number of modules instead. Exactly which modules are involved depends on how you want to set up the repository. For instance, if you want to set up an FTP-based repository on the Ansible control host, you need to accomplish the following tasks:

• Install the FTP package.

• Start and enable the FTP server.

• Open the firewall for FTP traffic.

• Make sure the FTP shared repository directory is available.

• Download packages to the repository directory.

• Use the Linux createrepo command to generate the index that is required in each repository.

The playbook in Listing 12-7 provides an example of how this can be done.

Listing 12-7 Setting Up an FTP-based Repository

::: pre_1
- name: install FTP to export repo
  hosts: localhost
  tasks:
  - name: install FTP server
    yum:
      name:
      - vsftpd
      - createrepo_c
      state: latest
  - name: start FTP server
    service:
      name: vsftpd
      state: started
      enabled: yes
  - name: open firewall for FTP
    firewalld:
      service: ftp
      state: enabled
      permanent: yes

- name: setup the repo directory
  hosts: localhost
  tasks:
  - name: make directory
    file:
      path: /var/ftp/repo
      state: directory
  - name: download packages
    yum:
      name: nmap
      download_only: yes
      download_dir: /var/ftp/repo
  - name: createrepo
    command: createrepo /var/ftp/repo

:::

The most significant tasks in setting up the repository are the download packages and createrepo tasks. In the download packages task, the yum module is used to download a single package. To do so, the download_only argument is used to ensure that the package is not installed but downloaded to a directory. When you use the download_only argument, you also must specify where the package needs to be downloaded to. To do this, the task uses the download_dir argument.

There is one disadvantage in using this approach to download the package, though: it requires repository access. If repository access is not available, the get_url module can be used instead to download a file from a specific URL.
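
A sketch of that alternative with the get_url module (the URL is hypothetical):

    - name: download a package file from a URL
      get_url:
        url: ftp://server.example.com/pub/nmap.rpm
        dest: /var/ftp/repo/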

Managing GPG Keys

To guarantee the integrity of packages, most repositories are set up with a GPG key. This enables the client to verify that packages have not been tampered with while transmitted between the repository server and client. For that reason, if packages are installed from a repository server on the Internet, you should always make sure that gpgcheck: yes is set while using the yum_repository module.

However, if you want to make sure that a GPG check is performed, you need to make sure the client knows where to fetch the repository key. To help with that, you can use the rpm_key module. You can see how to do this in Listing 12-8. Notice that the playbook in this listing doesn’t work because no GPG-protected repository is available. Setting up GPG-protected repositories is complex and outside the scope of the EX294 objectives, and for that reason is not covered here.

Listing 12-8 Using rpm_key to Fetch an RPM Key

::: pre_1
- name: use rpm_key in repository access
  hosts: all
  tasks:
  - name: get the GPG public key
    rpm_key:
      key: ftp://control.example.com/repo/RPM-GPG-KEY
      state: present
  - name: set up the repository client
    yum_repository:
      file: myrepo
      name: myrepo
      description: example repo
      baseurl: ftp://control.example.com/repo
      enabled: yes
      gpgcheck: yes
      state: present
:::

Managing RHEL Subscriptions

When you work with Red Hat Enterprise Linux, configuring repository access using the method described before is not enough. Red Hat Enterprise Linux works with subscriptions, and to be able to access software that is provided through your subscription entitlement, you need to set up managed systems to access these subscriptions.

::: note
Tip

Free developer subscriptions are available for RHEL as well as Ansible. Register yourself at https://developers.redhat.com and sign up for a free subscription if you want to test the topics described in this section on RHEL and you don't have a valid subscription yet.
:::

To understand how to use the Ansible modules to register a RHEL system, you need to understand how to use the Linux command-line utilities. When you are managing subscriptions from the Linux command line, multiple steps are involved.

1. First, you use the subscription-manager register command to provide your RHEL credentials. Use, for instance, subscription-manager register --username=yourname --password=yourpassword.

2. Next, you need to find out which pools are available in your account. A pool is a collection of software channels available to your account. Use subscription-manager list --available for an overview.

3. Now you can connect to a specific pool using subscription-manager attach --pool=poolID. Note that if only one subscription pool is available in your account, you don’t have to provide the --pool argument.

4. Next, you need to find out which additional repositories are available to your account by using subscription-manager repos --list.

5. To register to use additional repositories, you use subscription-manager repos --enable "repo name". Your system then has full access to its subscription and related repositories.
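
Put together, the command-line flow looks like this (the pool ID and repository name are placeholders):

    subscription-manager register --username=yourname --password=yourpassword
    subscription-manager list --available
    subscription-manager attach --pool=poolID
    subscription-manager repos --list
    subscription-manager repos --enable "repo-name"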

Two significant modules are provided by Ansible:

redhat_subscription: This module enables you to perform subscription and registration in one task.

rhsm_repository: This module enables you to add subscription manager repositories.

Listing 12-9 shows an example of a playbook that uses these modules to fully register a new RHEL 8 machine and add a new repository to the managed machine. Notice that this playbook is not runnable as such because important additional information needs to be provided. Exercise 12-3, later in the section titled "Implementing a Playbook to Manage Software," will guide you through a scenario that shows how to use this code in production.

Listing 12-9 Using Subscription Manager to Set Up Ansible

::: pre_1
---
- name: use subscription manager to register and set up repos
  hosts: ansible5
  tasks:
  - name: register and subscribe ansible5
    redhat_subscription:
      username: bob@example.com
      password: verysecretpassword
      state: present
  - name: configure additional repo access
    rhsm_repository:
      name:
      - rh-gluster-3-client-for-rhel-8-x86_64-rpms
      - rhel-8-for-x86_64-appstream-debug-rpms
      state: present
:::

In the sample playbook in Listing 12-9, you can see how the redhat_subscription and rhsm_repository modules are used. Notice that redhat_subscription requires a password. In Listing 12-9 the username and password are provided as clear-text values in the playbook. From a security perspective, this is very bad practice. You should use Ansible Vault instead. Exercise 12-3 will guide you through a setup where this is done.

In Exercise 12-2 you are guided through the procedure of setting up your own repository and using it. This procedure consists of two distinct parts. In the first part you set up a repository server that is based on FTP. Because in Ansible you often need to configure topics that don’t have your primary attention, you set up the FTP server and also change its configuration. Next, you write a second playbook that configures the clients with appropriate repository access, and after doing so, install a package.

::: box Exercise 12-2 Setting Up a Repository

1. Use your editor to create the file exercise122-server.yaml.

2. Define the play that sets up the basic FTP configuration. Because all its tasks should be familiar to you at this point, you can enter all the tasks at once:

---
- name: install, configure, start and enable FTP
  hosts: localhost
  tasks:
  - name: install FTP server
    yum:
      name: vsftpd
      state: latest
  - name: allow anonymous access to FTP
    lineinfile:
      path: /etc/vsftpd/vsftpd.conf
      regexp: '^anonymous_enable=NO'
      line: anonymous_enable=YES
  - name: start FTP server
    service:
      name: vsftpd
      state: started
      enabled: yes
  - name: open firewall for FTP
    firewalld:
      service: ftp
      state: enabled
      immediate: yes
      permanent: yes

3. Set up a repository directory. Add the following play to the playbook. Notice the use of the download packages task, which uses the yum module to download a package without installing it. Also notice the createrepo task, which creates the repository metadata that converts the /var/ftp/repo directory into a repository.

- name: setup the repo directory
  hosts: localhost
  tasks:
  - name: make directory
    file:
      path: /var/ftp/repo
      state: directory
  - name: download packages
    yum:
      name: nmap
      download_only: yes
      download_dir: /var/ftp/repo
  - name: install createrepo package
    yum:
      name: createrepo_c
      state: latest
  - name: createrepo
    command: createrepo /var/ftp/repo
    notify:
    - restart_ftp
  handlers:
  - name: restart_ftp
    service:
      name: vsftpd
      state: restarted

4. Use the command ansible-playbook exercise122-server.yaml to set up the FTP server on control.example.com. If you haven’t made any typos, you shouldn’t encounter any errors.

5. Now that the repository server has been installed, it’s time to set up the repository client. Use your editor to create the file exercise122-client.yaml and write the play header as follows:

---
- name: configure repository
  hosts: all
  vars:
    my_package: nmap
  tasks:

6. Add a task that uses the yum_repository module to configure access to the new repository:

- name: connect to example repo
  yum_repository:
    name: exercise122
    description: RHCE8 exercise 122 repo
    file: exercise122
    baseurl: ftp://control.example.com/repo/
    gpgcheck: no

7. After setting up the repository client, you also need to make sure that the clients know how to reach the repository server by addressing its name. Add the next task that writes a new line to /etc/hosts to make sure host name resolving on the clients is set up correctly:

- name: ensure control is resolvable
  lineinfile:
    path: /etc/hosts
    line: 192.168.4.200  control.example.com  control

- name: install package
  yum:
    name: "{{ my_package }}"
    state: present

8. If you are using the package_facts module, you need to remember to update it after installing new packages. Add the following task to get this done:

- name: update package facts
  package_facts:
    manager: auto

9. As the last task, just because it’s fun, use the debug module together with the package facts to get information about the newly installed package:

- name: show package facts for {{ my_package }}
  debug:
    var: ansible_facts.packages[my_package]
  when: my_package in ansible_facts.packages

10. Use the command ansible-playbook exercise122-client.yaml -e my_package=redis. That’s right; this command overwrites the my_package variable that was set in the playbook—just to remind you a bit about variable precedence. :::

SeLinux File Properties

Managing SELinux Properties

  • SELinux can be used on files to manage file context
  • context can be set on ports
  • SELinux properties can be managed using Booleans.

Modules for Managing Changes on SELinux:

file

  • Manages context on files but not in the SELinux policy

sefcontext

  • Manages file context in the SELinux policy

command

  • Is required to run the restorecon command after using sefcontext

selinux

  • Manages the current SELinux state

seboolean

  • Manages SELinux Booleans

Managing SELinux File Context

  • The context type that is set on the file defines which processes can work with the files.
  • The file context type can be set on a file directly, or it can be set on the SELinux policy.
  • All SELinux properties should be set in the SELinux policy.

sefcontext module.

  • Setting a context type in the policy doesn’t automatically apply it to files though.
  • You still need to run the Linux restorecon command to do this.
  • Ansible does not offer a module to run this command; it needs to be invoked using the command module.

file module

  • Can set SELinux context.
  • The context is set directly on the file, not in the SELinux policy.
  • As a result, if at any time default context is applied from the policy to the file system, all context that has been set with the Ansible file module risks being overwritten.

policycoreutils-python-utils RPM

  • Not installed by default in all installation patterns.
  • Needed to be able to work with the Ansible sefcontext module and the Linux restorecon command

Lab Managing SELinux Context with sefcontext

---
- name: show selinux
  hosts: all
  tasks:
  - name: install required packages
    yum:
      name: policycoreutils-python-utils
      state: present
  - name: create testfile
    file:
      name: /tmp/selinux
      state: touch
  - name: set selinux context
    sefcontext:
      target: /tmp/selinux
      setype: httpd_sys_content_t
      state: present
    notify:
      - run restorecon
  handlers:
    - name: run restorecon
      command: restorecon -v /tmp/selinux
  • You might just have to configure a service with a nondefault documentroot, which means that SELinux will deny access to the service.
  • You should ask yourself if this task requires any changes at an SELinux level.

Applying Generic SELinux Management Tasks

selinux module

  • enables you to set the current state of SELinux to either permissive, enforcing, or disabled.

seboolean module

  • enables you to easily enable or disable functionality in SELinux using Booleans.

Lab: Changing SELinux State and Booleans

    ---
    - name: enabling SELinux and a boolean
      hosts: ansible1
      vars:
        myboolean: httpd_read_user_content
      tasks:
      - name: enabling SELinux
        selinux:
          policy: targeted  # must specify a policy
          state: enforcing
      - name: checking current {{ myboolean }} Boolean status
        shell: getsebool -a | grep {{ myboolean }}
        register: bool_stat
      - name: showing boolean status
        debug:
          msg: the current {{ myboolean }} status is {{ bool_stat.stdout }}
      - name: enabling boolean
        seboolean:
          name: "{{ myboolean }}"
          state: yes
          persistent: yes

Lab: Changing SELinux Context

  • Install, start, and configure a web server that has the DocumentRoot set to the /web directory.
  • In this directory, create a file named index.html that shows the message “welcome to the webserver.”
  • Ensure that SELinux is enabled and allows access to the web server document root.
  • Also ensure that SELinux allows users to publish web pages from their home directory.

1. Start by creating a playbook outline. A good approach for doing this is to create the playbook play header and list all tasks that need to be accomplished by providing a name as well as the name of the task that you want to run.

2. Enable SELinux and set it to the enforcing state.

3. Install the web server, start and enable it, create the /web directory, and create the index.html file in the /web directory.

4. Use the lineinfile module to change the httpd.conf contents. Two different lines need to be changed.

5. Configure the SELinux-specific settings.

6. Run the playbook and verify its output.

7. Verify that the web service is accessible by using curl http://ansible1. In this case, it should not work. Try to analyze why.

---
- name: Managing web server SELinux properties
  hosts: ansible1
  tasks:
  - name: ensure SELinux is enabled and enforcing
    selinux:
      policy: targeted
      state: enforcing
  - name: install the webserver
    yum:
      name: httpd
      state: latest
  - name: start and enable the webserver
    service:
      name: httpd
      state: started
      enabled: yes
  - name: open the firewall service
    firewalld:
      service: http
      state: enabled
      immediate: yes
  - name: create the /web directory
    file:
      name: /web
      state: directory
  - name: create the index.html file in /web
    copy:
      content: 'welcome to the exercise82 web server'
      dest: /web/index.html
  - name: use lineinfile to change webserver configuration
    lineinfile:
      path: /etc/httpd/conf/httpd.conf
      regexp: '^DocumentRoot "/var/www/html"'
      line: DocumentRoot "/web"
    notify: restart httpd

  - name: use lineinfile to change webserver security
    lineinfile:
      path: /etc/httpd/conf/httpd.conf
      regexp: '^<Directory "/var/www">'
      line: '<Directory "/web">'
  - name: use sefcontext to set context on new documentroot
    sefcontext:
      target: ’/web(/.*)?’
      setype: httpd_sys_content_t
      state: present
  - name: run the restorecon command
    command: restorecon -Rv /web
  - name: allow the web server to read user content
    seboolean:
      name: httpd_read_user_content
      state: yes
      persistent: yes
      
  handlers:
    - name: restart httpd
      service:
        name: httpd
        state: restarted

Setting up an Ansible Lab

Requirements for Ansible

  • Python 3 on control node and managed nodes
  • sudo ssh access to managed nodes
  • Ansible installed on the Control node

Lab Setup

For this lab, we will need three virtual machines running RHEL 9: one control node and two managed nodes. Use IP addresses based on your lab network environment:

| Hostname | Pretty hostname | IP address | RAM | Storage | vCPUs |
|---|---|---|---|---|---|
| control.example.com | control | 192.168.124.200 | 2048MB | 20G | 2 |
| ansible1.example.com | ansible1 | 192.168.124.201 | 2048MB | 20G | 2 |
| ansible2.example.com | ansible2 | 192.168.124.202 | 2048MB | 20G | 2 |
I have set these VMs up in virt-manager, then cloned them so I can rebuild the lab later. You can automate this using Vagrant or Ansible, but that will come later.

Setting hostnames and verifying dependencies

Set a hostname on all three machines:

[root@localhost ~]# hostnamectl set-hostname control.example.com
[root@localhost ~]# hostnamectl set-hostname --pretty control

Install Ansible on Control Node

[root@localhost ~]# dnf -y install ansible-core
...

Verify python3 is installed:

[root@localhost ~]# python --version
Python 3.9.18

Configure Ansible user and SSH

Add a user for Ansible. This can be any username you like, but we will use "ansible" as our lab user. The ansible user also needs sudo access; for convenience, we will configure it to require no password. You will need to do this on the control node and both managed nodes:

[root@control ~]# useradd ansible
[root@control ~]# visudo

Add this line to the file that comes up: ansible ALL=(ALL) NOPASSWD: ALL

Configure a password for the ansible user:

[root@control ~]# passwd ansible
Changing password for user ansible.
New password: 
BAD PASSWORD: The password is shorter than 8 characters
Retype new password: 
passwd: all authentication tokens updated successfully.

On the control node only: Add host names of the nodes to /etc/hosts:

echo "192.168.124.201 ansible1 >> /etc/hosts
> ^C
[root@control ~]# echo "192.168.124.201 ansible1" >> /etc/hosts
[root@control ~]# echo "192.168.124.202 ansible2" >> /etc/hosts
[root@control ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.124.201 ansible1
192.168.124.202 ansible2

Log in to the ansible user account for the remaining steps. Note that Ansible assumes passwordless (key-based) SSH login. If you insist on using passwords, add the --ask-pass (-k) flag to your Ansible commands (this may require the sshpass package).

On the control node only: Generate an ssh key to send to the hosts for passwordless Login:

[ansible@control ~]$ ssh-keygen -N "" -q
Enter file in which to save the key (/home/ansible/.ssh/id_rsa): 

Copy the public key to the nodes and test passwordless login to the managed nodes:

[ansible@control ~]$ ssh-copy-id ansible@ansible1
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/ansible/.ssh/id_rsa.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
ansible@ansible1's password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'ansible@ansible1'"
and check to make sure that only the key(s) you wanted were added.

[ansible@control ~]$ ssh-copy-id ansible@ansible2
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/ansible/.ssh/id_rsa.pub"
The authenticity of host 'ansible2 (192.168.124.202)' can't be established.
ED25519 key fingerprint is SHA256:r47sLc/WzVA4W4ifKk6w1gTnxB3Iim8K2K0KB82X9yo.
This key is not known by any other names
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
ansible@ansible2's password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'ansible@ansible2'"
and check to make sure that only the key(s) you wanted were added.

[ansible@control ~]$ ssh ansible1
Register this system with Red Hat Insights: insights-client --register
Create an account or view all your systems at https://red.ht/insights-dashboard
Last failed login: Thu Apr  3 05:34:20 MST 2025 from 192.168.124.200 on ssh:notty
There was 1 failed login attempt since the last successful login.
[ansible@ansible1 ~]$ 
logout
Connection to ansible1 closed.
[ansible@control ~]$ ssh ansible2
Register this system with Red Hat Insights: insights-client --register
Create an account or view all your systems at https://red.ht/insights-dashboard
[ansible@ansible2 ~]$ 
logout
Connection to ansible2 closed.

Install git, then clone the lab files from the RHCE guide: sudo dnf -y install git

[ansible@control base]$ cd

[ansible@control ~]$ git clone https://github.com/sandervanvugt/rhce8-book 
Cloning into 'rhce8-book'...
remote: Enumerating objects: 283, done.
remote: Counting objects: 100% (283/283), done.
remote: Compressing objects: 100% (233/233), done.
remote: Total 283 (delta 27), reused 278 (delta 24), pack-reused 0 (from 0)
Receiving objects: 100% (283/283), 62.79 KiB | 357.00 KiB/s, done.
Resolving deltas: 100% (27/27), done.

SSH Connections

Managing SSH Connections

  • How to provision SSH keys for new users so that they are provided with keys without having to set them up themselves.
  • To do this, you use the authorized_key module together with the generate_ssh_key argument to the user module.

Understanding SSH Connection Management Requirements

How SSH keys are used in the communication process between a user and an SSH server:

  1. The user initiates a session with an SSH server.
  2. The server sends back an identification token that is encrypted with the server private key to the user.
  3. The user uses the server’s public key fingerprint, which is stored in the ~/.ssh/known_hosts file, to verify the identification token.
  4. If no public key fingerprint was stored yet in the ~/.ssh/known_hosts file, the user is prompted to store the remote server identity in the ~/.ssh/known_hosts file. At this point there is no good way to verify whether the user is indeed communicating with the intended server.
  5. After establishing the identity of the remote server, the user can either send over a password or generate an authentication token that is based on the user’s private key.
  6. If an authentication token that was based on the user’s private key is sent over, this token is received by the server, which tries to match it against the user’s public key that is stored in the ~/.ssh/authorized_keys file.
  7. After the incoming authentication token to the stored user public key in the authorized_keys file is matched, the user is authenticated. If this authentication fails and password authentication is allowed, password authentication is attempted next.

In the authentication procedure, two key pairs play an important role. First, there is the server’s public/private key pair, which is used to establish a secure connection. To manage the host public key, you can use the Ansible known_hosts module. Next, there is the user’s public/private key pair, which the user uses to authenticate. To manage the public key in this key pair, you can use the Ansible authorized_key module.
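
A minimal known_hosts sketch, assuming the server host key was saved beforehand in a local file in known_hosts format:

    - name: add the ansible2 host key to known_hosts
      known_hosts:
        path: /home/ansible/.ssh/known_hosts
        name: ansible2.example.com
        # files/ansible2.example.com.pub is a hypothetical local file in known_hosts format
        key: "{{ lookup('file', 'files/ansible2.example.com.pub') }}"
        state: present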

Lookup Plug-in

  • Enables Ansible to access data from outside sources.
  • Read the file system or contact external datastores and services.
  • Ran on the Ansible control host.
  • Results are usually stored in variables or templates.

Set the value of a variable to the contents of a file:

---
- name: simple demo with the lookup plugin
  hosts: localhost
  vars:
    file_contents: "{{lookup(‘file’, ‘/etc/hosts’)}}"
  tasks:
  - debug:
                var: file_contents

Setting Up SSH User Keys

  • To use SSH to connect to a user account on a managed host you can copy over the local user public key to the remote user ~/.ssh/authorized_keys file.
  • If the target authorized_keys file just has to contain one single key, you could use the copy module to copy it over (see the sketch below).
  • To manage multiple keys in the remote user authorized_keys file, you’re better off using the authorized_key module.
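
For that single-key case, a minimal copy-based sketch could look like this:

    - name: copy a single public key over as authorized_keys
      copy:
        src: /home/ansible/.ssh/id_rsa.pub   # public key on the control node
        dest: /home/ansible/.ssh/authorized_keys
        owner: ansible
        group: ansible
        mode: '0600'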

authorized_key module

  • Copy the authorized_key for a user
  • /home/ansible/.ssh/id_rsa.pub is used as the source.
  • lookup plug-in is used to refer to the file contents that should be used:
---
- name: authorized_key simple demo
  hosts: ansible2
  tasks:
  - name: copy authorized key for ansible user
    authorized_key:
      user: ansible
      state: present
      key: "{{ lookup(‘file’, ‘/home/ansible/.ssh/id_rsa.pub’) }}"

Do the same for multiple users: vars/users

---
users:
  - username: linda
    groups: sales
  - username: lori
    groups: sales
  - username: lisa
    groups: account
  - username: lucy
    groups: account

vars/groups

---
usergroups:
  - groupname: sales
  - groupname: account

The playbook that uses these variable files:

---
- name: configure users with SSH keys
  hosts: ansible2
  vars_files:
    - vars/users
    - vars/groups
  tasks:
  - name: add groups
    group:
      name: "{{ item.groupname }}"
    loop: "{{ usergroups }}"
  - name: add users
    user:
      name: "{{ item.username }}"
      groups: "{{ item.groups }}"
    loop: "{{ users }}"
  - name: add SSH public keys
    authorized_key:
      user: "{{ item.username }}"
      key: "{{ lookup(‘file’, ‘files/’+ item.username + ‘/id_rsa.pub’) }}"
    loop: "{{ users }}"
  • authorized_key module is set up to work on item.username, using a loop on the users variable.

  • The id_rsa.pub files that have to be copied over are expected to exist in the files directory, which exists in the current project directory.

  • Copying over the user public keys to the project directory is a solution because the authorized_key module cannot read files from a hidden directory.

  • It would be much nicer to use key: "{{ lookup('file', '/home/' + item.username + '/.ssh/id_rsa.pub') }}", but that doesn't work.

  • In the first task you create a local user, including an SSH key.

  • Because an SSH key should include the name of the user and host that it applies to, you need to use the generate_ssh_key argument, as well as the ssh_key_comment argument to write the correct comment into the public key.

  • Without this comment, the key will have generic content and not be considered a valid key.

- name: create the local user, including SSH key
  user:
    name: "{{ username }}"
    generate_ssh_key: true
    ssh_key_comment: "{{ username }}@{{ ansible_fqdn }}"
  • After creating the SSH keys this way, you aren’t able to fetch the key directly from the user home directory.
  • To fix that problem, you create a directory with the name of the user in the project directory and copy the user public key from there by using the shell module:
- name: create a directory to store the file
  file:
    name: "{{ username }}"
    state: directory
- name: copy the local user ssh key to temporary {{ username }} key
  shell: 'cat /home/{{ username }}/.ssh/id_rsa.pub > {{ username }}/id_rsa.pub'
- name: verify that file exists
  command: ls -l {{ username }}/
  • Next, in the second play you create the remote user and use the authorized_key module to copy the key from the temporary directory to the new user home directory.

Exercise 13-2 Managing Users with SSH Keys Steps

  1. Create a user on localhost.
  2. Use the appropriate arguments to create the SSH public/private key pair according to the required format.
  3. Make sure the public key is copied to a directory where it can be accessed.
  4. Use the user module to create the user, as well as the authorized_key module to fetch the key from localhost and copy it to the .ssh/authorized_keys file in the remote user home directory.
  5. Use the command ansible-playbook exercise132.yaml -e username=radha to create the user radha with the appropriate SSH keys.
  6. To verify it has worked, use sudo su - radha on the control host, and type the command ssh ansible1. You should be able to log in without entering a password.
---
- name: prepare localhost
  hosts: localhost
  tasks:

  - name: create the local user, including SSH key
    user:
      name: "{{ username }}"
      generate_ssh_key: true
      ssh_key_comment: "{{ username }}@{{ ansible_fqdn }}"
    
  - name: create a directory to store the file
    file:
      name: "{{ username }}"
      state: directory
      
  - name: copy the local user ssh key to temporary {{ username }} key
    shell: 'cat /home/{{ username }}/.ssh/id_rsa.pub > {{ username }}/id_rsa.pub'
  - name: verify that file exists
    command: ls -l {{ username }}/

- name: setup remote host
  hosts: ansible1
  tasks:
  - name: create remote user, no need for SSH key
    user:
      name: "{{ username }}"
      
  - name: use authorized_key to install the public key
    authorized_key:
      user: "{{ username }}"
      key: "{{ lookup(‘file’, ‘./’+ username +’/id_rsa.pub’) }}"

Troubleshooting Common Scenarios

Troubleshooting Common Scenarios

Apart from the problems that may arise in playbooks, another type of error relates to connectivity issues. To connect to managed hosts, SSH must be configured correctly, and also authentication and privilege escalation must work as expected.

Analyzing Connectivity Issues

To be able to connect to a managed host, you need to have an IP network connection. Apart from that, you need to make sure that the host has been set up correctly:

• The SSH service needs to be accessible on the remote host.

• Python must be installed.

• Privilege escalation needs to be set up.

Apart from these, inventory settings may be specified to indicate how to connect to a remote host. Normally, the inventory contains a host name only. If a host resolves to multiple IP addresses, you may want to specify how exactly the remote host must be connected to. The ansible_host parameter can be configured to do so. In inventory, for instance, you may include the following line to ensure that your host is connected in the right way:

ansible5.example.com ansible_host=192.168.4.55

Notice that this setting makes sense only in an environment where a host can be reached on multiple different IP addresses.

To test connectivity to remote hosts, you can use the ping module. It checks for IP connectivity, accessibility of the SSH service, sudo privilege escalation, and the availability of a Python stack. The ping module does not take any arguments. Listing 11-18 shows the result of running the ad hoc command ansible all -m ping, where hosts that are available send "pong" as a reply, and for hosts that are not available, you see why they are not available.

Listing 11-18 Verifying Connectivity Using the ping Module

::: pre_1
[ansible@control rhce8-book]$ ansible all -m ping
ansible2 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/libexec/platform-python"
    },
    "changed": false,
    "ping": "pong"
}
ansible1 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/libexec/platform-python"
    },
    "changed": false,
    "ping": "pong"
}
ansible3 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/libexec/platform-python"
    },
    "changed": false,
    "ping": "pong"
}
ansible4 | FAILED! => {
    "msg": "Missing sudo password"
}
:::

Analyzing Authentication Issues

A few settings play a role in authentication on the remote host to execute tasks:

• The remote_user setting determines which user account to use on the managed nodes.

• SSH keys need to be configured for the remote_user to enable smooth authentication.

• The become parameter needs to be set to true.

• The become_user needs to be set to the root user account.

• Linux sudo needs to be set up correctly.
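
These settings typically come together in ansible.cfg; a sketch of a common lab configuration:

    [defaults]
    remote_user = ansible
    inventory = inventory

    [privilege_escalation]
    become = true
    become_method = sudo
    become_user = root
    become_ask_pass = false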

In Exercise 11-4 you work on troubleshooting some common scenarios.

::: box Exercise 11-4 Troubleshooting Connectivity Issues

1. Use an editor to create the file exercise114-1.yaml and give it the following contents:

---
- name: remove user from wheel group
  hosts: ansible4
  tasks:
  - user:
      name: ansible
      groups: ''

2. Run the playbook using ansible-playbook exercise114-1.yaml and use ansible ansible4 -m reboot to reboot node ansible4.

3. Once the reboot is completed, use ansible all -m ping to verify connectivity. Host ansible4 should give a “Missing sudo password” error.

4. Type ansible ansible4 -m raw -a "usermod -aG wheel ansible" -u root -k to make user ansible a member of the group wheel again.

5. Repeat the ansible all -m ping command. You should now be able to connect normally to the host ansible4 again. :::

Users and Groups

Using Ansible Modules to Manage Users and Groups

  • management of the user and group accounts and their direct properties.
  • management of sudo privilege escalation
  • Setting up SSH connections and setting user passwords

Modules

user

  • manage users and their base properties

group

  • Manage groups and their properties

pamd

  • Manage advanced authentication configuration through linux pluggable authentication modules (PAM)

known_hosts

  • manage ssh known hosts

authorized_key

  • copy authorized key to a managed host

lineinfile

  • modify config file

Managing Users and Groups

    ---
    - name: creating a user and group
      hosts: ansible2
      tasks:
      - name: setup the group account
        group:
          name: students
          state: present
      - name: setup the user account
        user:
          name: anna
          create_home: yes
          groups: wheel,students
          append: yes
          generate_ssh_key: yes
          ssh_key_bits: 2048
          ssh_key_file: .ssh/id_rsa

group argument is

  • used to specify the primary group of the user.

groups argument is

  • used to make the user a member of additional groups.

  • While using the groups argument for existing users, make sure to include the append argument as well.

  • Without append, all current secondary group assignments are overwritten.

Also notice that the user module has some options that cannot normally be managed with the Linux useradd command. The module can also be used to generate an SSH key and specify its properties.

Managing sudo

No Ansible module specifically targets managing a sudo configuration

two options:

  1. You can use the template module to create a sudo configuration file in the directory /etc/sudoers.d.
    • Using such a file is recommended because the file is managed independently, and as such, there is no risk it will be overwritten by an RPM update.
  2. The alternative is to use the lineinfile module to manage the /etc/sudoers main configuration file directly.
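
A sketch of the second option; the validate argument ensures a syntax error cannot break sudo:

    - name: allow wheel group members passwordless sudo
      lineinfile:
        path: /etc/sudoers
        regexp: '^%wheel'
        line: '%wheel ALL=(ALL) NOPASSWD: ALL'
        validate: 'visudo -cf %s'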

Users are created and added to a sudo file that is generated from a template:

    [ansible@control rhce8-book]$ cat vars/sudo
    sudo_groups:
      - name: developers
        groupid: 5000
        sudo: false
      - name: admins
        groupid: 5001
        sudo: true
      - name: dbas
        groupid: 5002
        sudo: false
      - name: sales
        groupid: 5003
        sudo: true
      - name: account
        groupid: 5004
        sudo: false
    [ansible@control rhce8-book]$ cat vars/users
    users:
      - username: linda
        groups: sales
      - username: lori
        groups: sales
      - username: lisa
        groups: account
      - username: lucy
        groups: account
  • vars/users file defines users and the groups they should be a member of.
  • vars/sudo file defines new groups and, for each of these groups, sets a sudo parameter, which will be used in the template file:
{% for item in sudo_groups %}
{% if item.sudo %}
%{{ item.name }} ALL=(ALL:ALL) NOPASSWD:ALL
{% endif %}
{% endfor %}
  • a for loop is used to walk through all items that have been defined in the sudo_groups variable in the vars/sudo file.
  • for each of these groups an if statement is used to check the value of the Boolean variable sudo. If this variable is set to the Boolean value true, the group is added as a sudo group to the /etc/sudoers.d/sudogroups file.

Listing 13-4 Managing sudo

    ---
    - name: configure sudo
      hosts: ansible2
      vars_files:
        - vars/sudo
        - vars/users
      tasks:
      - name: add groups
        group:
          name: "{{ item.name }}"
        loop: "{{ sudo_groups }}"
      - name: add users
        user:
          name: "{{ item.username }}"
          groups: "{{ item.groups }}"
        loop: "{{ users }}"
      - name: allow group members in sudo
        template:
          src: listing133.j2
          dest: /etc/sudoers.d/sudogroups
          validate: 'visudo -cf %s'
          mode: 0440

Using Ad Hoc commands in scripts

Ad hoc commands in Scripts

Follow normal bash scripting guidelines to run ansible commands in a script:

[ansible@control base]$ vim httpd-ansible.sh

Let’s set up a script that installs and starts/enables httpd, creates a user called “anna”, and copies the ansible control node’s /etc/hosts file to /tmp/ on the managed nodes:

#!/bin/bash

ansible all -m yum -a "name=httpd state=latest"
ansible all -m service -a "name=httpd state=started enabled=yes"
ansible all -m user -a "name=anna"
ansible all -m copy -a "src=/etc/hosts dest=/tmp/hosts"
[ansible@control base]$ chmod +x httpd-ansible.sh
[ansible@control base]$ ./httpd-ansible.sh 
web2 | UNREACHABLE! => {
    "changed": false,
    "msg": "Failed to connect to the host via ssh: ssh: Could not resolve hostname web2: Name or service not known",
    "unreachable": true
}
web1 | UNREACHABLE! => {
    "changed": false,
    "msg": "Failed to connect to the host via ssh: ssh: Could not resolve hostname web1: Name or service not known",
    "unreachable": true
}
ansible1 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python3"
    },
    "changed": false,
    "msg": "Nothing to do",
    "rc": 0,
    "results": []
}
ansible2 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python3"
    },
    "changed": false,
    "msg": "Nothing to do",
    "rc": 0,
    "results": []
}
... <-- Results truncated

And from the ansible1 node we can verify:

[ansible@ansible1 ~]$ cat /etc/passwd | grep anna
anna:x:1001:1001::/home/anna:/bin/bash
[ansible@ansible1 ~]$ cat /tmp/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.124.201 ansible1
192.168.124.202 ansible2

View a file from a managed node: ansible ansible1 -a "cat /somefile.txt"

Using Loops and Items

Using Loops and Items

  • Some modules enable you to provide a list that needs to be processed.
  • Many modules don’t, and in these cases, it makes sense to use a loop mechanism to iterate over a list of items.
  • Take, for instance, the yum module. While specifying the names of packages, you can use a list of packages.
  • If, however, you want to do something similar for the service module, you find out that this is not possible.
  • That is where loops come in.

Working with Loops

Install software packages using the yum module and then ensures that services installed from these packages are started using the service module:

    ---
    - name: install and start services
      hosts: ansible1
      tasks:
      - name: install packages
        yum:
          name:
          - vsftpd
          - httpd
          - samba
          state: latest
      - name: start the services
        service:
          name: "{{ item }}"
          state: started
          enabled: yes
        loop:
        - vsftpd
        - httpd
        - smb
  • A loop is defined at the same level as the service module.

  • The loop has a list of services in a list (array) statement

  • Items in the loop can be accessed by using the system internal variable item.

  • At no place in the playbook is there a definition of the variable item; the loop takes care of this.

  • When considering whether to use a loop, you should first investigate whether a module offers support for providing lists as values to the keys that are used.

  • If this is the case, just provide a list, as all items in the list can be considered in one run of the module.

  • If not, define the list using loop and provide "{{ item }}" as the value to the key.

  • When using loop, the module is activated again on each iteration.

Using Loops on Variables

  • Although it’s possible to define a loop within a task, it’s not the most elegant way.
  • To create a flexible environment where static code is separated from dynamic site-specific parameters, it’s a much better idea to define loops outside the static code, in variables.
  • When you define loops within a variable, all the normal rules for working with variables apply: The variables can be defined in the play header, using an include file, or as host/hostgroup variables.

Include the loop from a variable:

    ---
    - name: install and start services
      hosts: ansible1
      vars:
        services:
        - vsftpd
        - httpd
        - smb
      tasks:
      - name: install packages
        yum:
          name:
          - vsftpd
          - httpd
          - samba
          state: latest
      - name: start the services
        service:
          name: "{{ item }}"
          state: started
          enabled: yes
        loop: "{{ services }}"

Using Loops on Multivalued Variables

An item can be a simple list, but it can also be presented as a multivalued variable, as long as the multivalued variable is presented as a list.

Use variables that are imported from the file vars/users-list:

users:
  - username: linda
    homedir: /home/linda
    shell: /bin/bash
    groups: wheel
  - username: lisa
    homedir: /home/lisa
    shell: /bin/bash
    groups: users
  - username: anna
    homedir: /home/anna
    shell: /bin/bash
    groups: users

Use the list in a playbook:

    ---
    - name: create users using a loop from a list
      hosts: ansible1
      vars_files: vars/users-list
      tasks:
      - name: create users
        user:
          name: "{{ item.username }}"
          state: present
          groups: "{{ item.groups }}"
          shell: "{{ item.shell }}"
        loop: "{{ users }}"
  • Working with multivalued variables is possible, but the variables in that case must be presented as a list; using dictionaries is not supported.
  • The only way to loop over dictionaries is to use the dict2items filter (see the sketch after this list).
  • Use of filters is not included in the RHCE topics and for that reason is not explained further here.
  • You can look up “Iterating over a dictionary” in the Ansible documentation for more information.
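
For reference only, a minimal dict2items sketch (the users dictionary here is hypothetical):

    - name: loop over a dictionary
      debug:
        msg: "user {{ item.key }} is in group {{ item.value }}"
      vars:
        users:
          linda: sales
          lisa: account
      loop: "{{ users | dict2items }}"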

Understanding with_items

  • Since Ansible 2.5, using loop has been the common way to iterate over the values in a list.
  • In earlier versions of Ansible, the with_keyword statement was used instead.
  • In this approach, the keyword is replaced with the name of an Ansible look-up plug-in, but the rest of the syntax is very similar.
  • Will be deprecated in a future version of Ansible.

With_keyword Options Overview

with_items

  • Used like loop to iterate over values in a list

with_file

  • Used to iterate over a list of filenames on the control node

with_sequence

  • Used to generate a list of values based on a numeric sequence
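
For example, with_sequence generates values from a numeric range (a sketch):

    - name: create numbered test files
      file:
        path: /tmp/testfile{{ item }}
        state: touch
      with_sequence: start=1 end=3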

Loop over a list using with_keyword:

    ---
    - name: install and start services
      hosts: ansible1
      vars:
        services:
        - vsftpd
        - httpd
        - smb
      tasks:
      - name: install packages
        yum:
          name:
          - vsftpd
          - httpd
          - samba
          state: latest
      - name: start the services
        service:
          name: "{{ item }}"
          state: started
          enabled: yes
        with_items: "{{ services }}"

Lab: Working with loop

1. Use your editor to define a variables file with the name vars/packages and the following contents:

packages:
- name: httpd
  state: absent
- name: vsftpd
  state: installed
- name: mysql-server
  state: latest

2. Use your editor to define a playbook with the name exercise71.yaml and create the play header:

- name: manage packages using a loop from a list
  hosts: ansible1
  vars_files: vars/packages
  tasks:

3. Continue the playbook by adding the yum task that will manage the packages, using the packages variable as defined in the vars/packages variable include file:

- name: manage packages using a loop from a list
  hosts: ansible1
  vars_files: vars/packages
  tasks:
  - name: install packages
    yum:
      name: "{{ item.name }}"
      state: "{{ item.state }}"
    loop: "{{ packages }}"

4. Run the playbook using ansible-playbook exercise71.yaml, and observe the results. In the results you should see which packages it is trying to manage and in which state it is trying to get the packages.

Using Modules for Troubleshooting and Testing

Using Modules for Troubleshooting and Testing

While working with playbooks, you may use different modules for troubleshooting. The debug module was used in previous chapters and is particularly useful for analyzing variable behavior. Some other modules may prove useful when troubleshooting Ansible. Table 11-4 gives an overview.

::: group Table 11-4 Troubleshooting Modules Overview

[Table image not reproduced; the modules discussed are debug, uri, stat, and assert.]
:::

The following sections discuss how these modules can be used.

Using the Debug Module

The debug module is useful to visualize what is happening at a certain point in a playbook. It works with two arguments: the msg argument can be used to print a message, and the var argument can be used to print the value of a variable. Notice that when you use the var argument, the variable does not have to be referred to using the usual {{ varname }} structure; just use varname instead. If variables are used in the msg argument, they must be referred to in the normal way, using the {{ varname }} syntax.

Because you have already seen the debug module in action in numerous examples in Chapters 6, 7, and 8 of this book, no new examples are included here.

Using the uri Module

The best way to learn how to work with these modules is to look at some examples. Listing 11-7 shows an example where the uri module is used.

Listing 11-7 Using the uri Module

::: pre_1
---
- name: test webserver access
  hosts: localhost
  become: no
  tasks:
  - name: connect to the web server
    uri:
      url: http://ansible2.example.com
      return_content: yes
    register: this
    failed_when: "'welcome' not in this.content"
  - debug:
      var: this.content
:::

The playbook in Listing 11-7 uses the uri module to connect to a web server. The return_content argument captures the web server content, which is stored in a variable using register. Next, the failed_when statement makes this module fail if the text “welcome” is not in the registered variable. For debugging purposes, the debug module is used to show the contents of the variable.

In Listing 11-8 you can see the partial result of running this playbook. Notice that the playbook does not generate a failure because the default web page that is shown by the Apache web server contains the text “welcome.”

Listing 11-8 ansible-playbook listing117.yaml Command Result

[ansible@control rhce8-book]$ ansible-playbook listing117.yaml

PLAY [test webserver access] ***************************************************

TASK [Gathering Facts] *********************************************************
ok: [localhost]

TASK [connect to the web server] ***********************************************
ok: [localhost]

TASK [debug] *******************************************************************
ok: [localhost] => {
    "this.content": "

PLAY RECAP *********************************************************************
localhost                  : ok=3    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

Using the uri module can be useful to perform a simple test to check whether a web server is available, but you can also use it to check the accessibility of an API endpoint or the information it returns, as the sketch below shows.
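
A sketch of such an API check (the URL is hypothetical):

    - name: verify that an API endpoint returns HTTP 200
      uri:
        url: http://ansible2.example.com/api/health
        status_code: 200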

Using the stat Module

You can use the stat module to check on the status of files. Although this module can be useful for checking on the status of just a few files, it’s not a file system integrity checker that was developed to check file status on a large scale. If you need large-scale file system integrity checking, you should use Linux utilities such as aide.

The stat module is useful in combination with register. In this use, the stat module is used to register the status of a specific file, and in a when statement, a check can be done to see whether the file status is not as expected. In combination with the fail module, you can use this module to generate a failure and error message if the file does not meet the expected status. Listing 11-9 shows an example, and Listing 11-10 shows the resulting output, where you can see that the fail module fails the playbook because the file owner is not root.

Listing 11-9 Using stat to Check Expected File Status

::: pre_1
---
- name: create a file
  hosts: all
  tasks:
  - file:
      path: /tmp/statfile
      state: touch
      owner: ansible

- name: check file status
  hosts: all
  tasks:
  - stat:
      path: /tmp/statfile
    register: stat_out
  - fail:
      msg: "/tmp/statfile file owner not as expected"
    when: stat_out.stat.pw_name != 'root'

:::

Listing 11-10 ansible-playbook listing119.yaml Command Result

::: pre_1 [ansible@control rhce8-book]$ ansible-playbook listing119.yaml

PLAY [create a file] ***********************************************************

TASK [Gathering Facts] *********************************************************
ok: [ansible2]
ok: [ansible1]
ok: [ansible3]
ok: [ansible4]
fatal: [ansible6]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: ansible@ansible6: Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).", "unreachable": true}
fatal: [ansible5]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: ssh: connect to host ansible5 port 22: No route to host", "unreachable": true}

TASK [file] ********************************************************************
changed: [ansible2]
changed: [ansible1]
changed: [ansible3]
changed: [ansible4]

PLAY [check file status] *******************************************************

TASK [Gathering Facts] *********************************************************
ok: [ansible1]
ok: [ansible2]
ok: [ansible3]
ok: [ansible4]

TASK [stat] ********************************************************************
ok: [ansible2]
ok: [ansible1]
ok: [ansible3]
ok: [ansible4]

TASK [fail] ********************************************************************
fatal: [ansible2]: FAILED! => {"changed": false, "msg": "/tmp/statfile file owner not as expected"}
fatal: [ansible1]: FAILED! => {"changed": false, "msg": "/tmp/statfile file owner not as expected"}
fatal: [ansible3]: FAILED! => {"changed": false, "msg": "/tmp/statfile file owner not as expected"}
fatal: [ansible4]: FAILED! => {"changed": false, "msg": "/tmp/statfile file owner not as expected"}

PLAY RECAP *********************************************************************
ansible1                   : ok=4    changed=1    unreachable=0    failed=1    skipped=0    rescued=0    ignored=0
ansible2                   : ok=4    changed=1    unreachable=0    failed=1    skipped=0    rescued=0    ignored=0
ansible3                   : ok=4    changed=1    unreachable=0    failed=1    skipped=0    rescued=0    ignored=0
ansible4                   : ok=4    changed=1    unreachable=0    failed=1    skipped=0    rescued=0    ignored=0
ansible5                   : ok=0    changed=0    unreachable=1    failed=0    skipped=0    rescued=0    ignored=0
ansible6                   : ok=0    changed=0    unreachable=1    failed=0    skipped=0    rescued=0    ignored=0

:::

Using the assert Module

The assert module is a bit like the fail module. You can use it to perform a specific conditional action. The assert module works with a that option that defines a list of conditionals. If any one of these conditionals is false, the task fails, and if all the conditionals are true, the task is successful. Based on the success or failure of a task, the module uses the success_msg or fail_msg options to print a message. Listing 11-11 shows an example that uses the assert module.

Listing 11-11 Using the assert Module

::: pre_1
---
- hosts: localhost
  vars_prompt:
  - name: filesize
    prompt: "specify a file size in megabytes"
  tasks:
  - name: check if file size is valid
    assert:
      that:
      - "{{ (filesize | int) <= 100 }}"
      - "{{ (filesize | int) >= 1 }}"
      fail_msg: "file size must be between 0 and 100"
      success_msg: "file size is good, let's continue"
  - name: create a file
    command: dd if=/dev/zero of=/bigfile bs=1 count={{ filesize }}
:::

The example in Listing 11-11 contains a few new items. As you can see, the play header starts with a vars_prompt. This defines a variable named filesize, which is based on the input provided by the user. This filesize variable is next used by the assert module. The that statement contains a list in which two conditions are stated. If specified like this, all conditions stated in the that condition must be true. So you are looking for filesize to be equal to or bigger than 1, and smaller than or equal to 100.

Before this can be done, one little problem needs to be managed: when vars_prompt is used, the variable type is set to be a string by default. This means that a statement like

**filesize <= 100**

would fail with a type mismatch. That is why a Jinja2 filter is used to convert the variable type from string to integer.

Filters are a powerful feature provided by the Jinja2 templating language and can be used in Ansible to modify variables before processing. For more information about filters, see https://docs.ansible.com/ansible/latest/user_guide/playbooks_filters.html. The int filter can be used to convert the value of a string variable to an integer. To do this, you need to rewrite the entire variable as a Jinja2 operation, which is done using "{{ (filesize | int) <= 100 }}".

In this line, the entire string is written as a variable. The variable is further treated in a Jinja2 context. In this context, the part (filesize | int) ensures that the string is converted to an integer, which makes it possible to check if the value is smaller than 100.

When you run the code in Listing 11-11, the result shown in Listing 11-12 is produced.

Listing 11-12 ansible-playbook listing1111.yaml Output

::: pre_1 [ansible@control rhce8-book]$ ansible-playbook listing1111.yaml

PLAY [localhost] *****************************************************************

TASK [Gathering Facts] ***********************************************************
ok: [localhost]

TASK [check if file size is valid] ***********************************************
fatal: [localhost]: FAILED! => {
    "assertion": "filesize left caret= 100",
    "changed": false,
    "evaluated_to": false,
    "msg": "file size must be between 0 and 100"
}

PLAY RECAP ***********************************************************************
localhost                  : ok=1    changed=0    unreachable=0    failed=1    skipped=0    rescued=0    ignored=0

:::

As you can see, the task that is defined with the assert module fails because the variable has a value that is not between the minimum and maximum sizes that are defined.

Understanding the need for using the filter to convert the variable type might not be easy. So, let’s also look at Listing 11-13, which shows an example of a playbook that will fail. You can see its behavior in Listing 11-14, where the playbook is executed.

Listing 11-13 Failing Version of the Listing 11-11 Playbook

::: pre_1
---
- hosts: localhost
  vars_prompt:
  - name: filesize
    prompt: "specify a file size in megabytes"
  tasks:
  - name: check if file size is valid
    assert:
      that:
      - filesize <= 100
      - filesize >= 1
      fail_msg: "file size must be between 0 and 100"
      success_msg: "file size is good, let's continue"
  - name: create a file
    command: dd if=/dev/zero of=/bigfile bs=1 count={{ filesize }}
:::

Listing 11-14 ansible-playbook listing1113.yaml Failing Result

::: pre_1 [ansible@control rhce8-book]$ ansible-playbook listing1113.yaml
specify a file size in megabytes:

PLAY [localhost] *****************************************************************

TASK [Gathering Facts] ***********************************************************
ok: [localhost]

TASK [check if file size is valid] ***********************************************
fatal: [localhost]: FAILED! => {"msg": "The conditional check 'filesize <= 100' failed. The error was: Unexpected templating type error occurred on ({% if filesize <= 100 %} True {% else %} False {% endif %}): '<=' not supported between instances of 'str' and 'int'"}

PLAY RECAP ***********************************************************************
localhost                  : ok=1    changed=0    unreachable=0    failed=1    skipped=0    rescued=0    ignored=0

:::

As you can see, the code in Listing 11-13 fails because the <= test is not supported between a string and an integer.

In Exercise 11-2 you work with some of the modules discussed in this section.

::: box Exercise 11-2 Using Modules for Troubleshooting

1. Open your editor to create the file exercise112.yaml and define the play header:

---
- name: using assert to check if volume group vgdata exists
  hosts: all
  tasks:

2. Add a task that uses the command vgs vgdata to check whether a volume group with the name vgdata exists. The task should use register to store the command result, and the play should continue even if the volume group does not exist.

- name: check if vgdata exists
  command: vgs vgdata
  register: vg_result
  ignore_errors: true

3. To make it easier to use assert in the next step on the right variable, include a debug task to show the value of the variable:

- name: show vg_result variable
  debug:
    var: vg_result

4. Add a task to print a success or failure message, depending on the result of the vgs command from the first task:

- name: print a message
  assert:
    that:
    - vg_result.rc == 0
    fail_msg: volume group not found
    success_msg: volume group was found

5. Use the command ansible-playbook exercise112.yaml to run the playbook. Assuming that the LVM volume group vgdata was not found, it should print “volume group not found.”

6. Change the playbook to verify that it will print the success_msg if the requested volume group was found. You can do so by having it run the command vgs cl, which on CentOS 8 should give a positive result. :::

Using Multiple Inventories

Working with Multiple Inventory Files

  • Ansible supports working with multiple inventory files.
  • One way of using multiple inventory files is to enter multiple -i parameters with the ansible or ansible-playbook commands to specify the name of the files to be used.
  • ansible-inventory -i inventory -i listing101.py --list
    • Would produce an output list based on the static inventory in the inventory file, as well as the dynamic inventory that is generated by the listing101.py Python script.
  • You can also specify the name of a directory using the -i option.
    • Uses all files in the directory as inventory files.
    • When using an inventory directory, dynamic inventory files still must be executable for this approach to work (see the sketch below).
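A minimal sketch of both approaches, reusing the inventory file and the listing101.py script mentioned above:

# Combine a static and a dynamic inventory on the command line:
ansible-inventory -i inventory -i listing101.py --list

# Or point -i at a directory that contains both files:
mkdir inventories
cp inventory listing101.py inventories/
chmod +x inventories/listing101.py    # dynamic inventory scripts must stay executable
ansible-inventory -i inventories --list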

Lab: Using Multiple Inventories

  • Open a shell as the ansible user and create a directory with the name inventories.
  • Copy the file listing101.py to the directory inventories.
  • Also copy the inventory file to the directory inventories.
  • To make sure both inventories have some unique contents, add the following lines to the file inventories/inventory:
webserver1
webserver2
  • Add the following lines to the Linux /etc/hosts file:
192.168.4.203    ansible3.example.com    ansible3
192.168.4.204    ansible4.example.com    ansible4
  • Use the command ansible-inventory -i inventories --list.

Using RHEL System roles

Using RHEL System Roles

  • Allows for a uniform approach while managing multiple RHEL versions
  • Red Hat provides RHEL System Roles.
  • RHEL System Roles make managing different parts of the operating system easy.

RHEL System Roles:

rhel-system-roles.kdump

  • Configures the kdump crash recovery service

rhel-system-roles.network

  • Configures network interfaces

rhel-system-roles.postfix

  • Configures hosts as a Mail Transfer Agent using Postfix

rhel-system-roles.selinux

  • Manages SELinux settings

rhel-system-roles.storage

  • Configures storage

rhel-system-roles.timesync

  • Configures time synchronization

Understanding RHEL System Roles

  • RHEL System Roles are based on the community Linux System Roles
  • Provide a uniform interface to make configuration tasks easier where significant differences may exist between versions of the managed operating system.
  • RHEL System Roles can be used to manage Red Hat Enterprise Linux 6.10 and later, as well as RHEL 7.4 and later, and all versions of RHEL 8.
  • Linux System Roles are not supported by RHEL technical support.

Installing RHEL System Roles

  • To use RHEL System Roles, you need to install the rhel-system-roles package on the control node by using sudo yum install rhel-system-roles.

  • This package can be found in the RHEL 8 AppStream repository.

  • After installation, the roles are copied to the /usr/share/ansible/roles directory, a directory that is a default part of the Ansible roles_path setting.

  • If a modification to the roles_path setting has been made in ansible.cfg, the roles are applied to the first directory listed in the roles_path.

  • With the roles, some very useful documentation is installed also; you can find it in the /usr/share/doc/rhel-system-roles directory.

  • To pass configuration to the RHEL System Roles, variables are important.

  • In the documentation directory, you can find information about variables that are required and used by the role.

  • Some roles also contain a sample playbook that can be used as a blueprint when defining your own role.

  • It’s a good idea to use these as the basis for your own RHEL System Roles–based configuration.

  • The next two sections describe the SELinux and the TimeSync System Roles, which provide nice and easy-to-implement examples of how you can use the RHEL System Roles.

Using the RHEL SELinux System Role

  • You learned earlier how to manage SELinux settings using task definitions in your own playbooks.

  • Using the RHEL SELinux System Role provides an easy-to-use alternative.

  • To use this role, start by looking at the documentation, which is in the /usr/share/doc/rhel-system-roles/selinux directory.

  • A good file to start with is the README.md file, which provides lists of all the ingredients that can be used.

  • The SELinux System Role also comes with a sample playbook file.

  • The most important part of this file is the vars: section, which defines the variables that should be applied by SELinux.

Variable Definition in the SELinux System Role:

    ---
    - hosts: all
      become: true
      become_method: sudo
      become_user: root
      vars:
        selinux_policy: targeted
        selinux_state: enforcing
        selinux_booleans:
          - { name: 'samba_enable_home_dirs', state: 'on' }
          - { name: 'ssh_sysadm_login', state: 'on', persistent: 'yes' }
        selinux_fcontexts:
          - { target: '/tmp/test_dir(/.*)?', setype: 'user_home_dir_t', ftype: 'd' }
        selinux_restore_dirs:
          - /tmp/test_dir
        selinux_ports:
          - { ports: '22100', proto: 'tcp', setype: 'ssh_port_t', state: 'present' }
        selinux_logins:
          - { login: 'sar-user', seuser: 'staff_u', serange: 's0-s0:c0.c1023', state: 'present' }

SELinux Variables Overview

selinux_policy

  • Policy to use, usually set to targeted

selinux_state

  • SELinux state, as managed with setenforce

selinux_booleans

  • List of Booleans that need to be set

selinux_fcontexts

  • List of file contexts that need to be set, including the target file or directory to which they should be applied

selinux_restore_dirs

  • List of directories on which the Linux restorecon command needs to be executed to apply the new context

selinux_ports

  • List of ports and SELinux port types

selinux_logins

  • List of SELinux users and roles that can be created

  • Most of the time while configuring SELinux, you need to apply the correct state as well as file context.

  • To set the appropriate file context, you first need to define the selinux_fcontext variable.

  • Next, you also have to define selinux_restore_dirs to ensure that the desired context is applied correctly.

Lab: Set the httpd_sys_content_t context type on the /web directory.

  • The sample playbook from the documentation is used; unnecessary lines have been removed, and the values of two variables have been set.
  • When you use the RHEL SELinux System Role, some changes require the managed host to be rebooted.
  • To take care of this, a block structure is used, where the System Role runs in the block.
  • When a change that requires a reboot is applied, the SELinux System Role sets the variable selinux_reboot_required and fails.
  • As a result, the rescue section in the playbook is executed.
  • This rescue section first checks whether the role failed because the selinux_reboot_required variable was set to true; if it failed for any other reason, the playbook fails with an error message.
  • If that is the case, the reboot module is called to reboot the managed host.
  • While rebooting, playbook execution waits for the rebooted host to reappear, and when that happens, the RHEL SELinux System Role is called again to complete its work.
---
- hosts: ansible2
  vars:
    selinux_policy: targeted
    selinux_state: enforcing
    selinux_fcontexts:
      - { target: '/web(/.*)?', setype: 'httpd_sys_content_t', ftype: 'd' }
    selinux_restore_dirs:
      - /web

# prepare prerequisites which are used in this playbook
  tasks:
    - name: create the /web directory
      file:
        path: /web
        state: directory

    - name: execute the role and catch errors
      block:
        - include_role:
            name: rhel-system-roles.selinux
      rescue:
        # Fail if failed for a different reason than selinux_reboot_required.
        - name: handle errors
          fail:
            msg: "role failed"
          when: not selinux_reboot_required

        - name: restart managed host
          shell: sleep 2 && shutdown -r now "Ansible updates triggered"
          async: 1
          poll: 0
          ignore_errors: true

        - name: wait for managed host to come back
          wait_for_connection:
            delay: 10
            timeout: 300

        - name: reapply the role
          include_role:
            name: rhel-system-roles.selinux

Using the RHEL TimeSync System Role

timesync_ntp_servers variable

  • most important setting

  • specifies attributes to indicate which time servers should be used.

  • The hostname attribute identifies the name or IP address of the time server.

  • The iburst attribute enables or disables fast initial time synchronization for that server.

  • The System Role finds out which version of RHEL is used and, according to the currently used version, configures either the ntpd or the chronyd service.

Lab: Using an RHEL System Role to Manage Time Synchronization

1. Copy the sample timesync playbook to the current directory: cp /usr/share/doc/rhel-system-roles/timesync/example-single-pool-playbook.yml timesync.yaml

2. Add the target host and the NTP server name pool.ntp.org, and remove the pool: true setting from the file timesync.yaml:

---
- name: Configure NTP
  hosts: "{{ host }}"
  vars:
    timesync_ntp_servers:
      - hostname: pool.ntp.org
        iburst: true
  roles:
    - rhel-system-roles.timesync

3. Add the timezone module and the timezone variable to the playbook to set the timezone as well. The complete playbook should look like the following:

---
- hosts: ansible2
  vars:
    timesync_ntp_servers:
    - hostname: pool.ntp.org
      iburst: yes
    timezone: UTC
  roles:
  - rhel-system-roles.timesync
  tasks:
  - name: set timezone
    timezone:
      name: "{{ timezone }}"

4. Use ansible-playbook timesync.yaml to run the playbook. Observe its output. Notice that some messages in red are shown, but these can safely be ignored.

5. Use ansible ansible2 -a "timedatectl show" and notice that the timezone variable is set to UTC.

Using Tags

Using Tags

When you are using larger playbooks, Ansible enables you to use the tags attribute. A tag is a label that is applied to a task or another item like a block or a play, and while using the ansible-playbook --tags or ansible-playbook --skip-tags command, you can specify which tags need to be executed. Listing 11-15 shows a simple playbook example where tags are used, and in Listing 11-16 you can see the output generated while running this playbook.

Listing 11-15 Using tags in a Playbook

::: pre_1
---
- name: using tags example
  hosts: all
  vars:
    services:
    - vsftpd
    - httpd
  tasks:
  - yum:
      name:
      - httpd
      - vsftpd
      state: present
    tags:
    - install
  - service:
      name: "{{ item }}"
      state: started
      enabled: yes
    loop: "{{ services }}"
    tags:
    - services
:::

Listing 11-16 ansible-playbook --tags "install" listing1115.yaml Output

::: pre_1 [ansible@control rhce8-book]$ ansible-playbook --tags "install" listing1115.yaml

PLAY [using tags example] ******************************************************

TASK [Gathering Facts] *********************************************************
ok: [ansible2]
ok: [ansible1]
ok: [ansible4]
ok: [ansible3]

TASK [yum] *********************************************************************
ok: [ansible2]
ok: [ansible1]
changed: [ansible3]
changed: [ansible4]

PLAY RECAP *********************************************************************
ansible1                   : ok=2    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
ansible2                   : ok=2    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
ansible3                   : ok=2    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
ansible4                   : ok=2    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

:::

Tags can be applied to many structures, such as imported plays, tasks, and roles, but the easiest way to get familiar with tags is to use them on a task. Note that tags cannot be applied to items that are dynamically included (instead of imported) using include_role or include_tasks.
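For example, a tag can also be set on a block, in which case it applies to all tasks in that block; a minimal sketch (task content chosen for illustration):

    - name: set up web services
      block:
      - yum:
          name: httpd
          state: present
      - service:
          name: httpd
          state: started
      tags:
      - web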

While writing playbooks, you may apply the same tag multiple times. This capability allows you to define groups of tasks, where multiple tasks are configured with the same tag, and as a result, you can easily run a specific part of the requested configuration. When multiple tasks with multiple tags are used, you can get an overview of them using the ansible-playbook --list-tasks --list-tags command. In Listing 11-17 you can see an example that is based on the playbook listing1115.yaml.

Listing 11-17 Listing Tasks and Tags

::: pre_1 [ansible@control rhce8-book]$ ansible-playbook --list-tags --list-tasks listing1115.yaml

playbook: listing1115.yaml

  play #1 (all): using tags example.    TAGS: []
    tasks:
      yum.       TAGS: [install]
      service.   TAGS: [services]
      TASK TAGS: [install, services]

:::

When working with tags, you can use some special tags. Table 11-5 gives an overview.

Table 11-5 Special Tags Overview

always

  • A task tagged always runs every time, unless specifically skipped with --skip-tags always

never

  • A task tagged never does not run, unless its tag is specifically requested

all

  • Runs all tasks; this is the default behavior

tagged

  • Runs only tasks that have a tag set

untagged

  • Runs only tasks that do not have a tag set

Apart from these special tags, you might also want to set a debug tag to easily identify tasks that should run only if you specifically want to run debug tasks as well. If combined with the never tag, a task that is tagged never,debug runs only if the debug tag is specifically requested. So in case you want to run the entire playbook, including tasks that have been tagged with debug, you need to use the ansible-playbook --tags all,debug command. In Exercise 11-3 you can see how this can be used to optimize the playbook that was previously used in Exercise 11-2.

::: box Exercise 11-3 Using Tags to Make Debugging Easier

1. Rewrite the exercise112.yaml playbook that you created in the previous exercise, save it as exercise113.yaml, and include the line tags: [ never, debug ] in the debug task. The complete playbook looks as follows:

---
- name: using assert to check if volume group vgdata exists
  hosts: all
  tasks:
  - name: check if vgdata exists
    command: vgs vgdata
    register: vg_result
    ignore_errors: true
  - name: show vg_result variable
    debug:
      var: vg_result
    tags: [ never, debug ]
  - name: print a message
    assert:
      that:
      - vg_result.rc == 0
      fail_msg: volume group not found
      success_msg: volume group was found

2. Run the playbook using ansible-playbook --tags all exercise113.yaml. Notice that it does not run the debug task.

3. Run the playbook using ansible-playbook --tags all,debug exercise113.yaml. Notice that it now does run the debug task as well. :::

Using when to Run Tasks Conditionally

Using when to Run Tasks Conditionally

  • Use a when statement to run tasks conditionally.
  • you can test whether:
    • a variable has a specific value
    • a file exists
    • a minimal amount of memory is available
    • etc.

Working with when

Install the right software package for the Apache web server, based on the Linux distribution that was found in the Ansible facts. Notice that:

  • when a variable is evaluated in a when statement, it is not placed between double curly braces.
    ---
    - name: conditional install
      hosts: all
      tasks:
      - name: install apache on Red Hat and family
        yum:
          name: httpd
          state: latest
        when: ansible_facts['os_family'] == "RedHat"
      - name: install apache on Ubuntu and family
        apt:
          name: apache2
          state: latest
        when: ansible_facts['os_family'] == "Debian"
  • The when statement is not part of the properties of the module on which it is used.

  • It must be indented at the same level as the module itself.

  • For a string test, the string itself must be between double quotes.

  • Without the double quotes, it would be considered an integer test.

Using Conditional Test Statements

Common conditional tests that you can perform with the when statement:

Variable exists

  • variable is defined

Variable does not exist

  • variable is not defined

First variable is present in the list mentioned as second

  • ansible_distribution in distributions

Variable is true, 1, or yes

  • variable

Variable is false, 0, or no

  • not variable

Equal (string)

  • key == "value"

Equal (numeric)

  • key == value

Less than

  • key < value

Less than or equal to

  • key <= value

Greater than

  • key > value

Greater than or equal to

  • key >= value

Not equal to

  • key != value

  • Look for “Tests” in the Ansible documentation, and use the item that is found in Templating (Jinja2).

  • When referring to variables in when statements, you don’t have to use curly brackets because items in a when statement are considered to be variables by default.

  • So you can write when: text == "hello" instead of when: "{{ text }}" == "hello".

There are roughly four types of when conditional tests:

  • Checks related to variable existence
  • Boolean checks
  • String comparisons
  • Integer comparisons

The first type of test checks whether a variable exists or is a part of another variable, such as a list.

Checks for the existence of a specific disk device, using variable is defined and variable is not defined. All failing tests result in the message “skipping.”

    ---
    - name: check for existence of devices
      hosts: all
      tasks:
      - name: check if /dev/sda exists
        debug:
          msg: a disk device /dev/sda exists
        when: ansible_facts['devices']['sda'] is defined
      - name: check if /dev/sdb exists
        debug:
          msg: a disk device /dev/sdb exists
        when: ansible_facts['devices']['sdb'] is defined
      - name: dummy test, intended to fail
        debug:
          msg: failing
        when: dummy is defined
      - name: check if /dev/sdc does not exist
        debug:
          msg: there is no /dev/sdc device
        when: ansible_facts['devices']['sdc'] is not defined

Lab: Check whether the first variable's value is present in the second variable's list.

  • executes the debug task if the variable my_answer is in supported_packages.
  • vars_prompt is used. This stops the playbook, asks the user for input, and stores the input in a variable with the name my_answer.
    ---
    - name: test if variable is in another variables list
      hosts: all
      vars_prompt:
      - name: my_answer
        prompt: which package do you want to install
      vars:
        supported_packages:
        - httpd
        - nginx
      tasks:
      - name: something
        debug:
          msg: you are trying to install a supported package
        when: my_answer in supported_packages

Boolean check

  • Works on variables that have a Boolean value (not very common).
  • Should not be confused with the check for existence, which is used to check whether a variable is defined. A minimal example follows.
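A minimal sketch of a Boolean check (the install_ftp variable is hypothetical):

    ---
    - name: boolean check example
      hosts: all
      vars:
        install_ftp: true
      tasks:
      - name: install vsftpd only if install_ftp is true
        package:
          name: vsftpd
          state: present
        when: install_ftp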

String comparisons and integer comparisons

  • E.g.: check if more than 1 GB of disk space is available.
  • When doing checks on available disk space and available memory, carefully look at the expected value.
  • Memory is shown in megabytes, by default, whereas disk space is expressed in bytes.

Lab: integer check, install vsftpd if more than 50 MB of memory is available.

    ---
    - name: conditionals test
      hosts: all
      tasks:
      - name: install vsftpd if sufficient memory available
        package:
          name: vsftpd
          state: latest
        when: ansible_facts['memory_mb']['real']['free'] > 50

Testing Multiple Conditions

  • when statements can also be used to evaluate multiple conditions.
  • To do so, you can group the conditions with parentheses and combine them with and and or keywords.
  • and: runs the task only if both conditions are true
  • or: runs the task if at least one of the conditions is true

Lab: and is used, so the task runs only if both conditions are true.

    ---
    - name: testing multiple conditions
      hosts: all
      tasks:
      - name: showing output
        debug:
          msg: using CentOS 8.1
        when: ansible_facts['distribution_version'] == "8.1" and ansible_facts['distribution'] == "CentOS"
  • You can make more complex statements by grouping conditions together in parentheses.
  • In the example below, the when statement starts with a > sign to wrap the statement over the following lines for readability.

Lab: Combining complex statements

    ---
    - name: using multiple conditions
      hosts: all
      tasks:
      - package:
          name: httpd
          state: removed
        when: >
          ( ansible_facts['distribution'] == "RedHat" and
            ansible_facts['memfree_mb'] < 512 )
          or
          ( ansible_facts['distribution'] == "CentOS" and
            ansible_facts['memfree_mb'] < 256 )

Combining loop and when

Lab: Combining loop and when. Perform a kernel update only if /boot is on a dedicated mount point and at least 200 MB is available in the mount.

    ---
    - name: conditionals test
      hosts: all
      tasks:
      - name: update the kernel if sufficient space is available in /boot
        package:
          name: kernel
          state: latest
        loop: "{{ ansible_facts[’mounts’] }}"
        when: item.mount == "/boot" and item.size_available > 200000000

Combining loop and register

Lab: Combining register and loop

    ---
    - name: test register
      hosts: all
      tasks:
        - shell: cat /etc/passwd
          register: passwd_contents
        - debug:
            msg: passwd contains user lisa
          when: passwd_contents.stdout.find('lisa') != -1

passwd_contents.stdout.find

  • passwd_contents.stdout does not contain any item with the name find.
  • The construction used here is variable.find, which enables a task to search for a specific string in a variable (the find function in Python is used).
  • When the Python find function does not find a string, it returns a value of -1.
  • If the requested string is found, the find function returns an integer that indicates the position where the string was found.
  • For instance, if the string lisa is found in /etc/passwd, it returns a value like 2604, which is the position in the file, expressed as a byte offset from the beginning, where the string is found for the first time.
  • Because of this behavior, the test should verify that variable.find is not equal to -1. So don't write passwd_contents.stdout.find('lisa') = 0 (the return value is not a Boolean), but instead write passwd_contents.stdout.find('lisa') != -1.

Lab: Practice working with conditionals using register.

  • When using register, you might want to define a task that runs a command that will fail, just to capture the return code of that command, after which the playbook should continue. If that is the case, you must ensure that ignore_errors: yes is used in the task definition.

1. Use your editor to create a new file with the name exercise72.yaml. Start writing the play header as follows:

---
- name: restart sshd service if httpd is running
  hosts: ansible1
  tasks:

2. Add the first task, which checks whether the httpd service is running, using command output that will be registered. Notice the use of ignore_errors: yes. This line makes sure that if the service is not running, the play is still executed further.

---
- name: restart sshd service if httpd is running
  hosts: ansible1
  tasks:
  - name: get httpd service status
    command: systemctl is-active httpd
    ignore_errors: yes
    register: result

3. Add a debug task that shows the output of the command so that you can analyze what is currently in the registered variable:

---
- name: restart sshd service if httpd is running
  hosts: ansible1
  tasks:
  - name: get httpd service status
    command: systemctl is-active httpd
    ignore_errors: yes
    register: result
  - name: show result variable contents
    debug:
      msg: printing contents of the registered variable {{ result }}

4. Complete the playbook by including the service task, which is started only if the value stored in result.rc (which is the return code of the command that was registered) contains a 0. This is the case if the previous command executed successfully.

---
- name: restart sshd service if httpd is running
  hosts: ansible1
  tasks:
  - name: get httpd service status
    command: systemctl is-active httpd
    ignore_errors: yes
    register: result
  - name: show result variable contents
    debug:
      msg: printing contents of the registered variable {{ result }}
  - name: restart sshd service
    service:
      name: sshd
      state: restarted
    when: result.rc == 0

5. Use an ad hoc command to make sure the httpd service is installed: ansible ansible1 -m yum -a "name=httpd state=latest".

6. Use an ad hoc command to make sure the httpd service is stopped: ansible ansible1 -m service -a "name=httpd state=stopped".

7. Run the playbook using ansible-playbook exercise72.yaml and analyze the result. You should see that the playbook skips the service task.

8. Type ansible ansible1 -m service -a "name=httpd state=started" and run the playbook again, using ansible-playbook exercise72.yaml. Playbook execution at this point should be successful.

Variables

Using and working with variables

  • Capture command output using register

Variables

Three types of variables:

  • Fact
  • Variable
  • Magic Variable

Variables make Ansible really flexible, especially when used in combination with conditionals. They are defined at the discretion of the user:

---
- name: create a user using a variable
  hosts: ansible1
  vars:
    users: lisa <-- default value for this play
  tasks:
    - name: create a user {{ users }} on host {{ ansible_hostname }} <-- ansible fact variable
      user:
        name: "{{ users }}" <-- If value starts with variable, the whole line must have double quotes

Working with Variables

  • Variables can be used to refer to a wide range of dynamic data, such as names of files, services, packages, users, URLs to specific servers, etc.

Defining Variables

To define a variable

  • key: value structure in a vars section in the play header.
    ---
    - name: using variables
      hosts: ansible1
      vars: <-------------
        ftp_package: vsftpd <------------
      tasks:
      - name: install package
        yum:
          name: "{{ ftp_package }}" <------------
          state: latest
  • As the variable is the first item in the value, its name must be placed between double curly brackets as well as double quotes.

Variable requirements:

  • Must start with a letter.
  • Case sensitive.
  • Can contain only letters, numbers, and underscores.
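For example (hypothetical names, shown as YAML key-value pairs):

    web_package: httpd      # valid
    Web_Package: httpd      # valid, but distinct from web_package (case sensitive)
    web_package_2: httpd    # valid: letters, numbers, and underscores
    2web_package: httpd     # invalid: must start with a letter
    web-package: httpd      # invalid: hyphens are not allowed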

Using Include Files

  • It is common to define variables in include files; specific host and host group variable files are an example of this approach.
  • it’s also possible to include an arbitrary file as a variable file, using the vars_files: statement.
  • The vars_files: parameter can have a single value or a list providing multiple values. If a list is used, each item needs to start with a dash (see the sketch below).
  • When you include variables from files, it’s a good idea to work with a separate directory that contains all variables because that makes it easier to manage as your projects grow bigger.
    ---
    - name: using a variable include file
      hosts: ansible1
      vars_files: vars/common <--------------
      tasks:
      - name: install package
        yum:
          name: "{{ my_package }}" <------------
          state: latest

vars/common

    my_package: nmap
    my_ftp_service: vsftpd
    my_file_service: smb
  • If variables are defined in individual playbooks, they are spread all over, and it may be difficult to get an overview of all variables that are used on a site.
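As mentioned above, vars_files can also take a list; a minimal sketch (the vars/web file name is hypothetical):

    ---
    - name: using multiple variable include files
      hosts: ansible1
      vars_files:
      - vars/common
      - vars/web          # each list item starts with a dash
      tasks:
      - name: install package
        yum:
          name: "{{ my_package }}"
          state: latest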

Managing Host and Group Variables

host_vars and group_vars

  • set variables for specific hosts or specific host groups.
  • In older versions of Ansible, it was common to set host variables and group variables in inventory, but this practice is now deprecated.

host_vars

  • Must create a subdirectory with the name host_vars within the Ansible project directory.
  • In this directory, create a file that matches the inventory name of the host to which the variables should be applied.
  • So the variables for host ansible1 are defined in host_vars/ansible1.

group_vars

  • Must create a directory with the name group_vars.
  • In this directory, a file with the name of the host group is created, and in this file all variables are defined.
  • ie: group_vars/webservers

If no variable is defined on the command line, the value set in the play is used. You can also define variables with the -e flag when running the playbook:

[ansible@control base]$ ansible-playbook variable-pb.yaml -e users=john

PLAY [create a user using a variable] ************************************************************************************************************************

TASK [Gathering Facts] ***************************************************************************************************************************************
ok: [ansible1]

TASK [create a user john on host ansible1] *******************************************************************************************************************
changed: [ansible1]

PLAY RECAP ***************************************************************************************************************************************************
ansible1                   : ok=2    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   

LAB: Using Host and Host Group Variables

1. Create a project directory in your home directory. Type mkdir ~/chapter6 to create the chapter6 project directory, and use cd ~/chapter6 to go into this directory.

2. Type cp ../ansible.cfg . to copy the ansible.cfg file that you used before. No further modifications to this file are required.

3. Type vim inventory to create a file with the name inventory, and ensure it has the following contents:

[webservers]
ansible1

[dbservers]
ansible2

4. Create the file webservers.yaml, containing the following contents. Notice that nothing is really changed by running this playbook. It just uses the debug module to show the current value of the variables.

---
- name: configure web services
  hosts: webservers
  tasks:
  - name: this is the {{ web_package }} package
    debug:
      msg: "Installing {{ web_package }}"
  - name: this is the {{ web_service }} service
    debug:
      msg: "Starting the {{ web_service }}"

5. Create the file group_vars/webservers with the following contents:

web_package: httpd
web_service: httpd

6. Run the playbook with some verbosity to verify it is working by using ansible-playbook -vv webservers.yaml

Using Multivalued Variables

Two types of multivalued variables:

array (list)

  • key that can have multiple items as its value.
  • Each item in a list starts with a dash (-).
  • Individual items in a list can be addressed using the index number (starting at zero), as in {{ users[1] }} (which would print the key-value pairs that are set for user lisa)
    users:
      - linda:
        username: linda
        homedir: /home/linda
        shell: /bin/bash
      - lisa:
        username: lisa
        homedir: /home/lisa
        shell: /bin/bash
      - anna:
        username: anna
        homedir: /home/anna
        shell: /bin/bash
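To address one item from such a list, a task might look like this (assuming the users array above has been loaded, for instance with vars_files):

    - name: print the second item in the list (index 1)
      debug:
        msg: "{{ users[1] }}"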

dictionary (hash)

  • Unordered collection of items, a collection of key-value pairs.
  • In Python, a dictionary is defined as my_dict = { key1: 'car', key2: 'bike' }.
  • Because it is based on Python, Ansible lets users use dictionaries as an alternative notation to arrays
  • not as common in use as arrays.
  • Items in a dictionary do not start with a dash.
    users:
      linda:
        username: linda
        homedir: /home/linda
        shell: /bin/bash
      lisa:
        username: lisa
        homedir: /home/lisa
        shell: /bin/bash
      anna:
        username: anna
        homedir: /home/anna
        shell: /bin/bash

Addressing Specific Keys in a Dictionary Multivalued Variable:

    ---
    - name: show dictionary also known as hash
      hosts: ansible1
      vars_files:
      - vars/users-dictionary
      tasks:
      - name: print dictionary values
        debug:
          msg: "User {{ users.linda.username }} has homedirectory {{ users.linda.homedir }} and shell {{ users.linda.shell }}"

Using the Square Brackets Notation to Address Multivalued Variables (recommended method)

    ---
    - name: show dictionary also known as hash
      hosts: ansible1
      vars_files:
        - vars/users-dictionary
      tasks:
        - name: print dictionary values
          debug:
            msg: "User {{ users[’linda’][’username’] }} has homedirectory {{ users[’linda’][’homedir’] }} and shell {{ users[’linda’][’shell’]  }}"

Magic Variables

  • Variables that are set automatically by Ansible to reflect an Ansible internal state.
  • There are about 30 magic variables
  • Common magic variables include hostvars (variables defined for any host in inventory, including Ansible facts), groups (all hosts in inventory, sorted by group), group_names (the groups the current host is a member of), and inventory_hostname (the name of the current host as known in inventory).

  • you cannot use their name for anything else.
  • If you try to set a magic variable to another value anyway, it always resets to the default internal value.

The debug module can be used to show the current values assigned to the hostvars magic variable.

  • Shows many settings that you can change by modifying the ansible.cfg configuration file.
  • If local facts are defined on the host, you will see them also.
    [ansible@control ~]$ ansible localhost -m debug -a 'var=hostvars["ansible1"]'
    localhost | SUCCESS => {
        "hostvars[\"ansible1\"]": {
            "ansible_check_mode": false,
            "ansible_diff_mode": false,
            "ansible_facts": {},
            "ansible_forks": 5,
            "ansible_inventory_sources": [
                "/home/ansible/inventory"
            ],
            "ansible_playbook_python": "/usr/bin/python3.6",
            "ansible_verbosity": 0,
            "ansible_version": {
                "full": "2.9.5",
                "major": 2,
                "minor": 9,
                "revision": 5,
                "string": "2.9.5"
            },
            "group_names": [
                "ungrouped"
            ],
            "groups": {
                "all": [
                    "ansible1",
                    "ansible2"
                ],
                "ungrouped": [
                    "ansible1",
                    "ansible2"
                ]
            },
            "inventory_dir": "/home/ansible",
            "inventory_file": "/home/ansible/inventory",
            "inventory_hostname": "ansible1",
            "inventory_hostname_short": "ansible1",
            "omit": "__omit_place_holder__38849508966537e44da5c665d4a784c3bc0060de",
            "playbook_dir": "/home/ansible"
        }
    }

Variable Precedence

  • Avoid using variables with the same names that are defined at different levels.
  • If a variable with the same name is defined at different levels, the most specific variable always wins.
  • Variables that are defined while running the playbook command using the -e key=value command-line argument have the highest precedence.
  • After variables that are passed as command-line options, playbook variables are considered.
  • Next are variables that are defined for inventory hosts or host groups.
  • Consult the Ansible documentation item “Variable precedence” for more details and an overview of the 22 different levels where variables can be set and how precedence works for them.

1. Variables passed on the command line
2. Variables defined in or included from a playbook
3. Inventory variables
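A short illustration, assuming my_var is set both in inventory and in the play header of variable-pb.yaml:

    ansible-playbook variable-pb.yaml                  # play value overrides the inventory value
    ansible-playbook variable-pb.yaml -e my_var=test   # -e overrides everything else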

Capturing Command Output Using register

The result of commands can also be used as a variable by using the register parameter in a task.

    ---
    - name: test register
      hosts: ansible1
      tasks:
      - shell: cat /etc/passwd
        register: passwd_contents
      - debug:
          var: "passwd_contents"

The cat /etc/passwd command is executed by the shell module. Notice that in this playbook no names are used for tasks. Using names for tasks is not mandatory; it’s just recommended in more complex playbooks because this convention makes identification of the tasks easier. The entire contents of the command are next stored in the variable passwd_contents.

This variable contains the output of the command, stored in different keys. Table 6-7 provides an overview of the most useful keys, and Listing 6-19 shows the partial result of the ansible-playbook listing618.yaml command.

Keys Used with register

cmd

  • Command that was used

rc

  • Return code of the command

stderr

  • Error messages

stderr_lines

  • Errors line by line

stdout

  • Command output

stdout_lines

  • Command output line by line
    [ansible@control ~]$ ansible-playbook listing618.yaml
    
    PLAY [test register] *******************************************************************
    
    TASK [Gathering Facts] *****************************************************************
    ok: [ansible2]
    ok: [ansible1]
    
    TASK [shell] ***************************************************************************
    changed: [ansible2]
    changed: [ansible1]
    
    TASK [debug] ***************************************************************************
    ok: [ansible1] => {
        "passwd_contents": {
            "changed": true,
            "cmd": "cat /etc/passwd",
            "delta": "0:00:00.004149",
            "end": "2020-04-02 02:28:10.692306",
            "failed": false,
            "rc": 0,
            "start": "2020-04-02 02:28:10.688157",
            "stderr": "",
            "stderr_lines": [],
            "stdout": "root:x:0:0:root:/root:/bin/bash\nbin:x:1:1:bin:/bin:/sbin/nologin\ndaemon:x:2:2:daemon:/sbin:/sbin/nologin\nadm:x:3:4:adm:/var/adm:/sbin/nologin\nlp:x:4:7:lp:/var/spool/lpd:/sbin/nologin\nsync:x:5:0:sync:/sbin:/bin/sync\nshutdown:x:6:0:shutdown:/sbin:/sbin/shutdown\nhalt:x:7:0:halt:/sbin:/sbin/halt\nansible:x:1000:1000:ansible:/home/ansible:/bin/bash\napache:x:48:48:Apache:/usr/share/httpd:/sbin/nologin\nlinda:x:1002:1002::/home/linda:/bin/bash\nlisa:x:1003:1003::/home/lisa:/bin/bash",
            "stdout_lines": [
                "root:x:0:0:root:/root:/bin/bash",
                "bin:x:1:1:bin:/bin:/sbin/nologin",
                "daemon:x:2:2:daemon:/sbin:/sbin/nologin",
                "adm:x:3:4:adm:/var/adm:/sbin/nologin",
                "lp:x:4:7:lp:/var/spool/lpd:/sbin/nologin",
                "sync:x:5:0:sync:/sbin:/bin/sync",
                "shutdown:x:6:0:shutdown:/sbin:/sbin/shutdown",
                "halt:x:7:0:halt:/sbin:/sbin/halt",
                "ansible:x:1000:1000:ansible:/home/ansible:/bin/bash",
                "apache:x:48:48:Apache:/usr/share/httpd:/sbin/nologin",
                "linda:x:1002:1002::/home/linda:/bin/bash",
                "lisa:x:1003:1003::/home/lisa:/bin/bash"
            ]
        }
    }

Ensure that a task runs only if a command produces a specific result by using register with conditionals.

register shows the values that are returned by specific tasks. Tasks have common return values, but modules may have specific return values. That means you cannot assume, based on the result of an example using a specific module, that the return values you see are available for all modules. Consult the module documentation for more information about specific return values.
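To check which return values a specific module documents, you can use ansible-doc; for modules that document their return values, the output includes a RETURN VALUES section:

    ansible-doc shell    # look for keys such as stdout, stderr, and rc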

Subsections of Bash

Bash

My name is Marcel and I’m partially a shell..

A shell is a program that takes commands and passes them to the operating system.^1^ This is done via a terminal emulator with keyboard commands or by using scripts run on the system. There are many shell programs that you can use on Linux. Almost all Linux distributions come with a shell called Bash. Others include zsh, fish, ksh, and tcsh, among many more.

Shells have different features such as built-in commands, job control, alias definitions, history substitution, PATH searching, command completion, and more. Each shell has its own syntax, hotkeys, and way of doing things. Most of them follow a standard called “POSIX” that helps with script portability between shells.

You can see a list of more shells and a comparison of their features on this Wikipedia page.

Terminator emulators..

I meant Terminal Emulators! Silly me..

The Current shell

  • Where a program is executed.

Sub-shell (child shell)

  • created within a shell to run a program.

There are two types of variables. Local variables are private variables to the shell that creates it. And they are only used by programs started in the shell that created them. Environment variables are passed to any sub-shells created by the current shell. As well as any programs ran in the current and sub shells.

- Value stored in an environment variable is accessible to the program, as well as any sub-programs that it spawns during its lifecycle. 
- Any environment variable set in a sub-shell is lost when the sub-shell terminates.
- `env` or the `printenv` command to view predefined environment variables.
- Common predefined environment variables:
	- **DISPLAY** 
		- Stores the hostname or IP address for graphical terminal sessions 
	- **HISTFILE** 
		- Defines the file for storing the history of executed commands 
	- **HISTSIZE** 
		- Defines the maximum size for the HISTFILE 
	- **HOME** 
		- Sets the home directory path 
	- **LOGNAME** 
		- Retains the login name 
	- **MAIL** 
		- Contains the path to the user mail directory
	- **PATH** 
		- Directories to be searched when executing a command. Eliminates the need to specify the absolute path of a command to run it. 
	- **PPID** 
		- Holds the identifier number for the parent program 
	- **PS1** 
		- Defines the primary command prompt 
	- **PS2** 
		- Defines the secondary command prompt 
	- **PWD** 
		- Stores the current directory location 
	- **SHELL** 
		- Holds the absolute path to the primary shell file
	- **TERM** 
		- Holds the terminal type value 
	- **UID** 
		- Holds the logged-in user’s UID 
	- **USER** 
		- Retains the name of the logged-in user

Setting and unsetting variables

  • export, unset, and echo to define and undefine environment variables
  • Use uppercase for variables
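A quick sketch:

MYVAR="hello"     # local variable, private to this shell
export MYVAR      # now an environment variable, passed to sub-shells
echo $MYVAR       # view its value
unset MYVAR       # undefine it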

echo command

  • Restricted to showing the value of a specific variable

env command

  • Displays the environment variables only.
[root@localhost ~]# env
SHELL=/bin/bash
HISTCONTROL=ignoredups
HISTSIZE=1000
HOSTNAME=localhost
PWD=/root
LOGNAME=root
XDG_SESSION_TYPE=tty
MOTD_SHOWN=pam
HOME=/root
LANG=en_US.UTF-8
LS_COLORS=rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:mi=01;37;41:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arc=01;31:*.arj=01;31:*.taz=01;31:*.lha=01;31:*.lz4=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.tzo=01;31:*.t7z=01;31:*.zip=01;31:*.z=01;31:*.dz=01;31:*.gz=01;31:*.lrz=01;31:*.lz=01;31:*.lzo=01;31:*.xz=01;31:*.zst=01;31:*.tzst=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:*.alz=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.cab=01;31:*.wim=01;31:*.swm=01;31:*.dwm=01;31:*.esd=01;31:*.jpg=01;35:*.jpeg=01;35:*.mjpg=01;35:*.mjpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.webp=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=01;36:*.au=01;36:*.flac=01;36:*.m4a=01;36:*.mid=01;36:*.midi=01;36:*.mka=01;36:*.mp3=01;36:*.mpc=01;36:*.ogg=01;36:*.ra=01;36:*.wav=01;36:*.oga=01;36:*.opus=01;36:*.spx=01;36:*.xspf=01;36:
SSH_CONNECTION=192.168.0.233 56990 192.168.0.169 22
XDG_SESSION_CLASS=user
SELINUX_ROLE_REQUESTED=
TERM=xterm-256color
LESSOPEN=||/usr/bin/lesspipe.sh %s
USER=root
SELINUX_USE_CURRENT_RANGE=
SHLVL=1
XDG_SESSION_ID=1
XDG_RUNTIME_DIR=/run/user/0
SSH_CLIENT=192.168.0.233 56990 22
which_declare=declare -f
PATH=/root/.local/bin:/root/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
SELINUX_LEVEL_REQUESTED=
DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/0/bus
MAIL=/var/spool/mail/root
SSH_TTY=/dev/pts/0
BASH_FUNC_which%%=() {  ( alias;
 eval ${which_declare} ) | /usr/bin/which --tty-only --read-alias --read-functions --show-tilde --show-dot $@
}
_=/usr/bin/env
OLDPWD=/dev/vg200

printenv command

  • Displays the environment variables only.

set command

  • View both local and environment variables.

Command and Variable substitutions

  • PS1 Environment variable sets what the prompt looks like.
  • Default value is \u@\h \W\$
    • \u
      • logged-in user name
    • \h
      • System hostname
    • \W
      • Working directory
    • \$
      • End of command prompt
  • A command whose output you want assigned to a variable must be encapsulated within either backticks, as in `hostname`, or parentheses preceded by a dollar sign, as in $(hostname).
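For example:

SYSNAME=`hostname`       # older backtick form
SYSNAME=$(hostname)      # preferred form
echo "This system is $SYSNAME"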

Input, Output, and Error Redirections

  • Input, output, and error character streams:
    • standard input (or stdin)
      • Input redirection
      • <
    • standard output (or stdout)
      • Output redirection
      • >
        • Overwrites the output file
      • >>
        • Append instead of overwrite
      • &>
        • Redirect standard error and output
    • standard error (or stderr)
      • Error redirection
      • 2>
  • File descriptors:
    • 0, 1, and 2
      • 0 represents the standard input, 1 the standard output, and 2 the standard error location
    • Can use these to represent character streams instead of <, and >
  • noclobber feature
    • prevent overwriting of the output file
    • set -o noclobber
      • activates the feature
    • set +o noclobber
      • deactivate the feature
[root@localhost ~]# vim test.txt
[root@localhost ~]# set -o noclobber
[root@localhost ~]# echo "Hello" > test.txt
-bash: test.txt: cannot overwrite existing file
[root@localhost ~]# set +o noclobber
[root@localhost ~]# echo "Hello" > test.txt
[root@localhost ~]# cat test.txt
Hello
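A quick sketch of the redirection operators described above:

ls /etc > out.txt              # stdout, overwrite
ls /etc >> out.txt             # stdout, append
ls /nosuchdir 2> err.txt       # stderr only
ls /etc /nosuchdir &> all.txt  # stdout and stderr together
sort < out.txt                 # stdin from a file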

History Substitution

  • Command history or history expansion.
  • Can disable and re-enable if required.
  • Values may be altered for individual users by editing .bashrc or .bash_profile in the user’s home directory.
  • Three variables
    • HISTFILE
      • Defines the name and location of the history file to be used to store command history.
      • Default is .bash_history in the user’s home directory.
    • HISTSIZE
      • Size of the history buffer for the current shell.
    • HISTFILESIZE
      • Sets the maximum number of commands the history file may hold. Commands are kept in memory during the session and written to the HISTFILE at the end of the terminal session.
      • Usually, HISTSIZE and HISTFILESIZE are set to a common value.

history command

  • Displays or reruns previously executed commands.
  • Gets the history data from the system memory as well as from the .bash_history file.
  • Shows all entries by default.
  • set +o history
    • disable history expansion
  • set -o history
    • re-enable history expansion
[root@localhost ~]# set +o history

[root@localhost ~]# history | tail 
  126  ls
  127  vim test.txt
  128  set -o noclobber
  129  echo "Hello" > test.txt
  130  set +o noclobber
  131  echo "Hello" > test.txt
  132  cat test.txt
  133  history | tail 
  134  set +0 history
  135  set +o history
  
[root@localhost ~]# vim test2.txt

[root@localhost ~]# history | tail 
  126  ls
  127  vim test.txt
  128  set -o noclobber
  129  echo "Hello" > test.txt
  130  set +o noclobber
  131  echo "Hello" > test.txt
  132  cat test.txt
  133  history | tail 
  134  set +0 history
  135  set +o history
  
[root@localhost ~]# set -o history

[root@localhost ~]# vim test2.txt

[root@localhost ~]# history | tail 
  128  set -o noclobber
  129  echo "Hello" > test.txt
  130  set +o noclobber
  131  echo "Hello" > test.txt
  132  cat test.txt
  133  history | tail 
  134  set +0 history
  135  set +o history
  136  vim test2.txt
  137  history | tail 

Add timestamps to history output system wide: echo "export HISTTIMEFORMAT='%F %T '" >> /etc/profile && source /etc/profile

Editing at the Command line

Common key combinations:

  • Ctrl+a / Home
    • Moves the cursor to the beginning of the command line
  • Ctrl+e / End
    • Moves the cursor to the end of the command line
  • Ctrl+u
    • Erases from the cursor to the beginning of the command line
  • Ctrl+k
    • Erases from the cursor to the end of the command line
  • Alt+f
    • Moves the cursor to the right one word at a time
  • Alt+b
    • Moves the cursor to the left one word at a time
  • Ctrl+f / Right arrow
    • Moves the cursor to the right one character at a time
  • Ctrl+b / Left arrow
    • Moves the cursor to the left one character at a time

Tab completion

  • Hitting the Tab key automatically completes the entire name when there is a single match.
  • In case of multiple possibilities matching the entered characters, it completes up to the point they have in common; hitting Tab twice prints the remaining possibilities on the screen.

Tilde Substitution (tilde expansion)

  • Performed on words that begin with the tilde character (~). ~
    • refers to user’s home directory.

~+ - Refers to the current directory.

~- - Refers to the previous working directory.

~USER - Refers to the specified user’s home directory.

Alias Substitution (command aliasing or alias)

  • Define shortcuts for commands.
  • Bash shell includes several predefined aliases that are set during user login.
  • Shell gives precedence to an alias if the alias matches a command or program name.
    • Can still run the command without using the alias by preceding the command with a backslash. \

alias command

  • Set an alias.
  • Internal shell command.
[root@localhost ~]# alias
alias cp='cp -i'
alias egrep='egrep --color=auto'
alias fgrep='fgrep --color=auto'
alias grep='grep --color=auto'
alias l.='ls -d .* --color=auto'
alias ll='ls -l --color=auto'
alias ls='ls --color=auto'
alias mv='mv -i'
alias rm='rm -i'
alias xzegrep='xzegrep --color=auto'
alias xzfgrep='xzfgrep --color=auto'
alias xzgrep='xzgrep --color=auto'
alias zegrep='zegrep --color=auto'
alias zfgrep='zfgrep --color=auto'
alias zgrep='zgrep --color=auto'
[root@localhost ~]# alias frog='pwd'
[root@localhost ~]# frog
/root

unalias command

  • Unset an alias.
  • Internal shell command.
[root@localhost ~]# unalias frog
[root@localhost ~]# frog
-bash: frog: command not found

Metacharacters and Wildcard

  • Metacharacters
    • Special characters that possess special meaning to the shell.
    • Used in pattern matching (a.k.a. filename expansion or file globbing) and regular expressions.

dollar sign ($)

  • Marks the end of a line
  • Used in regular expressions

caret (^)

  • mark the beginning of a line
  • Used in regular expressions

period (.)

  • Match a single position
  • Used in regular expressions

asterisk (*)

  • used in pattern matching
  • wildcard character
  • Matches zero to an unlimited number of characters except for the leading period (.) in a hidden filename.
  • Used in regular expressions

question mark (?)

  • Used in pattern matching
  • Wildcard character
  • Matches exactly one character except for the leading period in a hidden filename.
  • Used in regular expressions

pipe (|)

  • Send the output of one command as input to the next.
  • Also used to define alternations in regular expressions.
  • Can be used as many times in a command as needed (a pipeline).
  • Can be used as an OR operator (alternation) in regular expressions:
    • this|that|other

angle brackets (< >)

  • Redirections

curly brackets ({})

  • Used in regular expressions
  • Match an element a specific number of times

square brackets ([])

  • Used in pattern matching
  • Wildcard character
  • Match either a set of characters or a range of characters for a single character position.
  • Order in which they are listed has no importance.
  • Range of characters must be specified in a proper sequence such as [a-z] or [0-9].
  • Used in regular expressions

parentheses (())

  • Create a sub shell

plus (+)

  • Match a character one or more times

exclamation mark (!)

  • inverse matches

semicolon (;)

  • Run a second command after the ;

Quoting Mechanisms

  • Disable special meaning of metacharacters

Backslash (\)

  • Escape character
  • Cancel out a special character’s meaning.

single quotation (‘’)

  • Mask the meaning of all encapsulated special characters.

double quotation (“”)

  • Mask the meaning of all but the backslash (\), dollar sign ($), and single quotes (‘’).

Regular expressions (regexp or regex)

  • Text pattern or an expression that is matched against a string of characters in a file or supplied input in a search operation.
  • Pattern may include a single character, multiple random characters, a range of characters, word, phrase, or an entire sentence.
  • Any pattern containing one or more white spaces must be surrounded by quotation marks.

grep command

  • Searches the contents of one or more text files or input supplied for a match.

Flags

-i

  • case insensitive search

-n

  • number the lines

-v

  • exclude these lines

-w

  • find an exact match for a word

-E

  • Use extended regular expressions (supports alternation such as 'cron|ly' to match one pattern or the other)

-e

  • Specify a pattern to use for matching; can be repeated to supply multiple patterns

Jobs

  • job
    • Process that is started in the background and controlled by the terminal where it is spawned.
    • Assigned a PID (process identifier) by the kernel and, additionally, a job ID by the shell.
    • Does not hold the terminal window where it is initiated.
    • Can run multiple jobs simultaneously.
    • Can be brought to foreground, returned to the background, suspended, or stopped.
  • job control
    • management of multiple jobs within a shell environment

Commands and control sequences for administering jobs:

  • jobs
    • Shell built-in command
    • display jobs.
  • bg
    • Shell built-in command
    • Move a job to the background or restart a job in the background that was suspended with Ctrl+z.
  • fg
    • Shell built-in command
    • Move a job to the foreground
  • Ctrl+z
    • Suspends a foreground job and allows the terminal window to be used for other purposes

jobs command

Output markers:

  • plus sign (+)
    • indicates the current background job
  • minus sign (-)
    • signifies the previous job
  • Stopped
    • job is currently suspended; can be signaled to continue execution with bg or fg
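A hypothetical session illustrating these markers (the commands and PIDs are made up):

[root@localhost ~]# sleep 300 &
[1] 31725
[root@localhost ~]# sleep 600 &
[2] 31726
[root@localhost ~]# jobs -l
[1]- 31725 Running                 sleep 300 &
[2]+ 31726 Running                 sleep 600 &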

Shell Startup Files

  • Sourced by the shell following user authentication at the time of logging in and before the command prompt appears.
  • Aliases, functions, and scripts can be added to these files as well.
  • two types of startup files:
    • system-wide
      • Set the general environment for all users at the time of their login to the system.
      • Located in the /etc directory
      • Maintained by the Linux admin.
      • System-wide startup files for bash shell users:
        • /etc/bashrc
          • Defines functions and aliases, sets umask for user accounts with a non-login shell, establishes the command prompt, etc.
          • May include settings from the shell scripts located in the /etc/profile.d directory.
        • /etc/profile
          • Sets common environment variables such as PATH, USER, LOGNAME, MAIL, HOSTNAME, HISTSIZE, and HISTCONTROL for all users, establishes umask for user accounts with a login shell, processes the shell scripts located in the /etc/profile.d directory, and so on.
        • /etc/profile.d
          • Contains scripts for bash shell users that are executed by the /etc/profile file.
          • Files can be edited and updated.
    • per-user
      • Override or modify system default definitions set by the system-wide startup files.
      • By default, two files, in addition to the .bash_logout file, are located in the skeleton directory /etc/skel and are copied into user home directories at the time of user creation.
      • .bashrc
        • Defines functions and aliases. This file sources global definitions from the /etc/bashrc file.
      • .bash_profile
        • Sets environment variables and sources the .bashrc file to set functions and aliases.
      • .gnome2/
        • Directory that holds environment settings when GNOME desktop is started. Only available if GNOME is installed.
      • .bash_logout
        • Executed when the user leaves the shell or logs off.
        • May be customized
  • Startup file order:
    • /etc/profile > .bash_profile > .bashrc > /etc/bashrc
  • Per user settings must be added to the appropriate file for persistence.
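For example, a per-user alias can be made persistent by appending it to ~/.bashrc (the alias shown is an arbitrary choice):

echo "alias ports='ss -tulpn'" >> ~/.bashrc
source ~/.bashrc     # re-read the file so the alias takes effect immediately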

Bash Labs

Lab: View environment variables

env
printenv

Lab: Viewing Environment variable values

  1. View the value for the PATH variable
echo $PATH
  1. View the value for the HOME variable
echo $HOME
  1. View the value for the SHELL variable
echo $SHELL
  1. View the value for the TERM variable
echo $TERM
  1. View the value for the PPID variable
echo $PPID
  1. View the value for the PS1 variable
echo $PS1
  1. View the value for the USER variable
echo $USER

Lab: Setting and Unsetting Variables

  1. Define a local variable called VR1:
VR1=RHEL9
  1. View the value of VR1:
echo $VR1
  1. Type bash at the command prompt to enter a sub-shell and then run echo $VR1 to check whether the variable is visible in the sub-shell.
echo $VR1
  1. Exit out of the subshell:
exit
  1. Turn VR1 into an environment variable:
export VR1
  1. Type bash at the command prompt to enter a sub-shell and then run echo $VR1 to check whether the variable is visible in the sub-shell.
echo $VR1
  1. Undefine this variable and erase it from the shell environment:
unset VR1
  1. Define a local variable that contains a value with one or more white spaces:
VR2="I love RHEL 9"
  1. Define and make the variable an environment variable at the same time:
export VR3="I love RHEL 9"
  1. View local and environment variables:
set

Lab: Modify Primary Command Prompt

  1. Change the value of the variable PS1 to reflect the desired information:
export PS1="< $LOGNAME on $HOSTNAME in \$PWD > " 
  1. Edit the .bash_profile file for user1 and define the value exactly as it was run in Step 1.
vim .bash_profile
  1. Test by logging off as user1 and logging back in. The new command prompt will be displayed.

Lab: Redirecting Standard Input

  1. Have the cat command read the /etc/redhat-release file and display its content on the standard output (terminal screen):
cat < /etc/redhat-release

Lab: Redirecting Standard Output

  1. Direct the ls command output to a file called ls.out:
ls > ls.out
  1. Do the same thing but using file descriptors:
ls 1> ls.out
  1. Activate the noclobber feature then try the redirect feature again:
set -o noclobber
ls > ls.out
  1. Deactivate noclobber
set +o noclobber
  1. Direct the ls command to append the output to the ls.out file instead of overwriting it:
ls >> ls.out
or
ls 1>> ls.out

Lab: Redirecting Standard Error

  1. Direct the find command issued as a normal user to search for all occurrences of files by the name core in the entire directory tree and sends any error messages produced to /dev/null
find / -name core -print 2> /dev/null
  1. Redirect both standard output and error:
ls /usr /cdr &> outerr.out
or
ls /usr /cdr 1> outerr.out 2>&1
# 2>&1 redirects file descriptor 2 (stderr) to wherever file descriptor 1 (stdout) points, so both streams land in outerr.out.
  1. Same as above but append to file:
ls /usr /cdr &>> outerr.out

Lab: Show History Variables

  1. View HISTFILE Variable:
echo $HISTFILE
  1. View HISTSIZE variable:
echo $HISTSIZE
  1. View HISTFILESIZE variable:
echo $HISTFILESIZE

Lab: History command

  1. Run history without any options:
history
  1. Display 10 entries:
history 10
  1. Run the 15th command in history:
!15
  1. re-execute the most recent occurrence of a command that started with a particular letter or series of letters (ch for example):
!ch
  1. Issue the most recent command that contained “grep”:
!?grep?
  1. Remove entry 24 from history:
history -d 24
  1. Repeat the last command executed:
!!

Lab: Tilde expansion

  1. Display user’s home directory:
echo ~
  1. Display the current directory:
echo ~+
  1. Display the previous working directory:
echo ~-
  1. Display user1’s home directory:
echo ~user1
  1. cd into the home directory of user1 and confirm:
cd ~user1
pwd
  1. cd into a subdirectory:
cd ~/Documents/
  1. View directory information of the root user’s desktop:
ls -ld ~root/Desktop

Lab: Command Aliasing

  1. shows all aliases that are currently set for user1:
su - user1
alias
  1. Run alias as root:
alias
  1. Create an alias “search” to abbreviate the find command with several switches and arguments. Enclose the entire command within single quotation marks (‘’) to ensure white spaces are taken care of. Do not leave any spaces before and after the equal sign (=).
alias search='find / -name core -exec ls -l {} \;'
  1. Search with the new alias:
search
  1. Create an alias by the same name as the rm command but adding the -i flag:
alias rm='rm -i'
  1. Run rm without using the alias:
rm file1
  1. Remove the two aliases we just created:
unalias search rm

Lab: Wildcards and Metacharacters

  1. List all files in the /etc directory that begin with letters “ma” and followed by any characters:
ls /etc/ma*
  1. List all hidden files and directories in /home/user1:
ls -d .*
  1. List all files in the /var/log directory that end in “.log”:
ls /var/log/*.log
  1. List all files and directories under /var/log with exactly four characters in their names:
ls -d /var/log/????
  1. Include all files and directories that begin with either of the two characters and followed by any number of characters.
ls /usr/bin/[yw]*
  1. Match all directory names that begin with any letter between “m” and “o” in the /etc/systemd/system directory:
ls -d /etc/systemd/system/[m-o]*
  1. Inverse results of the previous:
ls -d /etc/systemd/system/[!m-o]*

Lab: Piping

  1. Pipe the output to the less command in order to view the directory listing one screenful at a time:
ls -l /etc | less
  1. Run the last command and pipe the output to nl to number each line:
last | nl
  1. Send the output of ls to grep for the lines that do not contain the pattern “root”. Piped again for a case-insensitive selection of all lines that exclude the pattern “dec”. Number the output, and show the last four lines on the display:
ls -l /proc | grep -v root | grep -iv dec | nl | tail -4

Lab: Quoting Mechanisms

  1. Remove a file called * without deleting everything in the directory:
rm \*
  1. Display $LOGNAME without expanding the LOGNAME variable:
echo '$LOGNAME'
  1. Run the following with double quotes and without:
echo "$SHELL"
echo "\$PWD"
echo "'\'"

Lab: grep and regex

  1. Search for the pattern “operator” in the /etc/passwd file:
grep operator /etc/passwd
  1. Search for the space-separated pattern “aliases and functions” in the $HOME/.bashrc file:
grep 'aliases and functions' .bashrc
  1. Search for the pattern “nologin” in the passwd file and exclude (-v) the lines in the output that contain this pattern. Add the -n switch to show the line numbers associated with the matched lines.
grep -nv nologin /etc/passwd
  1. Find any duplicate entries for the root user in the passwd file. Prepend the caret sign (^) to the pattern “root” to mark the beginning of a line.
grep ^root /etc/passwd
  1. Identify all users in the passwd file with bash as their primary shell.
grep bash$ /etc/passwd
  1. Show the entire login.defs file but exclude all the empty lines:
grep -v ^$ /etc/login.defs
  1. Perform a case-insensitive search (-i) for all the lines in the /etc/bashrc file that match the pattern “path.”
grep -i path /etc/bashrc
  1. Print all the lines from the /etc/lvm/lvm.conf file that contain an exact match for a word (-w) “acce” followed by exactly two characters:
grep -w acce.. /etc/lvm/lvm.conf
  1. Print all the lines from the ls command output that include either (-E) the pattern “cron” or “ly”.
ls -l /etc | grep -E 'cron|ly'
  1. Show all the lines from the /etc/ssh/sshd_config file but exclude (-v) the empty lines and commented lines. Use the -e flag multiple times instead of | for either or.
sudo grep -ve ^$ -ve ^# /etc/ssh/sshd_config
  1. Learn more about regex:
man 7 regex
  1. Consult the grep man pages:
man grep

Lab: Managing jobs

  1. Issue the jobs command with the -l switch to view all the jobs running in the background:
jobs -l
  1. bring job ID 1 to the foreground and start running it:
fg %1
  1. Suspend job 1 with ctrl+z and then let it run in the background:
bg %1
  1. Terminate job ID 1, supply its PID (31726) to the kill command:
kill 31726

Lab: Shell Startup Files

  1. View the first 10 lines of /etc/bashrc:
head /etc/bashrc
  1. View the first 10 lines of /etc/profile:
head /etc/profile
  1. View the directory /etc/profile.d
ls -l /etc/profile.d/
  1. View .bashrc
cat ~/.bashrc
  1. View .bash_profile
cat ~/.bash_profile

Lab: Customize the Command Prompt (user1)

  1. Permanently customize the primary shell prompt to display “<user1@server1 in /etc >:” when this user switches into the /etc directory. The prompt should always reflect the current directory path.
vim ~/.bash_profile
export PS1='<$LOGNAME@$HOSTNAME in $PWD >: '

Lab 7-2: Redirect the Standard Input, Output, and Error (user1)

  1. Run the ls command on /etc, /dvd, and /var. Have the output printed on the screen and the errors forwarded to file /tmp/ioerror.
ls /etc /dvd /var 2> /tmp/ioerror
  1. Check the file after the command execution and analyze the results:
cat /tmp/ioerror

Notes

  1. It is NOT a hard, protective outer layer usually created by an animal or organism that lives in the sea.

Shell Scripting

Shell Scripts

  • A group of Linux commands along with control structures and optional comments stored in a text file.

  • Can be executed directly at the Linux command prompt.

  • Do not need to be compiled as they are interpreted by the shell line by line.

  • Typical uses: managing packages and users, administering partitions and file systems, monitoring file system utilization, trimming log files, archiving and compressing files, finding and removing unnecessary files, starting and stopping database services and applications, and producing reports.

  • Commands in a script are run by the shell one at a time, in the order in which they are listed.

  • Each line is executed as if it is typed and run at the command prompt.

  • Control structures are utilized for creating and managing conditional and looping constructs.

  • Comments are also generally included to add information about the script such as the author name, creation date, previous modification dates, purpose, and usage.

  • If the script encounters an error during execution, the error message is printed on the screen.

  • Can use the nl command to enumerate the lines for troubleshooting.

  • Can store your scripts in the /usr/local/bin directory, which is included in the PATH of all users by default.

Script01: Displaying System Information

  • Create the first script called sys_info.sh in /usr/local/bin/
  • Use the vim editor with sudo to write the script.
#!/bin/bash
echo "Display Basic System Information"
echo "=================================="
echo
echo "The hostname, hardware, and OS information is:"
/usr/bin/hostnamectl
echo
echo "The Following users are currently logged in:"
/usr/bin/who
  • Within vim, press the ESC key and then type :set nu to view line numbers associated with each line entry.
  • Must add execute bit to run the script

Executing a Script

chmod +x /usr/local/bin/sys_info.sh

ll /usr/local/bin/sys_info.sh
-rwxr-xr-x. 1 root root 244 Jul 30 09:47 /usr/local/bin/sys_info.sh
  • Any user on the system can now run this script using either its name or the full path.

Let’s run the script and see what the output will look like:

$ sys_info.sh
Display Basic System Information
==================================

The hostname, hardware, and OS information is:
 Static hostname: server30
       Icon name: computer-vm
         Chassis: vm 🖴
      Machine ID: eaa6174e108d4a27bd619754…
         Boot ID: 13d8b3c167b24757b3678e4f…
  Virtualization: oracle
Operating System: Red Hat Enterprise Linux…
     CPE OS Name: cpe:/o:redhat:enterprise…
          Kernel: Linux 5.14.0-362.24.1.el…
    Architecture: x86-64
 Hardware Vendor: innotek GmbH
  Hardware Model: VirtualBox
Firmware Version: VirtualBox

The Following users are currently logged in:
root     pts/0        2024-07-30 07:22 (172.16.7.95)

Debugging a Script

Can either append the -x option to the “#!/bin/bash” at the beginning of the script to look like “#!/bin/bash -x”, or execute the script as follows:

[root@server30 ~]# bash -x sys_info.sh
+ echo 'Display Basic System Information'
Display Basic System Information
+ echo ==================================
==================================
+ echo

+ echo 'The hostname, hardware, and OS information is:'
The hostname, hardware, and OS information is:
+ /usr/bin/hostnamectl
 Static hostname: server30
       Icon name: computer-vm
         Chassis: vm 🖴
      Machine ID: eaa6174e108d4a27bd6197548ce77270
         Boot ID: 13d8b3c167b24757b3678e4fd3fe19ee
  Virtualization: oracle
Operating System: Red Hat Enterprise Linux 9.3 (Plow)     
     CPE OS Name: cpe:/o:redhat:enterprise_linux:9::baseos
          Kernel: Linux 5.14.0-362.24.1.el9_3.x86_64
    Architecture: x86-64
 Hardware Vendor: innotek GmbH
  Hardware Model: VirtualBox
Firmware Version: VirtualBox
+ echo

+ echo 'The Following users are currently logged in:'
The Following users are currently logged in:
+ /usr/bin/who
root     pts/0        2024-07-30 07:22 (172.16.7.95)
  • Actual lines from the script prefixed by the + sign and followed by the command execution result.
  • Shows the line number of the problem line in the output if there is any.
  • This way you can identify any issues pertaining to the path, command name, use of special characters, etc., and address it quickly.

Change one of the echo commands in the script to “iecho” and re-run the script in the debug mode to see the error:

[root@server30 ~]# bash -x sys_info.sh
+ echo 'Display Basic System Information'
Display Basic System Information
+ echo ==================================
==================================
+ iecho
/usr/local/bin/sys_info.sh: line 4: iecho: command not found
+ echo 'The hostname, hardware, and OS information is:'
The hostname, hardware, and OS information is:
+ /usr/bin/hostnamectl
 Static hostname: server30
       Icon name: computer-vm
         Chassis: vm 🖴
      Machine ID: eaa6174e108d4a27bd6197548ce77270
         Boot ID: 13d8b3c167b24757b3678e4fd3fe19ee
  Virtualization: oracle
Operating System: Red Hat Enterprise Linux 9.3 (Plow)     
     CPE OS Name: cpe:/o:redhat:enterprise_linux:9::baseos
          Kernel: Linux 5.14.0-362.24.1.el9_3.x86_64
    Architecture: x86-64
 Hardware Vendor: innotek GmbH
  Hardware Model: VirtualBox
Firmware Version: VirtualBox
+ echo

+ echo 'The Following users are currently logged in:'
The Following users are currently logged in:
+ /usr/bin/who
root     pts/0        2024-07-30 07:22 (172.16.7.95)

Script02: Using Local Variables

  • Create a script called use_var.sh
  • Define a local variable and display its value on the screen.
  • Re-check the value of the variable after the script execution has completed.
[root@server30 ~]# vim /usr/local/bin/use_var.sh

#!/bin/bash

echo "Setting a Local Variable"
echo "========================"
SYSNAME=server30.example.com
echo "The hostname of this system is $SYSNAME"
[root@server30 ~]# chmod +x /usr/local/bin/use_var.sh

[root@server30 ~]# use_var.sh
Setting a Local Variable
========================
The hostname of this system is server30.example.com

If you run the echo command to see what is stored in the SYSNAME variable, you will get nothing, because the script executed in a sub-shell and its variables vanished when that sub-shell exited:

[root@server30 ~]# echo $SYSNAME

Script03: Using Pre-Defined Environment Variables

The following script called pre_env.sh will display the values of SHELL and LOGNAME environment variables:

[root@server30 ~]# vim /usr/local/bin/pre_env.sh

#!/bin/bash
echo "The location of my shell command is:"
echo $SHELL
echo "I am logged in as $LOGNAME"
[root@server30 ~]# chmod +x /usr/local/bin/pre_env.sh

[root@server30 ~]# pre_env.sh
The location of my shell command is:
/bin/bash
I am logged in as root

Script04: Using Command Substitution

  • Can use the command substitution feature of the bash shell and store the output generated by the command into a variable.

  • Two different ways to use command substitution: backticks or the $( ) sub-shell form

[root@server30 ~]# vim /usr/local/bin/cmd_out.sh

#!/bin/bash
SYSNAME=$(hostname)
KERNVER=`uname -r`
echo "The hostname is $SYSNAME"
echo "The kernel version is $KERNVER"

[root@server30 ~]# chmod +x /usr/local/bin/cmd_out.sh

[root@server30 ~]# cmd_out.sh
The hostname is server30
The kernel version is 5.14.0-362.24.1.el9_3.x86_64

Shell Parameters

  • An entity that holds a value such as a name, special character, or number.
  • The parameter that holds a name is referred to as a variable
  • A parameter that holds a special character is referred to as a special parameter
    • Represents the command or script itself ($0), count of supplied arguments ($#), all arguments ($* or $@), and PID of the process ($$)
  • A parameter denoted by one or more digits, except for 0, is referred to as a positional parameter (a command line argument).
    • ($1, $2, $3 . . .) is an argument supplied to a script at the time of its invocation
    • Position is determined by the shell based on its location with respect to the calling script.
    • Positional parameters beyond 9 are to be enclosed in curly brackets.

  • Just like the variable and command substitutions, the shell uses the dollar ($) sign for special and positional parameter expansions as well.

Script05: Using Special and Positional Parameters

Create com_line_arg.sh to show the supplied arguments, total count, value of the first argument, and PID of the script:

[root@server30 ~]# vim /usr/local/bin/com_line_arg.sh

#!/bin/bash
echo "There are $# arguments specified at the command line"
echo "The arguments supplied are: $*"
echo "The first argument is: $1"
echo "The Process ID of the script is: $$"                                          
[root@server30 ~]# chmod +x /usr/local/bin/com_line_arg.sh

[root@server30 ~]# com_line_arg.sh
There are 0 arguments specified at the command line
The arguments supplied are: 
The first argument is: 
The Process ID of the script is: 1935

[root@server30 ~]# com_line_arg.sh the dog jumped over the frog
There are 6 arguments specified at the command line
The arguments supplied are: the dog jumped over the frog
The first argument is: the
The Process ID of the script is: 1936

Script06: Shifting Command Line Arguments

shift command

  • Used to move arguments one position to the left.
  • During this move, the value of the first argument is lost.
[root@server30 ~]# vim /usr/local/bin/com_line_arg_shift.sh

#!/bin/bash
echo "There are $# arguments specified at the command line"
echo "The arguments supplied are: $*"
echo "The first argument is: $1"
echo "The Process ID of the script is: $$"
shift
echo "The new first argument after the first shift is: $1"
shift
echo "The new first argument after the second shift is: $1"
[root@server30 ~]# chmod +x /usr/local/bin/com_line_arg_shift.sh

[root@server30 ~]# com_line_arg_shift.sh
There are 0 arguments specified at the command line
The arguments supplied are: 
The first argument is: 
The Process ID of the script is: 1941
The new first argument after the first shift is: 
The new first argument after the second shift is: 

[root@server30 ~]# com_line_arg_shift.sh the dog jumped over the frog
There are 6 arguments specified at the command line
The arguments supplied are: the dog jumped over the frog
The first argument is: the
The Process ID of the script is: 1942
The new first argument after the first shift is: dog
The new first argument after the second shift is: jumped
  • Multiple shifts in a single attempt may be performed by furnishing a count of desired shifts to the shift command as an argument. For example, “shift 2” will carry out two shifts, “shift 3” will make three shifts, and so on.
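A minimal sketch of a multi-position shift (the script name is hypothetical):

#!/bin/bash
# shift_multi.sh — run as: shift_multi.sh a b c d
echo "First argument before shifting: $1"    # prints: a
shift 2                                      # discards the first two arguments
echo "First argument after shift 2: $1"      # prints: c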

Logical Constructs

  • Use test conditions, which decides what to do next based on the true or false status of the condition.

The shell offers two logical constructs:

  • if-then-fi
  • case

Exit Codes (exit values)

  • Refer to the value returned by a command when it finishes execution.
    • Value is based on the outcome of the command.
  • If the command runs successfully, you typically get a zero exit code (return code); otherwise you get a non-zero value.
  • Return code is stored in a special shell parameter called ? (question mark).

Let’s look at the following two examples to understand their usage:

[root@server30 ~]# pwd
/root
[root@server30 ~]# echo $?
0
[root@server30 ~]# man
What manual page do you want?
For example, try 'man man'.
[root@server30 ~]# echo $?
1
  • Exit code was returned and stored in the ?.
  • Non-zero exit code was stored in ? with an error.
  • Can define exit codes within a script at different locations to help debug the script by knowing where exactly it terminated.
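A small sketch of deliberate exit codes used as debugging breadcrumbs (the script name and code values are arbitrary):

#!/bin/bash
# hypothetical check_steps.sh
grep -q root /etc/passwd || exit 3   # exit code 3 labels a failed search step
cd /var/log || exit 4                # exit code 4 labels a failed directory change
echo "All steps completed"
exit 0

Running echo $? immediately afterward reveals which step, if any, terminated the script.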

Test Conditions

  • Used in logical constructs to decide what to do next.
  • Can be set on integer values, string values, or files using the test command or by enclosing them within the square brackets [].
man test

Operation on Integer Value

integer1 -eq (-ne) integer2

  • Integer1 is equal (not equal) to integer2

integer1 -lt (-gt) integer2

  • Integer1 is less (greater) than integer2

integer1 -le (-ge) integer2

  • Integer1 is less (greater) than or equal to integer2

Operation on String Value

string1=(!=)string2

  • Tests whether the two strings are identical (not identical)

-z string

  • Tests whether the string length is zero

string or -n string

  • Tests whether the string length is non-zero

Operation on File

-b (-c) file

  • Tests whether the file is a block (character) device file

-d (-f) file

  • Tests whether the file is a directory (normal file)

-e (-s) file

  • Tests whether the file exists (non-empty)

-L file

  • Tests whether the file is a symlink

-r (-w) (-x) file

  • Tests whether the file is readable (writable) (executable)

-u (-g) (-k) file

  • Tests whether the file has the setuid (setgid) (sticky) bit

file1 -nt (-ot) file2

  • Tests whether file1 is newer (older) than file2

Logical Operators

!

  • The logical NOT operator

-a or && (two ampersand characters)

  • The logical AND operator. Both operands must be true for the condition to be true. Syntax: [ -b file1 -a -r file1 ] (use && inside [[ ]] rather than [ ])

-o or || (two pipe characters)

  • The logical OR operator. Either of the two or both operands must be true for the condition to be true. Syntax: [ "$x" == 1 -o "$y" == 2 ] (use || inside [[ ]] rather than [ ])
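A short sketch combining file tests with both AND syntaxes (the file name is an arbitrary choice):

FILE=/etc/passwd
if [ -f "$FILE" -a -r "$FILE" ]         # AND inside single brackets uses -a
then
        echo "$FILE is a regular, readable file"
fi
if [[ -f "$FILE" && -r "$FILE" ]]       # && works inside double brackets
then
        echo "Same test with the [[ ]] syntax"
fi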

if-then-fi Construct

  • Evaluates the condition for true or false.
  • Executes the specified action if the condition is true.
  • Otherwise, it exits the construct.
  • Begins with an if and ends with a fi
  • Can execute an action only if the specified condition is true. It quits the statement if the condition is untrue.

The general syntax of this statement is as follows:

if condition
then
    action
fi

Script07: The if-then-fi Construct

Create if_then_fi.sh to determine the number of arguments and print an error message if there are none provided:

[root@server30 ~]# vim /usr/local/bin/if_then_fi.sh

#!/bin/bash 
if [ $# -ne 2 ] # Ensure there is a space after [ and before ] 
then 
        echo "Error: Invalid number of arguments supplied" 
        echo "Usage: $0 source_file destination_file" 
exit 2 
fi 
echo "Script terminated"
[root@server30 ~]# chmod +x /usr/local/bin/if_then_fi.sh

This script will display the following messages on the screen if it is executed without exactly two arguments specified at the command line:

[root@server30 ~]# if_then_fi.sh
Error: Invalid number of arguments supplied
Usage: /usr/local/bin/if_then_fi.sh source_file destination_file

Return code value reflects the exit code set in the script:

[root@server30 ~]# echo $?
2

Return code will be 0 if you supply a pair of arguments:

[root@server30 ~]# if_then_fi.sh a b
Script terminated
[root@server30 ~]# echo $?
0

if-then-else-fi Construct

  • Can execute an action if the condition is true and another action if the condition is false.

The general syntax of this statement is as follows:

if condition
then
    action1
else
    action2
fi

Script08: The if-then-else-fi Construct

Create a script called if_then_else_fi.sh that will accept an integer value as an argument and tell if the value is positive or negative:

vim /usr/local/bin/if_then_else_fi.sh
#!/bin/bash
if [ $1 -gt 0 ]
then
        echo "$1 is a positive integer value"
else
        echo "$1 is a negative integer value"
fi
[root@server30 ~]# chmod +x /usr/local/bin/if_then_else_fi.sh
[root@server30 ~]# if_then_else_fi.sh
/usr/local/bin/if_then_else_fi.sh: line 2: [: -gt: unary operator expected
 is a negative integer value
[root@server30 ~]# if_then_else_fi.sh 3
3 is a positive integer value
[root@server30 ~]# if_then_else_fi.sh -3
-3 is a negative integer value
[root@server30 ~]# if_then_else_fi.sh a
/usr/local/bin/if_then_else_fi.sh: line 2: [: a: integer expression expected
a is a negative integer value

[root@server30 ~]# echo $?
0

The if-then-elif-fi Construct

  • Can define multiple conditions and associate an action with each one of them.
  • The action corresponding to the true condition is performed.

The general syntax of this statement is as follows:

if condition1
then
    action1
elif condition2
then
    action2
elif condition3
then
    action3
else
    action(n)
fi

Script09: The if-then-elif-fi Construct (Example 1)

Create if_then_elif_fi.sh script to accept an integer value as an argument and tell if the integer is positive, negative, or zero. If a non-integer value or no argument is supplied, the script will complain. Employ the exit command after each action to help you identify where it exited.

[root@server30 ~]# vim /usr/local/bin/if_then_elif_fi.sh
#!/bin/bash
if [ $1 -gt 0 ]
then
        echo "$1 is a positive integer value"
exit 1
elif [ $1 -eq 0 ]
then
        echo "$1 is a zero integer value"
exit 2
elif [ $1 -lt 0 ]
then
        echo "$1 is a negative integer value"
exit 3
else
        echo "$1 is not an integer value. Please supply an i
nteger."
exit 4
fi
[root@server30 ~]# if_then_elif_fi.sh -0
-0 is a zero integer value

[root@server30 ~]# echo $?
2

[root@server30 ~]# if_then_elif_fi.sh -1
-1 is a negative integer value

[root@server30 ~]# echo $?
3

[root@server30 ~]# if_then_elif_fi.sh 10
10 is a positive integer value

[root@server30 ~]# echo $?
1

[root@server30 ~]# if_then_elif_fi.sh abd
/usr/local/bin/if_then_elif_fi.sh: line 2: [: abd: integer expression expected
/usr/local/bin/if_then_elif_fi.sh: line 6: [: abd: integer expression expected
/usr/local/bin/if_then_elif_fi.sh: line 10: [: abd: integer expression expected
abd is not an integer value. Please supply an integer.

[root@server30 ~]# echo $?
4

Script10: The if-then-elif-fi Construct (Example 2)

Create ex200_ex294.sh to display the name of the Red Hat exam RHCSA or RHCE in the output based on the input argument (ex200 or ex294). If a random or no argument is provided, it will print “Usage: Acceptable values are ex200 and ex294”. Add white spaces in the conditions.

[root@server30 ~]# vim /usr/local/bin/ex200_ex294.sh
#!/bin/bash
if [ "$1" = ex200 ]
then
        echo "RHCSA"
elif [ "$1" = ex294 ]
then
        echo "RHCE"
else
        echo "Usage: Acceptable values are ex200 and ex294"
fi
[root@server30 ~]# chmod +x /usr/local/bin/ex200_ex294.sh

[root@server30 ~]# ex200_ex294.sh ex200
RHCSA

[root@server30 ~]# ex200_ex294.sh ex294
RHCE

[root@server30 ~]# ex200_ex294.sh frog
Usage: Acceptable values are ex200 and ex294

Looping Constructs

  • Perform a certain task on a number of given elements.
  • Or repeatedly until a specified condition becomes true or false.
  • Examples:
    • if plenty of disks need to be initialized for use in LVM, you can either run the pvcreate command on each disk one at a time manually or employ a loop to do it for you.
    • Based on a condition, you may want a program to continue to run until that condition becomes true or false.

Three looping constructs:

for-do-done

  • for loop is also referred to as the foreach loop.
  • iterates on a list of given values until the list is exhausted.

while-do-done

  • while loop runs repeatedly until the specified condition becomes false.

until-do-done

  • until loop does just the opposite of the while loop.
  • Performs an operation repeatedly until the specified condition becomes true.

Test Conditions

let command

  • Used in looping constructs to evaluate a condition at each iteration.
  • Compares the value stored in a variable against a pre-defined value
  • Each time the loop does an iteration, the variable value is altered.
  • Can enclose the condition for arithmetic evaluation within a pair of parentheses (( )) or quotation marks (" ") instead of using the let command explicitly; see the sketch below.
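Since only the for loop gets full scripts below, here is a minimal sketch of the while and until constructs using the (( )) arithmetic form (the counter logic is illustrative):

#!/bin/bash
COUNT=0
while (( COUNT < 3 ))          # runs while the condition is true
do
        COUNT=$(( COUNT + 1 ))
        echo "while iteration $COUNT"
done
until (( COUNT == 0 ))         # runs until the condition becomes true
do
        COUNT=$(( COUNT - 1 ))
        echo "until countdown: COUNT is now $COUNT"
done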

Operators used in test conditions

!

  • Negation

+ / - / * / /

  • Addition / subtraction /multiplication / division

%

  • Remainder

< / <=

  • Less than / less than or equal to

> / >=

  • Greater than / greater than or equal to

=

  • Assignment

== / !=

  • Comparison for equality /non-equality

The for Loop

  • Executed on an array of elements until all the elements in the array are consumed.
  • Each element is assigned to a variable one after the other for processing.

The general syntax of this construct is as follows:

for VAR in list > do > action > done

Script11: Print Alphabets Using for Loop

Create a script called for_do_done.sh that initializes the variable COUNT to 0. The for loop will read each letter sequentially from the range placed within curly brackets (no spaces inside the braces before the letter A or after the letter Z), assign it to another variable LETTER, and display the value on the screen. The expr command is an arithmetic processor; it is used here to increment COUNT by 1 at each loop iteration.

[root@server10 ~]# vim /usr/local/bin/for_do_done.sh
#!/bin/bash
COUNT=0
for LETTER in {A..Z}
do
        COUNT=`/usr/bin/expr $COUNT + 1`
        echo "Letter $COUNT is [$LETTER]"
done
[root@server10 ~]# chmod +x /usr/local/bin/for_do_done.sh
[root@server10 ~]# for_do_done.sh
Letter 1 is [A]
Letter 2 is [B]
Letter 3 is [C]
Letter 4 is [D]
Letter 5 is [E]
Letter 6 is [F]
Letter 7 is [G]
Letter 8 is [H]
Letter 9 is [I]
Letter 10 is [J]
Letter 11 is [K]
Letter 12 is [L]
Letter 13 is [M]
Letter 14 is [N]
Letter 15 is [O]
Letter 16 is [P]
Letter 17 is [Q]
Letter 18 is [R]
Letter 19 is [S]
Letter 20 is [T]
Letter 21 is [U]
Letter 22 is [V]
Letter 23 is [W]
Letter 24 is [X]
Letter 25 is [Y]
Letter 26 is [Z]

Script12: Create Users Using for Loop

Create a script called create_user.sh to create several Linux user accounts. As each account is created, the value of the variable ? is checked. If the value is 0, a message saying the account is created successfully will be displayed, otherwise the script will terminate. In case of a successful account creation, the passwd command will be invoked to assign the user the same password as their username.

[root@server10 ~]# vim /usr/local/bin/create_user.sh
#!/bin/bash

for USER in user{10..12}
do
        echo "Create account for user $USER"
        /usr/sbin/useradd $USER
if [ $? = 0 ]
then
        echo $USER | /usr/bin/passwd --stdin $USER
        echo "$USER is created successfully"
else
        echo "Failed to create account $USER"
exit
fi
done
[root@server10 ~]# chmod +x /usr/local/bin/create_user.sh

[root@server10 ~]# create_user.sh
Create account for user user10
Changing password for user user10.
passwd: all authentication tokens updated successfully.
user10 is created successfully
Create account for user user11
Changing password for user user11.
passwd: all authentication tokens updated successfully.
user11 is created successfully
Create account for user user12
Changing password for user user12.
passwd: all authentication tokens updated successfully.
user12 is created successfully

Script fails if run again:

[root@server10 ~]# create_user.sh
Create account for user user10
useradd: user 'user10' already exists
Failed to create account user10

Shell Scripting DIY Labs

Lab: Write Script to Create Logical Volumes

  • Present 2x1GB virtual disks to server40 in VirtualBox Manager.
  • As user1 with sudo on server40, write a single bash script to create 2x400MB partitions on each disk using parted and then bring both partitions into LVM control with the pvcreate command.
 vim /usr/local/bin/lvscript.sh
  • Create a volume group called vgscript and add both physical volumes to it.
  • Create three logical volumes each of size 200MB and name them lvscript1, lvscript2, and lvscript3.
 #!/bin/bash
 for DEVICE in /dev/sd{b..c}
 do
        echo "Creating partition 1 with the size of 400MB on $DEVICE"
        parted $DEVICE mklabel msdos
        parted $DEVICE mkpart primary 1 401
        pvcreate ${DEVICE}1

        echo "Creating partition 2 with the size of 400MB on $DEVICE"
        parted $DEVICE mkpart primary 402 802
        pvcreate ${DEVICE}2
 done
 vgcreate vgscript /dev/sd{b..c}1 /dev/sd{b..c}2
 for LV in "lvscript"{1..3}
 do
        echo "Creating logical volume $LV in volume group vgscript with the size of 200MB"
        lvcreate vgscript -L 200MB -n $LV
 done
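After the script completes, the results can be verified with the standard LVM reporting commands (not part of the lab answer, just a sanity check):

 pvs     # four ~400MB physical volumes on /dev/sdb and /dev/sdc
 vgs     # the vgscript volume group
 lvs     # lvscript1, lvscript2, and lvscript3 at 200MB each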

Lab: Write Script to Create File Systems

  • Write another bash script to create xfs, ext4, and vfat file system structures in the logical volumes, respectively.
  • Create mount points /mnt/xfs, /mnt/ext4, and /mnt/vfat, and mount the file systems.
  • Include the df command with -h in the script to list the mounted file systems.
 vim /usr/local/bin/fsscript.sh
 [root@server40 ~]# chmod +x /usr/local/bin/fsscript.sh
 #!/bin/bash
 for DEVICE in lvscript{1..3}
 do
 if [ "$DEVICE" = lvscript1 ]
 then
        echo "Creating xfs filesystem on logical volume lvscript1"
        echo
        mkfs.xfs /dev/vgscript/lvscript1
        echo "Creating /mnt/xfs"
        mkdir /mnt/xfs
        echo "Mounting filesystem"
        mount /dev/vgscript/lvscript1 /mnt/xfs
 elif [ "$DEVICE" = lvscript2 ]
 then    
        echo "Creating ext4 filesystem on logical volume lvscript2"
        echo
        mkfs.ext4 /dev/vgscript/lvscript2
        echo "Creating /mnt/ext4"
        mkdir /mnt/ext4
        echo "Mounting filesystem"
        mount /dev/vgscript/lvscript2 /mnt/ext4

 elif [ "$DEVICE" = lvscript3 ]
 then    
        echo "Creating vfat filesystem on logical volume lvscript3"
        echo
        mkfs.vfat /dev/vgscript/lvscript3
        echo "Creating /mnt/vfat"
        mkdir /mnt/vfat
        echo "Mounting filesystem"
        mount /dev/vgscript/lvscript3 /mnt/vfat
        echo
        echo
        echo "Done!"
                df -h
 else
        echo

 fi
 done
 [root@server40 ~]# fsscript.sh
 Creating xfs filesystem on logical volume lvscript1

 Filesystem should be larger than 300MB.
 Log size should be at least 64MB.
 Support for filesystems like this one is deprecated and they will not be supported in future releases.
 meta-data=/dev/vgscript/lvscript1 isize=512    agcount=4,  agsize=12800 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1,  rmapbt=0
         =                       reflink=1    bigtime=1 inobtcount=1  nrext64=0
 data     =                       bsize=4096   blocks=51200, imaxpct=25
         =                       sunit=0      swidth=0 blks
 naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
 log      =internal log           bsize=4096   blocks=1368, version=2
         =                       sectsz=512   sunit=0 blks, lazy- count=1
 realtime =none                   extsz=4096   blocks=0, rtextents=0
 Creating /mnt/xfs
 Mounting filesystem
 Creating ext4 filesystem on logical volume lvscript2

 mke2fs 1.46.5 (30-Dec-2021)
 Creating filesystem with 204800 1k blocks and 51200 inodes
 Filesystem UUID: b16383bf-7b65-4a00-bb6d-c297733f60b3
 Superblock backups stored on blocks: 
	8193, 24577, 40961, 57345, 73729

 Allocating group tables: done                            
 Writing inode tables: done                            
 Creating journal (4096 blocks): done
 Writing superblocks and filesystem accounting information: done 

 Creating /mnt/ext4
 Mounting filesystem
 Creating vfat filesystem on logical volume lvscript3

 mkfs.fat 4.2 (2021-01-31)
 Creating /mnt/vfat
 Mounting filesystem


 Done!

Lab 21-3: Write Script to Configure New Network Connection Profile

  • Present a new network interface to server40 in VirtualBox Manager.
  • As user1 with sudo on server40, write a single bash script to run the nmcli command to configure custom IP assignments (choose your own settings) on the new network device.
  • Make a copy of the /etc/hosts file as part of this script.
  • Choose a hostname of your choice and add a mapping to the /etc/hosts file without overwriting existing file content.
 [root@server40 ~]# vim /usr/local/bin/network.sh
 #!/bin/bash
 cp /etc/hosts /etc/hosts.bak &&
 nmcli c a type Ethernet con-name enp0s9 ifname enp0s9 ip4 10.32.32.2/24 gw4 10.32.32.1
 echo "10.32.33.14 frog.example.com frog" >> /etc/hosts
 [root@server40 ~]# chmod +x /usr/local/bin/network.sh
 [root@server40 ~]# network.sh
 Connection 'enp0s9' (5a342243-e77b-452e-88e2-8838d3ecea6d)  successfully added.
 [root@server40 ~]# cat /etc/hosts
 127.0.0.1   localhost localhost.localdomain localhost4  localhost4.localdomain4
 ::1         localhost localhost.localdomain localhost6  localhost6.localdomain6
 10.32.33.14 frog.example.com frog
 [root@server40 ~]# ip a
 enp0s9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel  state UP group default qlen 1000
    link/ether 08:00:27:1d:f4:c1 brd ff:ff:ff:ff:ff:ff
    inet 10.32.32.2/24 brd 10.32.32.255 scope global noprefixroute enp0s9
       valid_lft forever preferred_lft forever
    inet6 fe80::2c5d:31cc:1d79:6b43/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
 [root@server40 ~]# nmcli d s
 DEVICE  TYPE      STATE                   CONNECTION 
 enp0s3  ethernet  connected               enp0s3     
 enp0s8  ethernet  connected               enp0s8     
 enp0s9  ethernet  connected               enp0s9     
 lo      loopback  connected (externally)  lo  

Subsections of Boot

Boot Process, Grub2, and Kernel

Linux Kernel

  • controls everything on the system.
    • hardware
    • enforces security and access controls
    • runs, schedules, and manages processes and service daemons.
  • comprised of several modules.
  • new kernel must be installed or an existing kernel must be upgraded when the need arises from an application or functionality standpoint.
  • core of the Linux system.
  • manages
    • hardware
  • enforces security
  • regulates access to the system
  • handles
    • processes
    • services
    • application workloads

  • collection of software components called modules

    • Modules
      • device drivers that control hardware devices
        • processor
        • memory
        • storage
        • controller cards
        • peripheral equipment
      • interact with software subsystems
        • storage partitioning
        • file systems
        • networking
        • virtualization
  • Some modules are static to the kernel and are integral to system functionality,

  • Some modules are loaded dynamically as needed

  • RHEL 8.0 and RHEL 8.2 are shipped with kernel version 4.18.0 (4.18.0-80 and 4.18.0-193 to be specific) for the 64-bit Intel/AMD processor architecture computers with single, multi-core, and multi-processor configurations.

  • uname -m shows the architecture of the system.

  • Kernel requires a rebuild when a new functionality is added or removed.

  • functionality may be introduced by:

    • installing a new kernel
    • upgrading an existing one
    • installing a new hardware device, or
    • changing a critical system component.
  • existing functionality that is no longer needed may be removed to make the overall footprint of the kernel smaller for improved performance and reduced memory utilization.

  • tunable parameters are set that define a baseline for kernel functionality.

  • Some parameters must be tuned for some applications and database software to be installed smoothly and operate properly.

  • You can generate and store several custom kernels with varied configuration and required modules

  • only one of them can be active at a time.

  • different kernel may be loaded by interacting with GRUB2.

Kernel Packages

  • set of core kernel packages that must be installed on the system at a minimum to make it work.
  • Additional packages providing supplementary kernel support are also available.

Core and some add-on kernel packages.

Kernel Package         Description
kernel                 Contains no files, but ensures other kernel packages are accurately installed
kernel-core            Includes a minimal number of modules to provide core functionality
kernel-devel           Includes support for building kernel modules
kernel-modules         Contains modules for common hardware devices
kernel-modules-extra   Contains modules for not-so-common hardware devices
kernel-headers         Includes files to support the interface between the kernel and userspace
kernel-tools-libs      Includes the libraries to support the kernel tools
kernel-tools           Includes tools and programs to manipulate the kernel

  • Packages containing the source code for RHEL 8 are also available for those who wish to customize and recompile the code

List kernel packages installed on the system:

 dnf list installed kernel*
  • Shows six kernel packages that were loaded during the OS installation.

Analyzing Kernel Version

Check the version of the kernel running on the system to check for compatibility with an application or database:

 uname -r
 5.14.0-362.24.1.el9_3.x86_64

  • 5 - Major version
  • 14 - Major revision
  • 0 - Kernel patch version
  • 362 - Red Hat version
  • el9 - Enterprise Linux 9
  • x86_64 - Processor architecture

Kernel Directory Structure

Kernel and its support files (noteworthy locations)

  • /boot
  • /proc
  • /usr/lib/modules

/boot

  • Created at system installation.
  • Linux kernel
  • GRUB2 configuration
  • other kernel and boot support files.

View the /boot filesystem: ls -l /boot

  • four files are for the kernel:
    • vmlinuz - main kernel file
    • initramfs - main kernel’s boot image
    • config - configuration
    • System.map - mapping
  • two files are for the kernel rescue version
    • Have the current kernel version appended to their names
    • Have the string “rescue” embedded within their names

/boot/efi/ and /boot/grub2/

  • hold bootloader information specific to firmware type used on the system: UEFI or BIOS.

List /boot/grub2:

 [root@localhost ~]# ls -l /boot/grub2
 total 32
 -rw-r--r--. 1 root root   64 Feb 25 05:13 device.map
 drwxr-xr-x. 2 root root   25 Feb 25 05:13 fonts
 -rw-------. 1 root root 7049 Mar 21 04:47 grub.cfg
 -rw-------. 1 root root 1024 Mar 21 05:12 grubenv
 drwxr-xr-x. 2 root root 8192 Feb 25 05:13 i386-pc
 drwxr-xr-x. 2 root root 4096 Feb 25 05:13 locale
  • grub.cfg
    • bootable kernel information
  • grubenv
    • environment information that the kernel uses.

/boot/loader

  • storage location for configuration of the running and rescue kernels.
  • Configuration is stored in files under the /boot/loader/entries/
 [root@localhost ~]# ls -l /boot/loader/entries/
 total 12
 -rw-r--r--. 1 root root 484 Feb 25 05:13  8215ac7e45d34823b4dce2e258c3cc47-0-rescue.conf
 -rw-r--r--. 1 root root 460 Mar 16 06:17  8215ac7e45d34823b4dce2e258c3cc47-5.14.0-362.18.1.el9_3.x86_64.conf
 -rw-r--r--. 1 root root 459 Mar 16 06:17  8215ac7e45d34823b4dce2e258c3cc47-5.14.0-362.24.1.el9_3.x86_64.conf
  • The files are named using the machine id of the system as stored in /etc/machine-id/ and the kernel version they are for.

content of the kernel file:

 [root@localhost entries]# cat /boot/loader/entries/8215ac7e45d34823b4dce2e258c3cc47-5.14.0-362.18.1.el9_3.x86_64.conf
 title Red Hat Enterprise Linux (5.14.0-362.18.1.el9_3.x86_64) 9.3  (Plow)
 version 5.14.0-362.18.1.el9_3.x86_64
 linux /vmlinuz-5.14.0-362.18.1.el9_3.x86_64
 initrd /initramfs-5.14.0-362.18.1.el9_3.x86_64.img $tuned_initrd
 options root=/dev/mapper/rhel-root ro crashkernel=1G-4G:192M,4G-64G:256M,64G-:512M resume=/dev/mapper/rhel-swap rd.lvm.lv=rhel/root rd.lvm.lv=rhel/swap rhgb quiet $tuned_params
 grub_users $grub_users
 grub_arg --unrestricted
 grub_class rhel
  • “title” is displayed on the bootloader screen
  • “kernelopts” and “tuned_params” supply values to the booting kernel to control its behavior.

/proc

  • Virtual, memory-based file system
  • contents are created and updated in memory at system boot and during runtime
  • destroyed at system shutdown
  • current state of the kernel, which includes
    • hardware configuration
    • status information
      • processor
      • memory
      • storage
      • file systems
      • swap
      • processes
      • network interfaces
      • connections
      • routing
      • etc.
  • Data kept in tens of thousands of zero-byte files organized in a hierarchy.

List /proc: ls -l /proc

  • numerical subdirectories contain information about a specific process
    • process ID matches the subdirectory name.
  • other files and subdirectories contain information, such as
    • memory segments for processes and
    • configuration data for system components.
    • can view the configuration in vim

Show selections from the cpuinfo and meminfo files that hold processor and memory information: cat /proc/cpuinfo && cat /proc/meminfo

  • data here is used by commands such as top, ps, uname, free, uptime, and w to display information.

/usr/lib/modules/

  • holds information about kernel modules.
  • subdirectories are specific to the kernels installed on the system.

Long listing of /usr/lib/modules/ shows two installed kernels:

 [root@localhost entries]# ls -l /usr/lib/modules
 total 8
 drwxr-xr-x. 7 root root 4096 Mar 16 06:18 5.14.0-362.18.1.el9_3.x86_64
 drwxr-xr-x. 8 root root 4096 Mar 16 06:18 5.14.0-362.24.1.el9_3.x86_64

View /usr/lib/modules/5.14.0-362.18.1.el9_3.x86_64/:

 ls -l /usr/lib/modules/5.14.0-362.18.1.el9_3.x86_64
  • Subdirectories hold module-specific information for the kernel version.

/lib/modules/4.18.0-80.el8.x86_64/kernel/drivers/

  • stores modules for a variety of hardware and software components in various subdirectories:
 ls -l /usr/lib/modules/5.14.0-362.18.1.el9_3.x86_64/kernel/drivers
  • Additional modules may be installed on the system to support more components.

Installing the Kernel

  • requires extra care

  • could leave your system in an unbootable or undesirable state.

  • have the bootable medium handy prior to starting the kernel install process.

  • By default, the dnf command adds a new kernel to the system, leaving the existing kernel(s) intact. It does not replace or overwrite existing kernel files.

  • Always install a new version of the kernel instead of upgrading it.

  • The upgrade process removes any existing kernel and replaces it with a new one.

  • In case of a post-installation issue, you will not be able to revert to the old working kernel.

  • A newer kernel version is typically required:

    • if an application to be deployed on the system requires a different kernel to operate, or
    • when deficiencies or bugs identified in the existing kernel hamper its smooth operation.
  • new kernel

    • addresses existing issues
    • adds bug fixes
    • security updates
    • new features
    • improved support for hardware devices.
  • dnf is the preferred tool to install a kernel

  • it resolves and installs any required dependencies automatically.

  • rpm may be used but you must install any dependencies manually.

  • Kernel packages for RHEL are available to subscribers on Red Hat’s Customer Portal.
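
As a quick sketch of the preferred workflow (assuming the system is registered so dnf can reach the kernel repositories):

 # list the kernel packages currently installed
 rpm -q kernel
 # install the latest kernel as an additional entry; existing kernels stay intact
 sudo dnf install kernel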

Linux Boot Process

Multiple phases during the boot process.

  • Starts selective services during its transition from one phase into another.
  • Presents the administrator an opportunity to interact with a preboot program to boot the system into a non-default target.
  • Pass an option to the kernel.
  • Reset the lost or forgotten root user password.
  • Launches a number of services during its transition to the default or specified target.
  • boot process after the system has been powered up or restarted.
  • lasts until all enabled services are started.
  • login prompt will appear on the screen
  • boot process is automatic, but you
    • may need to interact with it to take a non-default action, such as
      • booting an alternative kernel
      • booting into a non-default operational state
      • repairing the system
      • recovering from an unbootable state

The boot process on an x86 computer may be split into four major phases: (1) the firmware phase, (2) the bootloader phase, (3) the kernel phase, and (4) the initialization phase.

The system accomplishes these phases one after the other while performing and attempting to complete the tasks identified in each phase.

The Firmware Phase (BIOS and UEFI)

firmware:

  • BIOS (Basic Input/Output System) or the UEFI (Unified Extensible Firmware Interface) code that is stored in flash memory on the x86-based system board.
  • runs the Power-On-Self-Test (POST) to detect, test, and initialize the system hardware components.
  • Installs appropriate drivers for the video hardware
  • exhibits system messages on the screen.
  • scans available storage devices to locate a boot device that starts with a 512-byte image containing:
      • 446 bytes of the bootloader program,
      • 64 bytes for the partition table
      • last two bytes with the boot signature.
      • referred to as the Master Boot Record (MBR)
      • located on the first sector of the boot disk.
      • As soon as it discovers a usable boot device, it loads the bootloader into memory and passes control over to it.
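
On a BIOS-based system, the MBR can be inspected directly. A minimal sketch, assuming the boot disk is /dev/sda (adjust the device name for your system); the final two bytes displayed should be the 55 aa boot signature:

 # dump the first 512-byte sector and show its tail, where the signature lives
 sudo dd if=/dev/sda bs=512 count=1 status=none | hexdump -C | tail -3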

BIOS

  • small memory chip in the computer that stores
    • system date and time,
    • list and sequence of boot devices,
    • I/O configuration,
    • etc.
  • configuration is customizable.
  • hardware initialization phase
    • detecting and diagnosing peripheral devices.
    • runs the POST on the devices as it finds them
    • installs drivers for the graphics card and the attached monitor
    • begins exhibiting system messages on the video hardware.
    • discovers a usable boot device
    • loads the bootloader program into memory, and passes control over to it.

UEFI

  • new 32/64-bit architecture-independent specification replacing BIOS.
  • delivers enhanced boot and runtime services
  • superior features such as speed over the legacy 16-bit BIOS.
  • has its own device drivers
  • able to mount and read extended file systems
  • includes UEFI-compliant application tools
  • supports one or more bootloader programs.
  • comes with a boot manager that allows you to choose an alternative boot source.

Bootloader Phase

  • Once the firmware phase is over and a boot device is detected,
  • the system loads a piece of software called a bootloader that is located in the boot sector of the boot device.
  • RHEL uses GRUB2 (GRand Unified Bootloader version 2) as the bootloader program. GRUB2 supports both BIOS and UEFI firmware.

The primary job of the bootloader program is to

  • spot the Linux kernel code in the /boot file system
  • decompress it
  • load it into memory based on the configuration defined in the /boot/grub2/grub.cfg file
  • transfer control over to it to further the boot process.

On UEFI-based systems,

  • GRUB2 looks in the EFI system partition, mounted at /boot/efi, instead
  • runs the kernel based on the configuration defined in the /boot/efi/EFI/redhat/grub.cfg file.

Kernel Phase

  • kernel is the central program of the operating system, providing access to hardware and system services.
  • After getting control from the bootloader, the kernel:
    • extracts the initial RAM disk (initrd) file system image found in the /boot file system into memory,

    • decompresses it

    • mounts it as read-only on /sysroot to serve as the temporary root file system

    • loads necessary modules from the initrd image to allow access to the physical disks and the partitions and file systems therein.

    • loads any required drivers to support the boot process.

    • Later, it unmounts the initrd image and mounts the actual physical root file system on / in read/write mode.

    • At this point, the necessary foundation has been built for the boot process to carry on and to start loading the enabled services.

    • kernel executes the systemd process with PID 1 and passes the control over to it.

Initialization Phase

  • fourth and the last phase in the boot process.

  • Systemd:

  • takes control from the kernel and continues the boot process.

  • is the default system initialization scheme used in RHEL 9.

  • starts all enabled userspace system and network services

  • Brings the system up to the preset boot target.

  • A boot target is an operational level that is achieved after a series of services have been started to get to that state.

  • system boot process is considered complete when all enabled services are operational for the boot target and users are able to log in to the system

GRUB2 Bootloader

  • After the firmware phase has concluded:
  • Bootloader presents a menu with a list of bootable kernels available on the system
  • Waits for a predefined amount of time before it times out and boots the default kernel.
  • You may want to interact with GRUB2 before the autoboot times out to boot a non-default kernel, boot to a different target, or customize the kernel boot string.
  • Press a key before the timeout expires to interrupt the autoboot process and interact with GRUB2.
  • autoboot countdown default value is 5 seconds.

Interacting with GRUB2

  • GRUB2 main menu shows a list of bootable kernels at the top.
  • Edit a selected kernel menu entry by pressing e, or go to the grub> command prompt by pressing c.

edit mode,

  • GRUB2 loads the configuration for the selected kernel entry from the /boot/grub2/grub.cfg file in an editor
  • enables you to make a desired modification before booting the system.
  • you can boot the system into a less capable operating target by adding “rescue”, “emergency”, or “3” to the end of the line that begins with the keyword “linux”,
  • Press Ctrl+x when done to boot.
  • one-time temporary change and it won’t touch the grub.cfg file.
  • press ESC to discard the changes and return to the main menu.
  • grub> command prompt appears when you press Ctrl+c in the edit window or c from the main menu.
  • command mode: execute debugging, recovery, etc.
  • view available commands by pressing the TAB key.

GRUB2 Commands

Understanding GRUB2 Configuration Files

/boot/grub2/grub.cfg

  • Referenced at boot time.
  • Generated automatically when a new kernel is installed or upgraded
  • not advisable to modify it directly, as your changes will be overwritten.

/etc/default/grub

  • primary source file used to regenerate grub.cfg.
  • Defines the directives that govern how GRUB2 should behave at boot time.
  • Any changes made to this file take effect only after the grub2-mkconfig utility has been executed to regenerate grub.cfg.

Default settings:

 [root@localhost default]# nl /etc/default/grub
     1	GRUB_TIMEOUT=5
     2	GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
     3	GRUB_DEFAULT=saved
     4	GRUB_DISABLE_SUBMENU=true
     5	GRUB_TERMINAL_OUTPUT="console"
     6	GRUB_CMDLINE_LINUX="crashkernel=1G-4G:192M,4G-64G:256M,64G-:512M resume=/dev/mapper/rhel-swap rd.lvm.lv=rhel/root rd.lvm.lv=rhel/swap rhgb quiet"
     7	GRUB_DISABLE_RECOVERY="true"
     8	GRUB_ENABLE_BLSCFG=true

| Directive | Description |
|---|---|
| GRUB_TIMEOUT | Wait time, in seconds, before booting the default kernel. Default is 5. |
| GRUB_DISTRIBUTOR | Name of the Linux distribution |
| GRUB_DEFAULT | Boots the selected option from the previous system boot |
| GRUB_DISABLE_SUBMENU | Enables/disables the appearance of the GRUB2 submenu |
| GRUB_TERMINAL_OUTPUT | Sets the default terminal |
| GRUB_CMDLINE_LINUX | Specifies the command line options to pass to the kernel at boot time |
| GRUB_DISABLE_RECOVERY | Lists/hides system recovery entries in the GRUB2 menu |
| GRUB_ENABLE_BLSCFG | Defines whether to use the new bootloader specification to manage bootloader configuration |

  • Default settings are good enough for normal system operation.

/boot/grub2/grub.cfg - /boot/efi/EFI/redhat/grub.cfg

  • Main GRUB2 configuration file that supplies boot-time configuration information.
  • located in the /boot/grub2/ on BIOS-based systems
  • /boot/efi/EFI/redhat/ on UEFI-based systems.
  • can be recreated manually with the grub2-mkconfig utility
  • automatically regenerated when a new kernel is installed or upgraded.
  • any previous manual changes made to the file are lost when it is regenerated.

grub2-mkconfig command

  • Uses the settings defined in helper scripts located in the /etc/grub.d directory.
 [root@localhost default]# ls -l /etc/grub.d
 total 104
 -rwxr-xr-x. 1 root root  9346 Jan  9 09:51 00_header
 -rwxr-xr-x. 1 root root  1046 Aug 29  2023 00_tuned
 -rwxr-xr-x. 1 root root   236 Jan  9 09:51 01_users
 -rwxr-xr-x. 1 root root   835 Jan  9 09:51 08_fallback_counting
 -rwxr-xr-x. 1 root root 19665 Jan  9 09:51 10_linux
 -rwxr-xr-x. 1 root root   833 Jan  9 09:51 10_reset_boot_success
 -rwxr-xr-x. 1 root root   892 Jan  9 09:51 12_menu_auto_hide
 -rwxr-xr-x. 1 root root   410 Jan  9 09:51 14_menu_show_once
 -rwxr-xr-x. 1 root root 13613 Jan  9 09:51 20_linux_xen
 -rwxr-xr-x. 1 root root  2562 Jan  9 09:51 20_ppc_terminfo
 -rwxr-xr-x. 1 root root 10869 Jan  9 09:51 30_os-prober
 -rwxr-xr-x. 1 root root  1122 Jan  9 09:51 30_uefi-firmware
 -rwxr-xr-x. 1 root root   218 Jan  9 09:51 40_custom
 -rwxr-xr-x. 1 root root   219 Jan  9 09:51 41_custom
 -rw-r--r--. 1 root root   483 Jan  9 09:51 README

00_header

  • sets the GRUB2 environment

10_linux

  • searches for all installed kernels on the same disk partition

30_os-prober

  • searches for the presence of other operating systems

40_custom and 41_custom

  • introduce any customization, such as adding custom entries to the boot menu.

grub.cfg file

  • Sources /boot/grub2/grubenv for kernel options and other settings.
 [root@localhost grub2]# cat grubenv
 # GRUB Environment Block
 # WARNING: Do not edit this file by tools other than grub-editenv!!!
 saved_entry=8215ac7e45d34823b4dce2e258c3cc47-5.14.0-362.24.1.el9_3.x86_64
 menu_auto_hide=1
 boot_success=0
 boot_indeterminate=0
 ############################################################################

If a new kernel is installed:

  • the existing kernel entries remain intact.
  • All bootable kernels are listed in the GRUB2 menu
  • any of the kernel entries can be selected to boot.

Lab: Change Default System Boot Timeout

  • change the default system boot timeout value to 8 seconds persistently, and validate.
  1. Edit the /etc/default/grub file and change the setting as follows: GRUB_TIMEOUT=8

  2. Execute the grub2-mkconfig command to regenerate grub.cfg:

 grub2-mkconfig -o /boot/grub2/grub.cfg

  3. Restart the system with sudo reboot and confirm the new timeout value when the GRUB2 menu appears.

Booting into Specific Targets

RHEL

  • boots into graphical target state by default if the Server with GUI software selection is made during installation.

  • can also be directed to boot into non-default but less capable operating targets from the GRUB2 menu.

  • offers emergency and rescue boot targets.

    • special target levels can be launched from the GRUB2 interface by
      • selecting a kernel
      • pressing e to enter the edit mode
      • appending the desired target name to the line that begins with the keyword “linux”.
      • Press ctrl+x to boot into the supplied target
      • Enter root password
      • reboot when you are done
  • You must know how to boot a RHEL 9 system into a specific target from the GRUB2 menu to modify the fstab file or reset an unknown root user password.

Append “emergency” to the kernel line entry:
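
For illustration only, an edited "linux" line might end up looking like this (the kernel version and options come from the sample system shown earlier and will differ on yours):

 linux /vmlinuz-5.14.0-362.18.1.el9_3.x86_64 root=/dev/mapper/rhel-root ro resume=/dev/mapper/rhel-swap rd.lvm.lv=rhel/root rd.lvm.lv=rhel/swap rhgb quiet emergency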

Other options:

  • “rescue”
  • “1”
  • “s”
  • “single”

Reset the root User Password

  • Terminate the boot process at an early stage to be placed in a special debug shell in order to reset the root password.
  1. Reboot or reset server1, and interact with GRUB2 by pressing a key before the autoboot times out. Highlight the default kernel entry in the GRUB2 menu and press e to enter the edit mode. Scroll down to the line entry that begins with the keyword “linux” and press the End key to go to the end of that line:

  2. Modify this kernel string and append “rd.break” to the end of the line.

  3. Press Ctrl+x when done to boot to the special shell. The system mounts the root file system read-only on the /sysroot directory. Make /sysroot appear as mounted on / using the chroot command:

 chroot /sysroot

  4. Remount the root file system in read/write mode for the passwd command to be able to modify the shadow file with a new password:

 mount -o remount,rw /

  5. Enter a new password for root by invoking the passwd command:

 passwd

  6. Create a hidden file called .autorelabel to instruct the operating system to run SELinux relabeling on all files, including the shadow file that was updated with the new root password, on the next reboot:

 touch /.autorelabel

  7. Issue the exit command to quit the chroot shell and then the reboot command to restart the system and boot it to the default target.

 exit
 reboot

Second method

Look into using init=/bin/bash for password recovery as a second method.
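
A commonly cited outline of that method, for reference (a sketch, not a verified procedure; details vary by release): append init=/bin/bash to the "linux" line in GRUB2 edit mode, press Ctrl+x, and then:

 # the root file system comes up read-only in this shell
 mount -o remount,rw /
 # set the new root password
 passwd
 # force SELinux relabeling on the next boot
 touch /.autorelabel
 # force an immediate reboot, since no init system is running
 /usr/sbin/reboot -f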

Boot Grub2 Kernel Labs

Lab: Enable Verbose System Boot

  • Remove “quiet” from the end of the value of the variable GRUB_CMDLINE_LINUX in the /etc/default/grub file
  • Run grub2-mkconfig to apply the update.
  • Reboot the system and observe that the system now displays verbose information during the boot process.

Lab: Reset root User Password

  • Reset the root user password by booting the system into emergency mode with SELinux disabled.
  • Try to log in with root and enter the new password after the reboot.

Lab: Install New Kernel

  • Check the current version of the kernel using the uname or rpm command.
  • Download a higher version from the Red Hat Customer Portal or rpmfind.net and install it.
  • Reboot the system and ensure the new kernel is listed on the bootloader menu (e.g., 5.14.0-427.35.1.el9_4.x86_64).

Lab: Download and Install a New Kernel

  • download the latest available kernel packages from the Red Hat Customer Portal
  • install them using the dnf command.
  • ensure that the existing kernel and its configuration remain intact.
  • As an alternative (preferred) to downloading kernel packages individually and then installing them, you can follow the instructions provided in “Containers” chapter to register server1 with RHSM and run sudo dnf install kernel to install the latest kernel and all the dependencies collectively.
  1. Check the version of the running kernel: uname -r

  2. List the kernel packages currently installed: rpm -qa | grep kernel

  3. Sign in to the Red Hat Customer Portal and click Downloads.

  4. Click “Red Hat Enterprise Linux 9” under “By Category”:

  5. Click Packages and enter “kernel” in the Search bar to narrow the list of available packages:

  6. Click “Download Latest” against the packages kernel, kernel-core, kernel-headers, kernel-modules, kernel-tools, and kernel-tools-libs to download them.

  7. Once downloaded, move the packages to the /tmp directory using the mv command.

  8. List the packages after moving them:

  9. Install all six packages at once using the dnf command: dnf install /tmp/kernel* -y

  10. Confirm the installation alongside the previous version: sudo dnf list installed kernel*

  11. The /boot/grub2/grubenv file now has the directive “saved_entry” set to the new kernel, which implies that this new kernel will boot up on the next system restart: sudo cat /boot/grub2/grubenv

  12. Reboot the system. You will see the new kernel entry in the GRUB2 boot list at the top. The system will autoboot this new default kernel.

  13. Run the uname command once the system has been booted up to confirm the loading of the new kernel: uname -r

  14. View the contents of the version and cmdline files under /proc to verify the active kernel: cat /proc/version && cat /proc/cmdline

Or just run sudo dnf install kernel.

System Initialization, Message Logging, and System Tuning

System Initialization and Service Management

systemd (system daemon)

  • System initialization and service management mechanism.

  • Units and targets for initialization, service administration, and state changes

  • Has fast-tracked system initialization and state transitioning by introducing:

    • Parallel processing of startup scripts
    • Improved handling of service dependencies
    • On-demand activation of services
  • Supports snapshotting of system states.

  • Used to handle operational states of services

  • Boots the system into one of several predefined targets

  • Tracks processes using control groups

  • Automatically maintains mount points.

  • First process with PID 1 that spawns at boot

  • Last process that terminates at shutdown.

  • Spawns several processes during a service startup.

  • Places the processes in a private hierarchy composed of control groups (or cgroups for short) to organize processes for the purposes of monitoring and controlling system resources such as:

    • processor
    • memory
    • network bandwidth
    • disk I/O
  • Limit, isolate, and prioritize process usage of resources.

  • Resources distributed among users, databases, and applications based on need and priority

  • Initiates distinct services concurrently, taking advantage of multiple CPU cores and other compute resources.

  • Creates sockets for all enabled services that support socket-based activation at the very beginning of the initialization process.

  • It passes them on to service daemon processes as they attempt to start in parallel.

  • This lets systemd handle inter-service order dependencies

  • Allows services to start without any delays.

  • systemd creates sockets first, starts daemons next, and caches in the socket buffer any client requests to daemons that have not yet started.

  • It delivers the pending client requests when the daemons they were awaiting come online.

Socket

  • Communication method that allows a single process to talk to another process on the same or remote system.

During the operational state, systemd:

  • maintains the sockets and uses them to reconnect other daemons and services that were interacting with an old instance of a daemon before that daemon was terminated or restarted.
  • services that use activation based on D-Bus (Desktop Bus) are started when a client application attempts to communicate with them for the first time.
  • Additional methods used by systemd for activation are
    • device-based
      • starting the service when a specific hardware type such as USB is plugged in
    • path-based
      • starting the service when a particular file or directory alters its state.

D-Bus

  • Allows multiple services running in parallel on a system or remote systems to talk to one another

on-demand activation

  • systemd defers the startup of services, such as Bluetooth and printing, until they are actually needed.

parallelization and on-demand activation

  • save time and compute resources.
  • contribute to expediting the boot process considerably.

Another benefit of parallelism is seen at system boot with file system checks, which could otherwise cause delays:

  • With autofs, the file systems are temporarily mounted on their normal mount points
  • as soon as the checks on the file systems are finished, systemd remounts them using their standard devices.
  • Parallelism in file system mounts does not affect the root and virtual file systems.

Units

  • systemd objects used for organizing boot and maintenance tasks, such as:

    • hardware initialization
    • socket creation
    • file system mounts
    • service startups.
  • Unit configuration is stored in their respective configuration files

  • Config files are:

    • Auto-generated from other configurations
    • Created dynamically from the system state
    • Produced at runtime
    • User-developed.
  • Units operational states:

    • active
    • inactive
    • in the process of being activated
    • deactivated
    • failed.
  • Units can be enabled or disabled

    • enabled unit
      • can be started to an active state
    • disabled unit
      • cannot be started.

Units have a name and a type, and they are encoded in files with names in the form unitname.type. Some examples:
    • tmp.mount
    • sshd.service
    • syslog.socket
    • umount.target.

There are two types of unit configuration files:

  • System unit files
    • distributed with installed packages and located in the /usr/lib/systemd/system/
  • User unit files
    • user-defined and stored in the /etc/systemd/user/

View unit config file directories:

 ls -l /usr/lib/systemd/system
 ls -l /etc/systemd/user

pkg-config command:

  • View systemd unit config directory information:

 pkg-config systemd --variable=systemdsystemunitdir
 pkg-config systemd --variable=systemduserconfdir

  • additional system units that are created at runtime and destroyed when they are no longer needed.

    • located in /run/systemd/system/
  • runtime unit files take precedence over the system unit files

  • user unit files take priority over the runtime files.

Unit configuration files

  • direct replacement of the initialization scripts found in /etc/rc.d/init.d/ in older RHEL releases.

11 unit types:

| Unit Type | Description |
|---|---|
| Automount | Offers automount capabilities for on-demand mounting of file systems |
| Device | Exposes kernel devices in systemd and may be used to implement device-based activation |
| Mount | Controls when and how to mount or unmount file systems |
| Path | Activates a service when monitored files or directories are accessed |
| Scope | Manages foreign processes instead of starting them |
| Service | Starts, stops, restarts, or reloads service daemons and the processes they are made up of |
| Slice | May be used to group units, which manage system processes in a tree-like structure for resource management |
| Socket | Encapsulates local inter-process communication or network sockets for use by matching service units |
| Swap | Encapsulates swap partitions |
| Target | Defines logical grouping of units |
| Timer | Useful for triggering activation of other units based on timers |

Unit files contain common and specific configuration elements.

Common elements

  • fall under the [Unit] and [Install] sections:
    • description
    • documentation location
    • dependency information
    • conflict information
    • other options
  • independent of the type of unit

Unit-specific configuration data

  • located under the unit type section:
    • [Service] for the service unit type
    • [Socket] for the socket unit type
    • and so forth

Sample unit file for sshd.service from the /usr/lib/systemd/system/:

david@fedora:~$ cat /usr/lib/systemd/system/sshd.service
[Unit]
Description=OpenSSH server daemon
Documentation=man:sshd(8) man:sshd_config(5)
After=network.target sshd-keygen.target
Wants=sshd-keygen.target

# Migration for Fedora 38 change to remove group ownership for standard host keys
# See https://fedoraproject.org/wiki/Changes/SSHKeySignSuidBit
Wants=ssh-host-keys-migration.service

[Service]
Type=notify
EnvironmentFile=-/etc/sysconfig/sshd
ExecStart=/usr/sbin/sshd -D $OPTIONS
ExecReload=/bin/kill -HUP $MAINPID
KillMode=process
Restart=on-failure
RestartSec=42s

[Install]
WantedBy=multi-user.target
  • Units can have dependencies based on a sequence (ordering) or a requirement.
    • sequence
      • outlines one or more actions that need to be taken before or after the activation of a unit (the Before and After directives).
    • requirement
      • specifies what must already be running (the Requires directive) or not running (the Conflicts directive) in order for the successful launch of a unit.

Example:

  • The graphical.target unit file tells us that the system must already be operating in the multi-user mode and not in rescue mode in order for it to boot successfully into the graphical mode.

Wants

  • May be used instead of Requires in the [Unit] or [Install] section so that the unit is not forced to fail activation if a required unit fails to start.

Run man systemd.unit for details on systemd unit files.

  • There are also other types of dependencies
  • systemd generally sets and maintains inter-service dependencies automatically
    • This can be done manually if needed.
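
As an illustration of dependency directives in practice, here is a minimal, hypothetical administrator-defined unit (say /etc/systemd/system/myapp.service; the service name and binary path are made up for this sketch):

 [Unit]
 Description=Sample application service
 After=network.target
 Requires=network.target

 [Service]
 ExecStart=/usr/local/bin/myapp
 Restart=on-failure

 [Install]
 WantedBy=multi-user.target

After creating or editing such a file, run systemctl daemon-reload so systemd picks up the change.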

Targets

  • logical collections of units
  • special systemd unit type with the .target file extension.
  • share the directory locations with other unit configuration files.
  • used to execute a series of units.
    • true for booting the system to a desired operational run level with all the required services up and running.
  • Some targets inherit services from other targets and add their own to them.
  • systemd includes several predefined targets:

| Target | Description |
|---|---|
| halt | Shuts down and halts the system |
| poweroff | Shuts down and powers off the system |
| shutdown | Shuts down the system |
| rescue | Single-user target for running administrative and recovery functions. All local file systems are mounted. Some essential services are started, but networking remains disabled. |
| emergency | Runs an emergency shell. The root file system is mounted in read-only mode; other file systems are not mounted. Networking and other services remain disabled. |
| multi-user | Multi-user target with full network support, but without GUI |
| graphical | Multi-user target with full network support and GUI |
| reboot | Shuts down and reboots the system |
| default | A special soft link that points to the default system boot target (multi-user.target or graphical.target) |
| hibernate | Puts the system into hibernation by saving the running state of the system on the hard disk and powering it off. When powered up, the system restores from its saved state rather than booting up. |

Systemd Targets

Target unit files

  • contain all information under the [Unit] section
    • description
    • documentation location
    • dependency and conflict information.

Show the graphical target file (/usr/lib/systemd/system/graphical.target):

[root@localhost ~]# cat /usr/lib/systemd/system/graphical.target
[Unit]
Description=Graphical Interface
Documentation=man:systemd.special(7)
Requires=multi-user.target
Wants=display-manager.service
Conflicts=rescue.service rescue.target
After=multi-user.target rescue.service rescue.target display-manager.service
AllowIsolate=yes

The Requires, Wants, Conflicts, and After directives indicate that the system must have reached multi-user.target and started display-manager.service, and must not be running rescue.service or rescue.target, in order to be declared running in the graphical target.

Run man systemd.target for details

systemctl Command

  • Performs administrative functions and supports numerous subcommands and flags.

| Subcommand | Description |
|---|---|
| daemon-reload | Re-reads and reloads all unit configuration files and recreates the entire dependency tree |
| enable (disable) | Activates (deactivates) a unit for autostart at system boot |
| get-default (set-default) | Shows (sets) the default boot target |
| get-property (set-property) | Returns (sets) the value of a property |
| is-active | Checks whether a unit is running |
| is-enabled | Displays whether a unit is set to autostart at system boot |
| is-failed | Checks whether a unit is in the failed state |
| isolate | Changes the running state of a system |
| kill | Terminates all processes for a unit |
| list-dependencies | Lists the dependency tree for a unit |
| list-sockets | Lists units of type socket |
| list-unit-files | Lists installed unit files |
| list-units | Lists known units. This is the default behavior when systemctl is executed without any arguments |
| mask (unmask) | Prohibits (permits) auto and manual activation of a unit to avoid potential conflict |
| reload | Forces a running unit to re-read its configuration file. This action does not change the PID of the running unit |
| restart | Stops a running unit and restarts it |
| show | Shows unit properties |
| start (stop) | Starts (stops) a unit |
| status | Presents the unit status information |

Listing and Viewing Units

List all units that are currently loaded in memory along with their status and description: systemctl

Output columns:

  • UNIT: shows the name of the unit and its location in the tree
  • LOAD: reflects whether the unit configuration file was properly loaded (loaded, not found, bad setting, error, and masked)
  • ACTIVE: returns the high-level activation state (active, reloading, inactive, failed, activating, and deactivating)
  • SUB: depicts the low-level unit activation state (reports unit-specific information)
  • DESCRIPTION: illustrates the unit’s content and functionality

  • systemctl only lists active units by default

--all

  • includes the inactive units:

List all active and inactive units of type socket:

 systemctl -t socket --all

List all units of type socket currently loaded in memory and the service they activate, sorted by the listening address:

 systemctl list-sockets

List all unit files (column 1) installed on the system and their current state (column 2):

 systemctl list-unit-files

List all units that failed to start at the last system boot:

 systemctl --failed

List the hierarchy of all dependencies (required and wanted units) for the current default target:

 systemctl list-dependencies

List the hierarchy of all dependencies (required and wanted units) for a specific unit such as atd.service:

 systemctl list-dependencies atd.service

Managing Service Units

systemctl subcommands to manage service units, including

  • starting
  • stopping
  • restarting
  • checking status

Check the current operational status and other details for the atd service:

 systemctl status atd

Output:

  • service description: read from /usr/lib/systemd/system/atd.service
  • Loaded: reveals the current load status of the unit configuration file in memory, and whether the unit is enabled or disabled for autostart at system boot. Other possibilities for “Loaded” include:
    • “error” (if there was a problem loading the file)
    • “not-found” (if no file associated with this unit was found)
    • “bad-setting” (if a key setting was missing)
    • “masked” (if the unit configuration file is masked)
  • Active: current activation status and the time the service was started. Possible states:
    • Active (running): the service is running with one or more processes
    • Active (exited): completed a one-time configuration
    • Active (waiting): running but waiting for an event
    • Inactive: not running
    • Activating: in the process of being activated
    • Deactivating: in the process of being deactivated
    • Failed: the service crashed or could not be started
  • Also includes the Main PID of the service process and more.

Disable the atd service from autostarting at the next system reboot:

 sudo systemctl disable atd

Re-enable atd to autostart at the next system reboot:

 systemctl enable atd

Check whether atd is set to autostart at the next system reboot:

 systemctl is-enabled atd

Check whether the atd service is running:

 systemctl is-active atd

To stop and restart atd, run either of the following:

 systemctl stop atd ; systemctl start atd
 systemctl restart atd

Show the details of the atd service:

 systemctl show atd

Prohibit atd from being enabled or disabled:

 systemctl mask atd

Try disabling or enabling atd and observe the effect of the previous command:

 systemctl disable atd

Reverse the effect of the mask subcommand and try disable and enable operations:

 systemctl unmask atd && systemctl disable atd && systemctl enable atd

Managing Target Units

systemctl can also manage target units.

  • view or change the default boot target
  • switch from one running target into another

View what units of type target are currently loaded and active:

 systemctl -t target

output:

  • target unit’s name
  • load state
  • high-level and low-level activation states
  • short description.

Add the --all option to the above to see all loaded targets in either active or inactive state.

Viewing and Setting Default Boot Target

  • view the current default boot target and to set it.
    • get-default and set-default subcommands

Check the current default boot target:

 systemctl get-default

  • You may have to modify the default boot target persistently for the exam.

Change the current default boot target from graphical.target to multi-user.target:

 systemctl set-default multi-user
  • removes the existing symlink (default.target) pointing to the old boot target and replaces it with the new target file path.
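
The result can be verified by querying the target and inspecting the symlink itself:

 systemctl get-default
 ls -l /etc/systemd/system/default.target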

revert the default boot target to graphical:

 systemctl set-default graphical

Switching into Specific Targets

  • Use systemctl to transition the running system from one target state into another.
  • graphical, multi-user, reboot, shutdown—are the most common
  • rescue and emergency targets are for troubleshooting and system recovery purposes,
  • poweroff and halt are similar to shutdown
  • hibernate is suitable for mobile devices.

Switch into multi-user using the isolate subcommand:

 systemctl isolate multi-user
  • This will stop the graphical service on the system and display the text-based console login screen.

Type in a username such as user1 and enter the password to log in:

Log in and return to the graphical target:

 systemctl isolate graphical

Shut down the system and power it off, use the following or simply run the poweroff command:

 systemctl poweroff
 poweroff

Shut down and reboot the system:

 systemctl reboot
 reboot

halt, poweroff, and reboot are symbolic links to the systemctl command:

 [root@localhost ~]# ls -l /usr/sbin/halt /usr/sbin/poweroff /usr/sbin/reboot
 lrwxrwxrwx. 1 root root 16 Aug 22  2023 /usr/sbin/halt -> ../bin/systemctl
 lrwxrwxrwx. 1 root root 16 Aug 22  2023 /usr/sbin/poweroff -> ../bin/systemctl
 lrwxrwxrwx. 1 root root 16 Aug 22  2023 /usr/sbin/reboot -> ../bin/systemctl

shutdown command options:

  • -H now: halts the system
  • -P now: powers off the system
  • -r now: reboots the system

The shutdown command:

  • broadcasts a warning message to all logged-in users
  • blocks new user login attempts
  • waits for the specified amount of time for users to log off
  • stops the services
  • shuts the system down to the specified target state.
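
For example, a delayed reboot with a warning to logged-in users (the message text is arbitrary):

 # reboot in 5 minutes, broadcasting the message to all logged-in users
 sudo shutdown -r +5 "Rebooting for maintenance"
 # cancel a pending shutdown
 sudo shutdown -c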

System Logging

  • Log files need to be rotated periodically to prevent the file system space from filling up.
  • Configuration files define the default and custom locations to direct the log messages to and configure rotation settings.
  • The system log file records custom messages sent to it.
  • systemd includes a service for viewing and managing system logs in addition to the traditional logging service.
  • This service maintains a log of runtime activities for faster retrieval and can be configured to store the information permanently.

System logging (syslog for short)

  • capture messages generated by:
    • kernel
    • daemons
    • commands
    • user activities
    • applications
    • other events
  • Forwards messages to various log files
  • For security auditing, service malfunctioning, system troubleshooting, or informational purposes.

rsyslogd daemon (rocket-fast system for log processing)

  • Responsible for system logging
  • Multi-threaded
  • support for:
    • enhanced filtering
    • encryption-protected message relaying
    • variety of configuration options.
  • Reads its configuration file /etc/rsyslog.conf and the configuration files located in /etc/rsyslog.d/ at startup.
  • /var/log
    • Default depository for most system log files
    • Other services such as audit, Apache, etc. have subdirectories here as well.

rsyslog service

  • modular
    • allows the modules listed in its configuration file to be dynamically loaded when/as needed.
    • Each module brings a new functionality to the system upon loading.

rsyslogd daemon

  • can be stopped manually using systemctl stop rsyslog

  • start, restart, reload, and status options are also available

  • A PID is assigned to the daemon at startup

  • rsyslogd.pid file is created in the /run directory to save the PID.

  • PID is stored to prevent multiple instances of this daemon.

The Syslog Configuration File

/etc/rsyslog.conf

  • primary syslog configuration file

View /etc/rsyslog.conf: cat /etc/rsyslog.conf

Output: three sections: Modules, Global Directives, and Rules.

  • Modules section

    • by default defines two modules, imuxsock and imjournal, which are loaded on demand
    • imuxsock module: furnishes support for local system logging via the logger command
    • imjournal module: allows access to the systemd journal

  • Global Directives section

    • contains three active directives.
    • Definitions in this section influence the overall functionality of the rsyslog service.
      • first directive
        • Sets the location for the storage of auxiliary files (/var/lib/rsyslog).
      • second directive
        • instructs the rsyslog service to save captured messages using traditional file formatting
      • third directive
        • directs the service to load additional configuration from files located in the /etc/rsyslog.d/ directory.
  • Rules section

    • each rule has two fields: the selector field (left) and the action field (right)
    • selector field is divided into two period-separated sub-fields:
      • facility (left): represents one or more system process categories that generate messages
      • priority (right): identifies the severity associated with the messages
    • a semicolon (;) is used as a distinction mark if multiple facility.priority groups are present
    • action field determines the destination to send the messages to
    • numerous supported facilities:
      • auth
      • authpriv
      • cron
      • daemon
      • kern
      • lpr
      • mail
      • news
      • syslog
      • user
      • uucp
      • local0 through local7
      • the asterisk (*) character represents all of them
    • supported priorities in descending order of criticality:
      • emerg
      • alert
      • crit
      • error
      • warning
      • notice
      • info
      • debug
      • none
  • If a lower priority is selected, the daemon logs all messages of the service at that and higher levels.
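
For illustration, rules similar to the stock RHEL entries look like this (paraphrased; check your own rsyslog.conf for the exact lines):

 # authentication/authorization messages go to the secure file
 authpriv.*                                  /var/log/secure
 # all mail messages in one place
 mail.*                                      /var/log/maillog
 # everything info and above, minus a few noisy facilities
 *.info;mail.none;authpriv.none;cron.none    /var/log/messages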

After modifying the syslog configuration file, inspect it for errors before restarting the service: rsyslogd -N 1 (-N performs the inspection; 1 sets verbosity level 1)

  • Restart or reload the rsyslog service in order for the changes to take effect.

Rotating Log Files

Log location is defined in the rsyslog configuration file.

View the /var/log/ directory: ls -l /var/log

systemd unit file called logrotate.timer under the /usr/lib/systemd/system directory invokes the logrotate service (/usr/lib/systemd/system/logrotate.service) on a daily basis. Here is what this file contains:

 [root@localhost cron.daily]# systemctl cat logrotate.timer

 # /usr/lib/systemd/system/logrotate.timer
 [Unit]
 Description=Daily rotation of log files
 Documentation=man:logrotate(8) man:logrotate.conf(5)

 [Timer]
 OnCalendar=daily
 AccuracySec=1h
 Persistent=true

 [Install]
 WantedBy=timers.target

The logrotate service runs rotations as per the schedule and other parameters defined in the /etc/logrotate.conf and additional log configuration files located in the /etc/logrotate.d directory.

/etc/cron.daily/logrotate script

  • invokes the logrotate command on a daily basis.
  • runs a rotation as per the schedule defined in /etc/logrotate.conf
  • configuration files for various services are located in /etc/logrotate.d/
  • the configuration files may be modified to alter the schedule or include additional tasks on log files, such as:
    • removing
    • compressing
    • emailing
 grep -v ^$ /etc/logrotate.conf
 # see "man logrotate" for details
 # global options do not affect preceding include directives
 # rotate log files weekly
 weekly
 # keep 4 weeks worth of backlogs
 rotate 4
 # create new (empty) log files after rotating old ones
 create
 # use date as a suffix of the rotated file
 dateext
 # uncomment this if you want your log files compressed
 #compress
 # packages drop log rotation information into this directory
 include /etc/logrotate.d
 # system-specific logs may be also be configured here.

content:

  • default log rotation frequency (weekly).

  • period of time (4 weeks) to retain the rotated logs before deleting them.

  • Each time a log file is rotated:

    • an empty replacement file is created, and the rotated file gets the date as a suffix to its name
    • the rsyslog service is restarted
  • the file presents the option of compressing the rotated files using the gzip utility.

  • logrotate command checks for the presence of additional log configuration files in /etc/logrotate.d/ and includes them as necessary.
  • directives defined in the /etc/logrotate.conf file have a global effect on all log files
  • can define custom settings for a specific log file in /etc/logrotate.conf or create a separate file in /etc/logrotate.d/
  • settings defined in user-defined files override the global settings.
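
A minimal sketch of such a drop-in file, assuming a hypothetical application log at /var/log/myapp.log:

 /var/log/myapp.log {
     monthly
     rotate 6
     compress
     missingok
     notifempty
 }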

The /etc/logrotate.d/ directory includes additional configuration files for other service logs:

 ls -l /etc/logrotate.d/

Show the file content for btmp (records of failed user login attempts) that is used to control the rotation behavior for /var/log/btmp:

 cat /etc/logrotate.d/btmp
  • rotation is once a month.
  • replacement file created will get read/write permission bits for the owner (root)
  • owning group will be set to utmp
  • rsyslog service will maintain one rotated copy of the btmp log file.

The Boot Log File

Logs generated during the system startup:

  • Display the service startup sequence.
  • Status showing whether the service was started successfully.
  • May help in any post-boot troubleshooting if required.
  • Stored in /var/log/boot.log.

View /var/log/boot.log:

 sudo head /var/log/boot.log

output:

  • OK or FAILED
    • indicates whether the service was started successfully or not.

The System Log File

/var/log/messages

  • default location for storing most system activities, as defined in the rsyslog.conf file
  • saves log information in plain text format
  • may be viewed with any file display utility (cat, more, pg, less, head, or tail)
  • may be observed in real time using the tail command with the -f switch.
  • The messages file captures:
    • date and time of the activity
    • hostname of the system
    • name and PID of the service
    • short description of the event being logged

View /var/log/messages:

 tail /var/log/messages

Logging Custom Messages

The Modules section in the rsyslog.conf file

  • Provides the support via the imuxsock module to record custom messages to the messages file using the logger command.

logger command

Add a note indicating the calling user has rebooted the system:

 logger -i "System rebooted by $USER"

observe the message recorded along with the timestamp, hostname, and PID:

 tail -1 /var/log/messages

-p option

  • specify a priority level either as a numerical value or in the facility.priority format.
  • default priority is user.notice.
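
For example, to log a message at a non-default facility and priority (the message text is arbitrary):

 logger -p local0.warning "Disk space check completed"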

View logger man pages: man logger

The systemd Journal

  • Systemd-based logging service for the collection and storage of logging data.

  • Implemented via the systemd-journald daemon.

  • Gathers, stores, and displays logging events from a variety of sources such as:

    • the kernel
    • rsyslog and other services
    • the initial RAM disk
    • alerts generated during the early boot stage

  • Journals:

    • are stored in binary format files
    • located in /run/log/journal/ (remember, /run is not a persistent directory)
    • structured and indexed for faster and easier searches

  • May be viewed and managed using the journalctl command.

  • Can enable persistent storage for the logs if desired.

  • RHEL runs both rsyslogd and systemd-journald concurrently.

  • data gathered by systemd-journald may be forwarded to rsyslogd for further processing and persistent storage in text format.

/etc/systemd/journald.conf

  • main config file for journald
  • contains numerous default settings that affect the overall functionality of the service.

Retrieving and Viewing Messages

journalctl command

  • retrieve messages from the journal for viewing in a variety of ways using different options.

run journalctl without any options to see all the messages generated since the last system reboot: journalctl

  • format of the messages is similar to that of the events logged to /var/log/messages
  • Each line begins with a timestamp followed by the system hostname, process name with or without a PID, and the actual message.

Display verbose output for each entry:

 journalctl -o verbose

View all events since the last system reboot:

 journalctl -b

-b 0 is the default (the current boot); -b -1 shows the previous boot and -b -2 the one before it. The -1 and -2 options only work if journal logs are stored persistently.

View only kernel-generated alerts since the last system reboot:

 journalctl -kb0

Limit the output to view 3 entries only:

 journalctl -n3

To show all alerts generated by a particular service, such as crond:

 journalctl /usr/sbin/crond

Retrieve all messages logged for a certain process, such as the PID associated with the chronyd service:

 journalctl _PID=$(pgrep chronyd)

Reveal all messages for a particular system unit, such as sshd.service:

 journalctl _SYSTEMD_UNIT=sshd.service

View all error messages logged between a date range, such as October 10, 2019 and October 16, 2019:

 journalctl --since 2019-10-10 --until 2019-10-16 -p err

Get all warning messages that have appeared today and display them in reverse chronological order:

 journalctl --since today -p warning -r 
  • Can specify the time range in hh:mm:ss format, or yesterday, today, or tomorrow as well.

Follow log messages in real time with the -f (follow) option:

 journalctl -f

For details:

 man journalctl
 man systemd-journald

Preserving Journal Information

  • enable a separate storage location for the journal to save all its messages there persistently.
  • default is under /var/log/journal/

The systemd-journald service supports four options with the Storage directive to control how the logging data is handled.

| Option | Description |
|---|---|
| volatile | Stores data in memory only |
| persistent | Stores data permanently under /var/log/journal and falls back to the memory-only option if this directory does not exist or has a permission or other issue. The service creates /var/log/journal in case of its non-existence. |
| auto | Similar to “persistent” but does not create /var/log/journal if it does not exist. This is the default option. |
| none | Disables both volatile and persistent storage options. Not recommended. |

Journal Data Storage Options

Create /var/log/journal/ manually and keep the preferred “auto” option to get:

  • faster query responses from in-memory storage
  • access to historical log data from on-disk storage.

Lab: Configure Persistent Storage for Journal Information

Run the necessary steps to enable and confirm persistent storage for the journals.

  1. Create a subdirectory called journal under the /var/log/ directory and confirm:

 sudo mkdir /var/log/journal

  2. Restart the systemd-journald service and confirm:

 systemctl restart systemd-journald && systemctl status systemd-journald

  3. List the new directory and observe that a subdirectory matching the machine ID of the system, as defined in the /etc/machine-id file, is created:

 ll /var/log/journal && cat /etc/machine-id

  • This log file is rotated automatically once a month based on the settings in the journald.conf file.

Check the manual pages of journald.conf:

 man journald.conf

System Tuning

System tuning service

  • Monitors connected devices
  • Tweaks their parameters to improve performance or conserve power.
  • A recommended tuning profile may be identified and activated for optimal performance and power saving.

tuned

  • system tuning service
  • monitors storage, networking, processor, audio, video, and a variety of other connected devices
  • adjusts their parameters for better performance or power saving based on a chosen profile.
  • Several predefined tuning profiles may be activated either statically or dynamically.

tuned service

  • static behavior (default)

    • activates a selected profile at service startup and continues to use it until it is switched to a different profile.
  • dynamic

    • adjusts the system settings based on the live activity data received from monitored system components

tuned Tuning Profiles

  • Nine profiles to support a variety of use cases.
  • Can create custom profiles from scratch or by using one of the existing profiles as a template.
  • Must store the custom profile in /etc/tuned/

Three groups: (1) Performance (2) Power consumption (3) Balanced

| Profile | Description |
|---|---|
| **Performance** | |
| desktop | Based on the balanced profile for desktop systems. Offers improved throughput for interactive applications. |
| latency-performance | For low-latency requirements |
| network-latency | Based on latency-performance for faster network throughput |
| network-throughput | Based on the throughput-performance profile for maximum network throughput |
| virtual-guest | Optimized for virtual machines |
| virtual-host | Optimized for virtualized hosts |
| **Power Saving** | |
| powersave | Saves maximum power at the cost of performance |
| **Balanced/Max Profiles** | |
| balanced | Preferred choice for systems that require a balance between performance and power saving |
| throughput-performance | Provides maximum performance and consumes maximum power |

Tuning Profiles

Predefined profiles are located in /usr/lib/tuned/ in subdirectories matching their names.

View predefined profiles:

 ls -l /usr/lib/tuned

The default active profile set on server1 and server2 is the virtual-guest profile, as the two systems are hosted in a VirtualBox virtualized environment.

The tuned-adm Command

  • single profile management command that comes with tuned
  • can list active and available profiles, query current settings, switch between profiles, and turn the tuning off.
  • Can recommend the best profile for the system based on many system attributes.

View the man pages:

 man tuned-adm

Lab 12-2: Manage Tuning Profiles

  • install the tuned service
  • start it now
  • enable it for auto-restart upon future system reboots.
  • display all available profiles and the current active profile.
  • switch to one of the available profiles and confirm.
  • determine the recommended profile for the system and switch to it.
  • deactivate tuning and reactivate it.
  • confirm the activation
  1. Install the tuned package if it is not already installed:

 dnf install tuned

  2. Start the tuned service and set it to autostart at reboots:

 systemctl --now enable tuned

  3. Confirm the startup:

 systemctl status tuned

  4. Display the list of available and active tuning profiles:

 tuned-adm list

  5. List only the current active profile:

 tuned-adm active

  6. Switch to the powersave profile and confirm:

 tuned-adm profile powersave
 tuned-adm active

  7. Determine the recommended profile for server1 and switch to it:

 [root@localhost ~]# tuned-adm recommend
 virtual-guest
 [root@localhost ~]# tuned-adm profile virtual-guest
 [root@localhost ~]# tuned-adm active
 Current active profile: virtual-guest

  8. Turn off tuning:

 [root@localhost ~]# tuned-adm off
 [root@localhost ~]# tuned-adm active
 No current active profile.

  9. Reactivate tuning and confirm:

 [root@localhost ~]# tuned-adm profile virtual-guest
 [root@localhost ~]# tuned-adm active
 Current active profile: virtual-guest

Sysinit, Logging, and Tuning Labs

Lab: Modify Default Boot Target

  • Modify the default boot target from graphical to multi-user, and reboot the system to test it.
 systemctl set-default multi-user
  • Run the systemctl and who commands after the reboot for validation.
  • Restore the default boot target back to graphical and reboot to verify.

Lab: Record Custom Alerts

  • Write the message “This is $LOGNAME adding this marker on $(date)” to /var/log/messages.
 logger -i "This is $LOGNAME adding this marker on  $(date)"
  • Ensure that variable and command expansions work. Verify the entry in the file.
 tail -l /var/log/messages

Lab: Apply Tuning Profile

  • identify the current system tuning profile with the tuned-adm command.
 tuned-adm active
  • List all available profiles.
 tuned-adm list
  • List the recommended profile for server1.
 tuned-adm recommend
  • Apply the “balanced” profile and verify with tuned-adm.
 tuned-adm profile balanced
 tuned-adm active

Subsections of Containers

Containers

Introduction to Containers

  • Take advantage of the native virtualization features available in the Linux kernel.
  • Each container typically encapsulates one self-contained application that includes all dependencies such as library files, configuration files, software binaries, and services.

Traditional server/ application deployment:

  • Applications may have conflicting requirements in terms of shared library files, package dependencies, and software versioning.
  • Patching or updating the operating system may result in breaking an application functionality.
  • Developers perform an analysis on their current deployments before they decide whether to collocate a new application with an existing one or to go with a new server without taking the risk of breaking the current operation.

Container Model:

  • Developers can now package their application alongside dependencies, shared library files, environment variables, and other specifics in a single image file and use that file to run the application in a unique, isolated “environment” called container.

  • A container is essentially a set of processes that runs in complete seclusion on a Linux system.

  • A single Linux system running on bare metal hardware or in a virtual machine may have tens or hundreds of containers running at a time.

  • The underlying hardware may be located either on the ground or in the cloud.

  • Each container is treated as a complete whole, which can be tagged, started, stopped, restarted, or even transported to another server without impacting other running containers.

  • Any conflicts that may exist among applications, within application components, or with the operating system can be evaded.

  • Applications encapsulated to run inside containers are called containerized applications.

  • Containerization is a growing trend for architecting and deploying applications, application components, and databases in real world environments.

Containers and the Linux Features

  • Container technology employs some of the core features available in the Linux kernel.
  • These features include:
    • control groups
    • namespaces
    • seccomp (secure computing mode)
    • SELinux

Control Groups (cgroups)

  • Split processes into groups to set limits on their consumption of compute resources: CPU, memory, disk, and network I/O.
  • These restrictions prevent individual processes from overutilizing available resources.
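As a quick sketch (assuming a cgroups v2 host such as RHEL 9 and the ubi9 image alias used in the labs below), podman exposes cgroup limits as run-time flags; the container can read its own limit from its cgroup (512 MiB = 536870912 bytes):

 [user1@server10 ~]$ podman run --rm --memory 512m --cpus 1 ubi9 cat /sys/fs/cgroup/memory.max
 536870912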

Namespaces

  • Restrict the ability of process groups to see or access system resources: PIDs, network interfaces, mount points, hostname, etc.
  • Create a layer of isolation between process groups and the rest of the system.
  • This isolation helps keep the environment secure, performant, and stable for containerized applications as well as the host operating system.
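One way to observe this isolation (a sketch; assumes the ubi9 image alias resolves as in the labs below) is to compare the PID namespace of a host shell with that of a containerized process; the two namespace IDs will differ:

 [user1@server10 ~]$ readlink /proc/self/ns/pid                        # host PID namespace
 [user1@server10 ~]$ podman run --rm ubi9 readlink /proc/self/ns/pid   # container PID namespace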

Secure Computing Mode (seccomp) and SELinux

  • Impose security constraints, protecting processes from one another and the host operating system from the processes running on it.
  • Container technology employs these characteristics to run processes isolated in a highly secure environment with full control over what they can or cannot do.
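Two quick ways to see these mechanisms on a RHEL 9 host (a sketch; assumes at least one container is running): processes of running containers are confined to the SELinux container_t domain, and podman's default seccomp profile ships as a JSON syscall allowlist:

 [root@server10 ~]# ps -eZ | grep container_t
 [root@server10 ~]# less /usr/share/containers/seccomp.json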

Benefits of Using Containers

Isolation

  • Containers are not affected by changes in the host operating system or in other hosted or containerized applications, as they run fully isolated from the rest of the environment.

Loose Coupling

  • Containerized applications are loosely coupled with the underlying operating system due to their self-containment and minimal level of dependency.

Maintenance Independence

  • Maintenance is performed independently on individual containers.

Less Overhead

  • Containers require fewer system resources than do bare metal and virtual servers.

Transition Time

  • Containers require a few seconds to start and stop.

Transition Independence

  • Transitioning from one state to another (start or stop) is independent of other containers, and it does not affect or require a restart of any underlying host operating system service.

Portability

  • Containers can be migrated to other servers without modifications to the contained applications.
  • Target servers may be bare metal or virtual and located on-premises or in the cloud.

Reusability

  • The same container image can be used to run identical containers in development, test, preproduction, and production environments.
  • There is no need to rebuild the image.

Rapidity

  • The container technology allows for accelerated application development, testing, deployment, patching, and scaling.
  • There is no need for exhaustive testing.

Version Control

  • Container images can be version-controlled, which gives users the flexibility in choosing the right version to run a container.

Container Home: Bare Metal or Virtual Machine

Containers

  • run directly on the underlying operating system whether it be running on a bare metal server or in a virtual machine.
  • Share hardware and operating system resources securely among themselves.
  • Containerized applications stay lightweight and isolated, and run in parallel.
  • Share the same Linux kernel and require far fewer hardware resources than do virtual machines, which contributes to their speedy start and stop.
  • Given the presence of an extra layer of hypervisor services, it may be more beneficial and economical to run containers directly on non-virtualized physical servers.

Container Images and Container Registries

  • Launching a container requires a pre-packaged image to be available.

container image

  • Essentially a static file that is built with all necessary components (application binaries, library files, configuration settings, environment variables, static data files, etc.)

  • Required by an application to run smoothly, securely, and independently.

  • RHEL follows the open container initiative (OCI) to allow users to build images based on industry standard specifications that define the image format, host operating system metadata, and supported hardware architectures.

  • An OCI-compliant image can be executed and managed with OCI-compliant tools such as podman (pod manager) and Docker.

  • Images can be version-controlled, giving users the flexibility to use the latest or any previous version to launch their containers.

  • A single image can be used to run several containers at once.

  • Container images adhere to a standard naming convention for identification.

  • This naming convention is referred to as the fully qualified image name (FQIN).

    • Comprised of four components:
      • (1) the storage location (registry_name)
      • (2) the owner or organization name (user_name)
      • (3) a unique repository name (repo_name)
      • (4) an optional version (tag).
    • The syntax of an FQIN is: registry_hostname/user_name/repo_name:tag, e.g., registry.access.redhat.com/ubi9/ubi:latest.
  • Images are stored and maintained in public or private registries;

  • They need to be downloaded and made locally available for consumption.

  • There are several registries available on the Internet.

    • registry.redhat.io (images based on official Red Hat products; requires authentication),
    • registry.access.redhat.com (requires no authentication)
    • registry.connect.redhat.com (images based on third-party products)
    • hub.docker.com (Docker Hub).
  • The three Red Hat registries may be searched using the Red Hat Container Catalog at catalog.redhat.com/software/containers/search.

  • Additional registries may be added as required.

  • Private registries may also require authentication for access.

Rootful vs. Rootless Containers

  • Containers can be launched with the root user privileges (sudo or directly as the root user).

  • This gives containers full access to perform administrative functions, including the ability to map privileged network ports (below 1024).

  • Launching containers with superuser rights opens a gate to potential unauthorized access to the container host if a container is compromised due to a vulnerability or misconfiguration.

  • To secure containers and the underlying operating system, containers should be launched and interacted with as normal Linux users.

  • Such containers are referred to as rootless containers.

  • Rootless containers allow regular, unprivileged users to run containers without the ability to perform tasks that require privileged access.
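A minimal sketch of the difference (assuming the ubi9 image): the same image can be run rootless by a normal user or rootful via sudo. Both report uid 0 inside the container, but in the rootless case user namespaces map that root back to the invoking user on the host:

 [user1@server10 ~]$ podman run --rm ubi9 id         # rootless: container root maps to user1 on the host
 [user1@server10 ~]$ sudo podman run --rm ubi9 id    # rootful: container root is root on the host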

Working with Images and Containers

Lab: Install Necessary Container Support

  • Install the necessary software to set the foundation for completing the exercises in the remainder of the chapter.
  • The standard RHEL 9.1 image includes a package called container-tools that consists of all the required components and commands.
  • Use the standard dnf command to install the package.

1. Install the container-tools package:

 [root@server10 ~]# dnf install -y container-tools

 Upgraded:
  aardvark-dns-2:1.10.0-3.el9_4.x86_64 
  buildah-2:1.33.7-3.el9_4.x86_64         
  netavark-2:1.10.3-1.el9.x86_64            
  podman-4:4.9.4-6.el9_4.x86_64                                   
 Installed:
  container-tools-1-14.el9.noarch        
  podman-docker-4:4.9.4-6.el9_4.noarch 
  podman-remote-4:4.9.4-6.el9_4.x86_64    
  python3-podman-3:4.9.0-1.el9.noarch 
  python3-pyxdg-0.27-3.el9.noarch    
  python3-tomli-2.0.1-5.el9.noarch   
  skopeo-2:1.14.3-3.el9_4.x86_64             
  toolbox-0.0.99.5-2.el9.x86_64      
  udica-0.2.8-1.el9.noarch    

2. Verify the package installation:

 [root@server10 ~]# dnf list container-tools
 Updating Subscription Management repositories.
 Last metadata expiration check: 14:53:32 ago on Wed 31 Jul 2024 05:45:56 PM MST.
 Installed Packages
 container-tools.noarch   1-14.el9    @rhel-9-for-x86_64-appstream-rpms

podman Command

  • Find, inspect, retrieve, and delete images
  • Run, stop, list, and delete containers.
  • The podman command is used for most of these operations.

Subcommands

Image Management

build

  • Builds an image using instructions delineated in a Containerfile

images

  • Lists downloaded images from local storage

inspect

  • Examines an image and displays its details

login/logout

  • Logs in/out to/from a container registry. A login may be required to access private and protected registries.

pull

  • Downloads an image to local storage from a registry

rmi

  • Removes an image from local storage

search

  • Searches for an image. The following options can be included with this subcommand:
  1. A partial image name in the search will produce a list of all images containing the partial name.
  2. The --no-trunc option makes the command exhibit output without truncating it.
  3. The --limit <number> option limits the displayed results to the specified number.

tag

  • Adds a name to an image. The default tag is 'latest', which classifies the image as the latest version. Older images may have specific version identifiers.
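For example (hypothetical target name), assigning a version tag to a locally stored image; both names then point at the same image ID:

 [user1@server10 ~]$ podman tag registry.access.redhat.com/ubi9/ubi:latest ubi9-local:v1
 [user1@server10 ~]$ podman images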

Container Management

attach

  • Attaches to a running container

exec

  • Runs a process in a running container

generate

  • Generates a systemd unit configuration file that can be used to control the operational state of a container. The --new option is important and is employed in later exercises.

info

  • Reveals system information, including the defined registries

inspect

  • Exhibits the configuration of a container

ps

  • Lists running containers (includes stopped containers with the -a option)

rm

  • Removes a container

run

  • Launches a new container from an image. Some options such as -d (detached), -i (interactive), and -t (terminal) are important and are employed in exercises where needed.

start/stop/restart

  • Starts, stops, or restarts a container

skopeo Command

  • Utilized for interacting with local and remote images and registries.
  • Has numerous subcommands available; however, you will be using only the inspect subcommand to examine the details of an image stored in a remote registry.

/etc/containers/registries.conf

  • System-wide configuration file for image registries.
  • Normal Linux users may store a customized copy of this file, if required, under the ~/.config/containers directory.
  • Settings stored in the per-user file will take precedence over those stored in the system-wide file.
    • Useful for running rootless containers.
  • Defines searchable and blocked registries.
 [root@server10 ~]# grep -Ev '^#|^$' /etc/containers/registries.conf
 unqualified-search-registries = ["registry.access.redhat.com", "registry.redhat.io", "docker.io"]
 short-name-mode = "enforcing"
  • The output shows three registries.
  • The podman command searches these registries for container images in the given order.
  • Can add additional registries to the list.
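For instance, a normal user can seed a per-user copy from the system-wide file and edit it without affecting other users (paths as described above):

 [user1@server10 ~]$ mkdir -p ~/.config/containers
 [user1@server10 ~]$ cp /etc/containers/registries.conf ~/.config/containers/
 [user1@server10 ~]$ vim ~/.config/containers/registries.conf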

Add a private registry called registry.private.myorg.io with the highest priority:

 [root@server10 ~]# vim /etc/containers/registries.conf
 unqualified-search-registries = ["registry.private.myorg.io",
 "registry.access.redhat.com", "registry.redhat.io", "docker.io"]

If this private registry is the only one to be used, you can take the rest of the registry entries out of the list:

 unqualified-search-registries = ["registry.private.myorg.io"]

EXAM TIP: As there is no Internet access provided during Red Hat exams, you may have to access a network-based registry to download images.

Viewing Podman Configuration and Version

  • The podman command references various system runtime and configuration files and runs certain Linux commands in the background to gather and display information.
  • For instance, it looks for registries and storage data in the system-wide and per-user configuration files, pulls memory information from the /proc/meminfo file, executes uname -r to obtain the kernel version, and so on.
  • podman’s info subcommand shows all this information.

Here is a sample when this command is executed as a normal user (user1):

 [user1@server10 ~]$ podman info
 host:
  arch: amd64
  buildahVersion: 1.33.8
  cgroupControllers:
  - memory
  - pids
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
 ...
  • Re-run the command as root (preceded by sudo if running as user1) and compare the values for the settings “rootless” under host and “ConfigFile” and “ImageStore” under store.

  • The differences lie between where the root and rootless (normal) users store and obtain configuration data, the number of container images they have locally available, and so on.

 [root@server10 ~]# podman info
 host:
  arch: amd64
  buildahVersion: 1.33.8
  cgroupControllers:
  - cpuset
  - cpu
  - io
  - memory
  - hugetlb
  - pids
  - rdma
  - misc
...

Similarly, you can run the podman command as follows to check its version:

 [root@server10 ~]# podman version
 Client:       Podman Engine
 Version:      4.9.4-rhel
 API Version:  4.9.4-rhel
 Go Version:   go1.21.11 (Red Hat 1.21.11-1.el9_4)
 Built:        Mon Jul  1 03:27:14 2024
 OS/Arch:      linux/amd64

Image Management

Container images

  • Are available from numerous private and public registries.
  • They are pre-built for a variety of use cases.
  • You can search through registries to find the one that suits your needs.
  • You can examine their metadata before downloading them for consumption.
  • Downloaded images can be removed when no longer needed to conserve local storage.
  • The same pair of commands—podman and skopeo—is employed for these operations.

Lab: Search, Examine, Download, and Remove an Image

  • Log in to the registry.redhat.io registry
  • Look for an image called mysql-80 in the registry, examine its details, pull it to your system, confirm the retrieval, and finally erase it from the local storage.

1. Log in to the specified Red Hat registry:

 [user1@server10 ~]$ podman login registry.redhat.io

2. Confirm a successful login:

 [user1@server10 ~]$ podman login registry.redhat.io --get-login

3. Find the mysql-80 image in the specified registry. Add the --no-trunc option to view full output.

 [user1@server10 ~]$ podman search registry.redhat.io/mysql-80 --no-trunc
 NAME                                     DESCRIPTION
 registry.redhat.io/rhel8/mysql-80        This container image provides a containerized packaging of the MySQL mysqld daemon and client application. The  mysqld server daemon accepts connections from clients and provides access to content from MySQL databases on behalf of the clients.
...

4. Select the second image rhel9/mysql-80 for this exercise. Inspect the image without downloading it using skopeo inspect. A long output will be generated. The command uses the docker:// mechanism to access the image.

 [user1@server10 ~]$ skopeo inspect docker://registry.redhat.io/rhel9/mysql-80
 {
    "Name": "registry.redhat.io/rhel9/mysql-80",
    "Digest": "sha256:247903d2103a3c1db9401f6340ecdcd97c6244480b7a3419e6303dda650491dc",
    "RepoTags": [
        "1",
        "1-190",
        "1-190.1655192188",
        "1-190.1655192188-source",
        "1-190-source",
        "1-197",
        "1-197-source",
        "1-206",
...

Output:

  • Shows older versions under RepoTags

  • Creation time for the latest version

  • Build date of the image

  • description

  • other information.

  • It is a good practice to analyze the metadata of an image prior to downloading and consuming it.

5. Download the image by specifying the fully qualified image name using podman pull:

 [user1@server10 ~]$ podman pull docker://registry.redhat.io/rhel9/mysql-80
 Trying to pull registry.redhat.io/rhel9/mysql-80:latest...
 Getting image source signatures
 Checking if image destination supports signatures
 Copying blob 846c0bdf4e30 done   | 
 Copying blob cc296d75b612 done   | 
 Copying blob db22e630b1c7 done   | 
 Copying config b5782120a3 done   | 
 Writing manifest to image destination
 Storing signatures
 b5782120a320e5915d86555e661c357cfa56dd8320ba4c54a58caa1e1c91925f

6. List the image to confirm the retrieval using podman images:

 [user1@server10 ~]$ podman images
 REPOSITORY                         TAG         IMAGE ID      CREATED      SIZE
 registry.redhat.io/rhel9/mysql-80  latest      b5782120a320  2 weeks ago  555 MB

7. Display the image’s details using podman inspect:

 [user1@server10 ~]$ podman inspect mysql-80
 [
     {
          "Id": "b5782120a320e5915d86555e661c357cfa56dd8320ba4c54a58caa1e1c91925f",
          "Digest": "sha256:247903d2103a3c1db9401f6340ecdcd97c6244480b7a3419e6303dda650491dc",
          "RepoTags": [
               "registry.redhat.io/rhel9/mysql-80:latest"
          ],

8. Remove the mysql-80 image from local storage:

 [user1@server10 ~]$ podman rmi mysql-80
 Untagged: registry.redhat.io/rhel9/mysql-80:latest
 Deleted: b5782120a320e5915d86555e661c357cfa56dd8320ba4c54a58caa1e1c91925f
  • Shows the ID of the image after deletion.

9. Confirm the removal:

 [user1@server10 ~]$ podman images
 REPOSITORY  TAG         IMAGE ID    CREATED     SIZE

Containerfile

  • You can build a custom image by outlining the steps to be run in a file called Containerfile.
  • The podman command then reads those instructions and executes them to produce a new image.
  • The file name Containerfile is the common convention, but you can use any name you like (such as the lowercase containerfile used below); specify it with the -f option.

Instructions that may be utilized inside a Containerfile to perform specific functions during the build process:

CMD

  • Runs a command

COPY

  • Copies files to the specified location

ENV

  • Defines environment variables to be used during the build process

EXPOSE

  • A port number that will be opened when a container is launched using this image

FROM

  • Identifies the base container image to use

RUN

  • Executes the specified commands

USER

  • Defines a non-root user to run the commands as

WORKDIR

  • Sets the working directory. This directory is automatically created if it does not already exist.

A sample container file is presented below:

 [user1@server10 ~]$ vim containerfile
 # Use RHEL9 base image
 FROM registry.redhat.io/ubi9/ubi

 # Install Apache web server software
 RUN dnf -y install httpd

 # Copy the website
 COPY ./index.html /var/www/html/

 # Expose Port 80/tcp
 EXPOSE 80

 # Start Apache web server in the foreground (required to keep the container running)
 CMD ["httpd", "-D", "FOREGROUND"]
  • The index.html file may contain a basic statement such as “This is a custom-built Apache web server container image based on RHEL 9”.

Lab: Use Containerfile to Build Image

  • Use a containerfile to build a custom image based on the latest version of the RHEL 9 universal base image (ubi) available from a Red Hat container registry.
  • Confirm the image creation.
  • Use the podman command for these activities.

1. Log in to the specified Red Hat registry:

 [user1@server10 ~]$ podman login registry.redhat.io
 Authenticating with existing credentials for registry.redhat.io
 Existing credentials are valid. Already logged in to registry.redhat.io

2. Confirm a successful login:

 [user1@server10 ~]$ podman login registry.redhat.io --get-login

3. Create a file called containerfile2 with the following code:

 [user1@server10 ~]$ vim containerfile2
 # Use RHEL9 base image
 FROM registry.redhat.io/ubi9/ubi

 # Count the number of characters
 CMD echo "RHCSA exam is hands-on." | wc

 # Copy a local file to /tmp
 COPY ./testfile /tmp

4. Create a file called testfile with some random text in it and place it in the same directory as the containerfile.

 [user1@server10 ~]$ echo "boo bee doo bee doo" >> testfile
 [user1@server10 ~]$ cat testfile 
 boo bee doo bee doo

5. Build an image by specifying the containerfile name and an image tag such as ubi9-simple-image. The period character at the end represents the current directory and this is where both containerfile and testfile are located.

 [user1@server10 ~]$ podman image build -f containerfile2 -t ubi9-simple-image .
 STEP 1/3: FROM registry.redhat.io/ubi9/ubi
 Trying to pull registry.redhat.io/ubi9/ubi:latest...
 Getting image source signatures
 Checking if image destination supports signatures
 Copying blob cc296d75b612 done   | 
 Copying config 159a1e6731 done   | 
 Writing manifest to image destination
 Storing signatures
 STEP 2/3: CMD echo "RHCSA exam is hands-on." | wc
 --> 4c005bfd0b34
 STEP 3/3: COPY ./testfile /tmp
 COMMIT ubi9-simple-image
 --> a2797b06a129
 Successfully tagged localhost/ubi9-simple-image:latest
 a2797b06a1294ed06edab2ba1c21d2bddde3eb3af1d8ed286781837f62992622

6. Confirm image creation:

 [user1@server10 ~]$ podman image ls
 REPOSITORY                   TAG         IMAGE ID      CREATED        SIZE
 localhost/ubi9-simple-image  latest      a2797b06a129  2 minutes ago  220 MB
 registry.redhat.io/ubi9/ubi  latest      159a1e67312e  2 weeks ago    220 MB

Output:

  • the downloaded base image

  • the new custom image, each listed with its image ID, creation time, and size.

  • Do not remove the custom image yet as you will be using it to launch a container in the next section.
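As an optional quick check, running the new image executes the CMD instruction from containerfile2, and the --rm flag discards the container afterwards while keeping the image (wc reports 1 line, 4 words, and 24 characters):

 [user1@server10 ~]$ podman run --rm ubi9-simple-image
       1       4      24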

Basic Container Management

  • Basic container management includes starting, stopping, listing, viewing information about, and deleting containers.
  • Depending on the use case, containers can be launched in different ways.
  • They can:
    • Have a name assigned or be nameless
    • Have a terminal session opened for interaction
    • Execute an entry point command (the command specified at launch time) and be auto-terminated right after.
    • etc.
  • Running containers can be stopped and restarted, or discarded if no longer needed.
  • The podman command is utilized to start containers and manage their lifecycle.
  • This command is also employed to list stopped and running containers, and view their details.

Lab: Run, Interact with, and Remove a Named Container

  • Run a container based on the latest version of the RHEL 8 ubi available in the Red Hat container registry.
  • Assign this container a name and run a few native Linux commands in a terminal window interactively.
  • Exit out of the container to mark the completion of the exercise.

1. Launch a container using ubi8 (RHEL 8). Name this container rhel8-base-os and open a terminal session for interaction:

 [user1@server10 ~]$ podman run -ti --name rhel8-base-os ubi8
 Resolved "ubi8" as an alias (/etc/containers/registries.conf.d/001-rhel-shortnames.conf)
 Trying to pull registry.access.redhat.com/ubi8:latest...
 Getting image source signatures
 Checking if image destination supports signatures
 Copying blob 8694db102e5b done   | 
 Copying config 269749ad51 done   | 
 Writing manifest to image destination
 Storing signatures
 [root@30c7cccd8490 /]# 
  • Downloaded the latest version of the specified image automatically even though no FQIN was provided.

    • This is because it searched through the registries listed in the /etc/containers/registries.conf file and retrieved the image from wherever it found it first (registry.access.redhat.com).
  • Opened a terminal session inside the container as the root user to interact with the containerized RHEL 8 OS.

  • The container ID is reflected as the hostname in the container’s command prompt (last line in the output). This is an auto-generated ID.

  • If you encounter any permission issues, delete the /etc/docker directory (if it exists) and try again.

2. Run a few basic commands such as pwd, ls, cat, and date inside the container for verification:

 [root@30c7cccd8490 /]# pwd
 /
 [root@30c7cccd8490 /]# ls
 bin   dev  home  lib64	     media  opt   root	sbin  sys  usr
 boot  etc  lib	 lost+found  mnt    proc  run	srv   tmp  var
 [root@30c7cccd8490 /]# cat /etc/redhat-release
 Red Hat Enterprise Linux release 8.10 (Ootpa)
 [root@30c7cccd8490 /]# date
 Thu Aug  1 21:09:13 UTC 2024

3. Close the terminal session when done:

 [root@30c7cccd8490 /]# exit
 exit
 [user1@server10 ~]$ 

4. Delete the container using podman rm:

 [user1@server10 ~]$ podman rm rhel8-base-os
 rhel8-base-os

Confirm the removal with podman ps.

 [user1@server10 ~]$ podman ps
 CONTAINER ID  IMAGE       COMMAND     CREATED     STATUS      PORTS       NAMES

Lab: Run a Nameless Container and Auto-Remove it After Entry Point Command Execution

  • Launch a container based on the latest version of RHEL 7 ubi available in a Red Hat container registry.
    • This image provides the base operating system layer to deploy containerized applications.
  • Pass a Linux command on the command line for execution inside the container as the entry point command; the container should be automatically deleted right after it finishes.

1. Start a container using ubi7 (RHEL 7) and run ls as an entry point command. Remove the container as soon as the entry point command has finished running.

 [user1@server10 ~]$ podman run --rm ubi7 ls
 Resolved "ubi7" as an alias (/etc/containers/registries.conf.d/001-rhel-shortnames.conf)
 Trying to pull registry.access.redhat.com/ubi7:latest...
 Getting image source signatures
 Checking if image destination supports signatures
 Copying blob 7f2c2c4492b6 done   | 
 Copying config a084eb42a5 done   | 
 Writing manifest to image destination
 Storing signatures
 bin
 boot
 dev
 etc
 home
...

2. Confirm the container removal with podman ps:

 [user1@server10 ~]$ podman ps
 CONTAINER ID  IMAGE       COMMAND     CREATED     STATUS      PORTS       NAMES

Advanced Container Management

  • Preset environment variables may be passed when launching containers or new variables may be set for containerized applications to consume for proper operation.
  • Information stored during an application execution is lost when a container is restarted or erased.
  • This behavior can be overridden by making a directory on the host available inside the container for saving data persistently.
  • Containers may be configured to start and stop with the transitioning of the host system via the systemd service. These advanced tasks are also performed with the podman command.

Containers and Port Mapping

  • Applications running in different containers often need to exchange data for proper operation.
  • For instance, a containerized Apache web server may need to talk to a MySQL database instance running in a different container.
  • It may also need to talk to the outside world over a port such as 80 or 8080.
  • To support this traffic flow, appropriate port mappings are established between the host system and each container.

EXAM TIP: As a normal user, you cannot map a host port below 1024 to a container port.

Lab: Configure Port Mapping

  • Launch a container called rhel7-port-map in detached mode (as a daemon) with host port 10000 mapped to port 8000 inside the container.
  • Use a version of the RHEL 7 image with Apache web server software pre-installed.
  • This image is available from a Red Hat container registry.
  • List the running container and confirm the port mapping.

1. Search for an Apache web server image for RHEL 7 using podman search:

 [user1@server30 ~]$ podman search registry.redhat.io/rhel7/httpd
 NAME                                      DESCRIPTION
 registry.redhat.io/rhscl/httpd-24-rhel7   Apache HTTP 2.4 Server

2. Log in to registry.redhat.io using the Red Hat credentials to access the image:

 [user1@server30 ~]$ podman login registry.redhat.io
 Username: tdavetech@gmail.com
 Password: 
 Login Succeeded!

3. Download the latest version of the Apache image using podman pull:

 [user1@server30 ~]$ podman pull registry.redhat.io/rhscl/httpd-24-rhel7
 Trying to pull registry.redhat.io/rhscl/httpd-24-rhel7:latest...
 Getting image source signatures
 Checking if image destination supports signatures
 Copying blob fd77da0b900b done   | 
 Copying blob 7f2c2c4492b6 done   | 
 Copying blob ea092d7970b2 done   | 
 Copying config 847db19d6c done   | 
 Writing manifest to image destination
 Storing signatures
 847db19d6cbc726106c901a7713d30dccc9033031ec812037c4c458319a1b328

4. Verify the download using podman images:

 [user1@server30 ~]$ podman images
 REPOSITORY                               TAG         IMAGE ID       CREATED       SIZE
 registry.redhat.io/rhscl/httpd-24-rhel7  latest      847db19d6cbc  2  months ago  332 MB

5. Launch a container named rhel7-port-map in detached mode to run the containerized Apache web server with host port 10000 mapped to container port 8000.

 [user1@server30 ~]$ podman run -dp 10000:8000 --name rhel7-port-map  httpd-24-rhel7
 cd063dff352dfbcd57dd417587513b12ca4033ed657f3baaa28d54df19d4df1c

6. Verify that the container was launched successfully using podman ps:

 [user1@server30 ~]$ podman ps
 CONTAINER ID  IMAGE                                           COMMAND               CREATED         STATUS         PORTS                    NAMES
 cd063dff352d  registry.redhat.io/rhscl/httpd-24-rhel7:latest   /usr/bin/run-http...  36 seconds ago  Up 36 seconds  0.0.0.0:10000->8000/tcp  rhel7-port-map

7. You can also use podman port to view the mapping:

 [user1@server30 ~]$ podman port rhel7-port-map
 8000/tcp -> 0.0.0.0:10000
  • Now any inbound web traffic on host port 10000 will be redirected to the container.

Lab: Stop, Restart, and Remove a Container

  • Stop the container, restart it, stop it again, and then erase it.
  • Use appropriate podman subcommands and verify each transition.

1. Verify the current operational state of the container rhel7-port-map:

 [user1@server30 ~]$ podman ps
 CONTAINER ID  IMAGE                                           COMMAND               CREATED        STATUS        PORTS                    NAMES
 cd063dff352d  registry.redhat.io/rhscl/httpd-24-rhel7:latest   /usr/bin/run-http...  3 minutes ago  Up 3 minutes  0.0.0.0:10000->8000/tcp  rhel7-port-map

2. Stop the container and confirm. (the -a option with ps also includes the stopped containers in the output):

 [user1@server30 ~]$ podman stop rhel7-port-map
 rhel7-port-map

 [user1@server30 ~]$ podman ps -a
 CONTAINER ID  IMAGE                                           COMMAND               CREATED        STATUS                    PORTS                    NAMES
 cd063dff352d  registry.redhat.io/rhscl/httpd-24-rhel7:latest   /usr/bin/run-http...  6 minutes ago  Exited (0) 5 seconds ago   0.0.0.0:10000->8000/tcp  rhel7-port-map

3. Start the container and confirm:

 [user1@server30 ~]$ podman start rhel7-port-map
 rhel7-port-map
 [user1@server30 ~]$ podman ps -a
 CONTAINER ID  IMAGE                                           COMMAND               CREATED        STATUS         PORTS                    NAMES
 cd063dff352d  registry.redhat.io/rhscl/httpd-24-rhel7:latest   /usr/bin/run-http...  8 minutes ago  Up 11 seconds  0.0.0.0:10000->8000/tcp  rhel7-port-map

4. Stop the container and remove it:

 [user1@server30 ~]$ podman stop rhel7-port-map
 rhel7-port-map
 [user1@server30 ~]$ podman rm rhel7-port-map
 rhel7-port-map

5. Confirm the removal:

 [user1@server30 ~]$ podman ps -a
 CONTAINER ID  IMAGE       COMMAND     CREATED     STATUS      PORTS       NAMES

Containers and Environment Variables

  • Many times it is necessary to pass a host’s pre-defined environment variable, such as PATH, to a containerized application for consumption.
  • Moreover, it may also be necessary at times to set new variables to inject debugging flags or sensitive information such as passwords, access keys, or other secrets for use inside containers.
  • Passing host environment variables or setting new environment variables is done at the time of launching a container.
  • The podman command allows multiple variables to be passed or set with the -e option.

EXAM TIP: Use the -e option with each variable that you want to pass or set.

Lab: Pass and Set Environment Variables

  • Launch a container using the latest version of a ubi for RHEL 9 available in a Red Hat container registry.
  • Inject the HISTSIZE environment variable, and a variable called SECRET with a value “secret123”.
  • Name this container rhel9-env-vars and have a shell terminal opened to check the variable settings.
  • Remove this container.

1. Launch a container with an interactive terminal session and inject variables HISTSIZE and SECRET as directed. Use the specified container image.

 [user1@server30 ~]$ podman run -it -e HISTSIZE -e SECRET="secret123" --name rhel9-env-vars ubi9
 Resolved "ubi9" as an alias (/etc/containers/registries.conf.d/001- rhel-shortnames.conf)
 Trying to pull registry.access.redhat.com/ubi9:latest...
 Getting image source signatures
 Checking if image destination supports signatures
 Copying blob cc296d75b612 done   | 
 Copying config 159a1e6731 done   | 
 Writing manifest to image destination
 Storing signatures
 [root@b587355b8fc1 /]# 

2. Verify both variables using the echo command:

 [root@b587355b8fc1 /]# echo $HISTSIZE $SECRET
 1000 secret123
 [root@b587355b8fc1 /]# 

3. Disconnect from the container, and stop and remove it:

 [user1@server30 ~]$ podman stop rhel9-env-vars
 rhel9-env-vars
 [user1@server30 ~]$ podman rm rhel9-env-vars
 rhel9-env-vars

Confirm the deletion:

 [user1@server30 ~]$ podman ps -a
 CONTAINER ID  IMAGE       COMMAND     CREATED     STATUS      PORTS       NAMES

Containers and Persistent Storage

  • Containers are normally launched for a period of time to run an application and then stopped or deleted when their job is finished.
  • Any data that is produced during runtime is lost on their restart, failure, or termination.
  • This data may be saved for persistence on a host directory by attaching the host directory to a container.
  • The containerized application will see the attached directory just like any other local directory and will use it to store data if it is configured to do so.
  • Any data that is saved on the directory will be available even after the container is rebooted or removed.
  • Later, this directory can be re-attached to other containers to give them access to the stored data or to save their own data.
  • The source directory on the host may itself exist on any local or remote file system.

EXAM TIP: Proper ownership, permissions, and SELinux file type must be set to ensure persistent storage is accessed and allows data writes without issues.

  • A few simple steps should be performed to configure a host directory before it can be attached to a container.
  • These steps include setting the correct ownership, permissions, and SELinux type (container_file_t).
  • The special SELinux file type is applied to prevent containerized applications (especially those running in rootful containers) from gaining undesired privileged access to host files and processes, or to other running containers on the host, if compromised.
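The lab below uses the :Z mount flag, which applies the label automatically. A manual alternative (a sketch; semanage requires the policycoreutils-python-utils package) is to label the host directory yourself:

 [root@server10 ~]# semanage fcontext -a -t container_file_t "/host_data(/.*)?"
 [root@server10 ~]# restorecon -Rv /host_data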

Lab: Attach Persistent Storage and Access Data Across Containers

  • Set up a directory on server30 and attach it to a new container.
  • Write some data to the directory while in the container.
  • Delete the container and launch another container with the same directory attached.
  • Observe the persistence of saved data in the new container and that it is accessible.
  • Remove the container to mark the completion of this exercise.

1. Create a directory called /host_data, set full permissions on it, and confirm:

 [user1@server30 ~]$ sudo mkdir /host_data
 [sudo] password for user1: 
 [user1@server30 ~]$ sudo chmod 777 /host_data/
 [user1@server30 ~]$ ll -d /host_data/
 drwxrwxrwx. 2 root root 6 Aug  1 22:59 /host_data/

2. Launch a root container called rhel9-persistent-data in interactive mode using the latest ubi9 image. Specify the attachment point (/container_data) to be used inside the container for the host directory (/host_data). Ensure the SELinux type container_file_t is automatically set on the directory and files within.

 [user1@server30 ~]$ sudo podman run --name rhel9-persistent-data -v  /host_data:/container_data:Z -it ubi9
 Resolved "ubi9" as an alias (/etc/containers/registries.conf.d/001- rhel-shortnames.conf)
 Trying to pull registry.access.redhat.com/ubi9:latest...
 Getting image source signatures
 Checking if image destination supports signatures
 Copying blob cc296d75b612 done   | 
 Copying config 159a1e6731 done   | 
 Writing manifest to image destination
 Storing signatures

3. Confirm the presence of the directory inside the container with ls on /container_data:

 [root@e8711892370f /]# ls -ldZ /container_data
 drwxrwxrwx. 2 root root  system_u:object_r:container_file_t:s0:c376,c965 6 Aug  2 05:59  /container_data

4. Create a file called testfile with the echo command under /container_data:

 [root@e8711892370f /]# echo "This is persistent storage." >  /container_data/testfile

5. Verify the file creation and the SELinux type on it:

 [root@e8711892370f /]# ls -lZ /container_data/
 total 4
 -rw-r--r--. 1 root root  system_u:object_r:container_file_t:s0:c376,c965 28 Aug  2 06:03  testfile

6. Exit out of the container and check the presence of the file in the host directory:

 [root@e8711892370f /]# exit
 exit
 [user1@server30 ~]$ ls -lZ /host_data/
 total 4
 -rw-r--r--. 1 root root  system_u:object_r:container_file_t:s0:c376,c965 28 Aug  1 23:03  testfile

7. Stop and remove the container:

 [user1@server30 ~]$ sudo podman stop rhel9-persistent-data
 rhel9-persistent-data
 [user1@server30 ~]$ sudo podman rm rhel9-persistent-data
 rhel9-persistent-data

8. Launch a new root container called rhel8-persistent-data in interactive mode using the latest ubi8 image from any of the defined registries. Specify the attachment point (/container_data2) to be used inside the container for the host directory (/host_data). Ensure the SELinux type container_file_t is automatically set on the directory and files within.

 [user1@server30 ~]$ sudo podman run -it --name rhel8-persistent-data -v /host_data:/container_data2:Z ubi8
 Resolved "ubi8" as an alias (/etc/containers/registries.conf.d/001- rhel-shortnames.conf)
 Trying to pull registry.access.redhat.com/ubi8:latest...
 Getting image source signatures
 Checking if image destination supports signatures
 Copying blob 8694db102e5b done   | 
 Copying config 269749ad51 done   | 
 Writing manifest to image destination
 Storing signatures 

9. Confirm the presence of the directory inside the container with ls on /container_data2:

 [root@af6773299c7e /]# ls -ldZ /container_data2/
 drwxrwxrwx. 2 root root  system_u:object_r:container_file_t:s0:c198,c914 22 Aug  2 06:03  /container_data2/
 [root@af6773299c7e /]# ls -lZ /container_data2/
 total 4
 -rw-r--r--. 1 root root  system_u:object_r:container_file_t:s0:c198,c914 28 Aug  2 06:03  testfile
 [root@af6773299c7e /]# cat /container_data2/testfile
 This is persistent storage.

10. Create a file called testfile2 with the echo command under /container_data2:

 [root@af6773299c7e /]# echo "This is persistent storage2." >  /container_data2/testfile2

 [root@af6773299c7e /]# ls -lZ /container_data2/
 total 8
 -rw-r--r--. 1 root root  system_u:object_r:container_file_t:s0:c198,c914 28 Aug  2 06:03  testfile
 -rw-r--r--. 1 root root  system_u:object_r:container_file_t:s0:c198,c914 29 Aug  2 06:10  testfile2

11. Exit out of the container and confirm the existence of both files in the host directory:

 [root@af6773299c7e /]# exit
 exit

 [user1@server30 ~]$ ls -lZ /host_data/
 total 8
 -rw-r--r--. 1 root root  system_u:object_r:container_file_t:s0:c198,c914 28 Aug  1 23:03  testfile
 -rw-r--r--. 1 root root  system_u:object_r:container_file_t:s0:c198,c914 29 Aug  1 23:10  testfile2

12. Stop and remove the container using the stop and rm subcommands:

 [user1@server30 ~]$ sudo podman stop rhel8-persistent-data
 rhel8-persistent-data
 [user1@server30 ~]$ sudo podman rm rhel8-persistent-data
 rhel8-persistent-data

13. Re-check the presence of the files in the host directory:

 [user1@server30 ~]$ ll /host_data
 total 8
 -rw-r--r--. 1 root root 28 Aug  1 23:03 testfile
 -rw-r--r--. 1 root root 29 Aug  1 23:10 testfile2

Container State Management with systemd

  • When multiple containers run on a single host, it becomes a challenging task to change their operational state or delete them manually.

  • In RHEL 9, these administrative functions can be automated via the systemd service

  • There are several steps that need to be completed to configure container state management via systemd.

  • These steps vary for rootful and rootless container setups and include the creation of service unit files and their storage in appropriate directory locations (~/.config/systemd/user for rootless containers and /etc/systemd/system for rootful containers).

  • Once set up and enabled, the containers will start and stop automatically as a systemd service with the host state transition, or manually with the systemctl command.

  • The podman command to start and stop containers is no longer needed if the systemd setup is in place.

  • You may experience issues if you continue to use podman for container state transitioning alongside systemd.

  • The start and stop behavior for rootless containers differs slightly from that of rootful containers.

  • For the rootless setup, the containers are started when the relevant user logs in to the host and stopped when that user logs off from all their open terminal sessions;

  • However, this default behavior can be altered by enabling lingering for that user with the loginctl command.

  • User lingering is a feature that, if enabled for a particular user, spawns a user manager for that user at system startup and keeps it running in the background to support long-running services configured for that user.

  • The user need not log in.

EXAM TIP: Make sure that you use a normal user to launch rootless containers and the root user (or sudo) for rootful containers.

  • Rootless setup does not require elevated privileges of the root user.

Lab: Configure a Rootful Container as a systemd Service

  • Create a systemd unit configuration file for managing the state of your rootful containers.
  • Launch a new container and use it as a template to generate a service unit file.
  • Stop and remove the launched container to avoid conflicts with new containers that will start.
  • Use the systemctl command to verify the automatic container start, stop, and deletion.

1. Launch a new container called rootful-container in detached mode using the latest ubi9:

 [user1@server30 ~]$ sudo podman run -dt --name rootful-container ubi9
 [sudo] password for user1: 
 0ed04dcedec418068acd14c864e95e78f56a38dd57d2349cf2c46b0de1a1bf1b

2. Confirm the new container using podman ps. Note the container ID.

 [user1@server30 ~]$ sudo podman ps
 CONTAINER ID  IMAGE                                   COMMAND      CREATED         STATUS         PORTS       NAMES
 0ed04dcedec4  registry.access.redhat.com/ubi9:latest  /bin/bash   20  seconds ago  Up 20 seconds              rootful-container

3. Create (generate) a service unit file called rootful-container.service under /etc/systemd/system while ensuring that the next new container that will be launched based on this configuration file will not require the source container to work. The tee command will show the generated file content on the screen as well as store it in the specified file.

 [user1@server30 ~]$ sudo podman generate systemd --new --name rootful-container | sudo tee /etc/systemd/system/rootful-container.service

 [Unit]
 Description=Podman container-rootful-container.service
 Documentation=man:podman-generate-systemd(1)
 Wants=network-online.target
 After=network-online.target
 RequiresMountsFor=%t/containers

 [Service]
 Environment=PODMAN_SYSTEMD_UNIT=%n
 Restart=on-failure
 TimeoutStopSec=70
 ExecStart=/usr/bin/podman run \
	--cidfile=%t/%n.ctr-id \
	--cgroups=no-conmon \
	--rm \
	--sdnotify=conmon \
	--replace \
	-dt \
	--name rootful-container ubi9
 ExecStop=/usr/bin/podman stop \
	--ignore -t 10 \
	--cidfile=%t/%n.ctr-id
 ExecStopPost=/usr/bin/podman rm \
	-f \
	--ignore -t 10 \
	--cidfile=%t/%n.ctr-id
 Type=notify
 NotifyAccess=all

 [Install]
 WantedBy=default.target
  • The unit file has the same syntax as any other systemd service configuration file.
  • There are three sections—Unit, Service, and Install.
    • (1) The unit section provides a short description of the service, the manual page location, and the dependencies (wants and after).
    • (2) The service section highlights the full commands for starting (ExecStart) and stopping (ExecStop) containers.
      • It may also include commands to be executed before the container starts (ExecStartPre) and after it stops (ExecStopPost).
      • There are a number of options and arguments with the commands to ensure a proper transition.
      • Restart=on-failure stipulates that systemd will try to restart the container in the event of a failure.
    • (3) The install section identifies the operational target the host needs to be running in before this container service can start.

4. Stop and delete the source container (rootful-container):

 [user1@server30 ~]$ sudo podman stop rootful-container
 [sudo] password for user1: 
 WARN[0010] StopSignal SIGTERM failed to stop container rootful-container in 10 seconds, resorting to SIGKILL 
 rootful-container

 [user1@server30 ~]$ sudo podman rm rootful-container
 rootful-container

Verify the removal by running sudo podman ps -a:

 [user1@server30 ~]$ sudo podman ps -a
 CONTAINER ID  IMAGE       COMMAND     CREATED     STATUS      PORTS       NAMES

5. Update systemd to bring the new service under its control (reboot the system if required):

 [user1@server30 ~]$ sudo systemctl daemon-reload

6. Enable and start the container service:

 [user1@server30 ~]$ sudo systemctl enable --now rootful-container
 Created symlink /etc/systemd/system/default.target.wants/rootful-container.service → /etc/systemd/system/rootful-container.service.

7. Check the running status of the new service:

 [user1@server30 ~]$ sudo systemctl status rootful-container
 rootful-container.service - Podman container-rootful-container.s>
     Loaded: loaded (/etc/systemd/system/rootful-container.service>
     Active: active (running)

8. Verify the launch of a new container (compare the container ID with that of the source root container):

 [user1@server30 ~]$ sudo podman ps
 CONTAINER ID  IMAGE                                   COMMAND      CREATED             STATUS             PORTS       NAMES
 440a57c26186  registry.access.redhat.com/ubi9:latest  /bin/bash    About a minute ago  Up About a minute              rootful-container

9. Restart the container service using the systemctl command:

 [user1@server30 ~]$ sudo systemctl restart rootful-container

 [user1@server30 ~]$ sudo systemctl status rootful-container
 rootful-container.service - Podman container-rootful-container.s>
     Loaded: loaded (/etc/systemd/system/rootful-container.service>
     Active: active (running)

10. Check the status of the container again. Observe the removal of the previous container and the launch of a new container (compare container IDs).

 [user1@server30 ~]$ sudo podman ps
 CONTAINER ID  IMAGE                                   COMMAND      CREATED         STATUS             PORTS       NAMES
 0a980537b83a  registry.access.redhat.com/ubi9:latest  /bin/bash   59  seconds ago  Up About a minute              rootful-container
  • Each time the rootful-container service is restarted or server30 is rebooted, a new container will be launched.

Lab: Configure Rootless Container as a systemd Service

  • Create a systemd unit configuration file for managing the state of your rootless containers.
  • Launch a new container as conuser1 (create this user) and use it as a template to generate a service unit file.
  • Stop and remove the launched container to avoid conflicts with new containers that will start.
  • Use the systemctl command as conuser1 to verify the automatic container start, stop, and deletion.

1. Create a user account called conuser1 and assign a simple password:

 [user1@server30 ~]$ sudo useradd conuser1

 [user1@server30 ~]$ echo conuser1 | sudo passwd --stdin conuser1
 Changing password for user conuser1.
 passwd: all authentication tokens updated  successfully.

2. Open a new terminal window on server30 and log in as conuser1. Create directory ~/.config/systemd/user to store a service unit file:

 [conuser1@server30 ~]$ mkdir ~/.config/systemd/user -p

3. Launch a new container called rootless-container in detached mode using the latest ubi8:

 [conuser1@server30 ~]$ podman run -dt --name  rootless-container ubi8
 Resolved "ubi8" as an alias  (/etc/containers/registries.conf.d/001-rhel- shortnames.conf)
 Trying to pull  registry.access.redhat.com/ubi8:latest...
 Getting image source signatures
 Checking if image destination supports signatures
 Copying blob 8694db102e5b done   | 
 Copying config 269749ad51 done   | 
 Writing manifest to image destination
 Storing signatures
 381d46ae9a3e11723c3bde35090782129e6937c461f8c2621bc9725f6b9efc27

4. Confirm the new container using podman ps. Note the container ID.

 [conuser1@server30 ~]$ podman ps
 CONTAINER ID  IMAGE                                   COMMAND     CREATED         STATUS         PORTS       NAMES
 381d46ae9a3e  registry.access.redhat.com/ubi8:latest  /bin/bash   27 seconds ago  Up 27 seconds              rootless-container

5. Create (generate) a service unit file called rootless-container.service under ~/.config/systemd/user while ensuring that the next new container that will be launched based on this configuration will not require the source container to work:

 [conuser1@server30 ~]$ podman generate systemd --new --name rootless-container >  ~/.config/systemd/user/rootless-container.service

 DEPRECATED command:
 It is recommended to use Quadlets for running  containers and pods under systemd.

 Please refer to podman-systemd.unit(5) for details.

6. Display the content of the unit file:

 [conuser1@server30 ~]$ cat  ~/.config/systemd/user/rootless-container.service 
 # container-rootless-container.service
 # autogenerated by Podman 4.9.4-rhel
 # Thu Aug  1 23:42:11 MST 2024

 [Unit]
 Description=Podman container-rootless-container.service
 Documentation=man:podman-generate-systemd(1)
 Wants=network-online.target
 After=network-online.target
 RequiresMountsFor=%t/containers

 [Service]
 Environment=PODMAN_SYSTEMD_UNIT=%n
 Restart=on-failure
 TimeoutStopSec=70
 ExecStart=/usr/bin/podman run \
	--cidfile=%t/%n.ctr-id \
	--cgroups=no-conmon \
	--rm \
	--sdnotify=conmon \
	--replace \
	-dt \
	--name rootless-container ubi8
 ExecStop=/usr/bin/podman stop \
	--ignore -t 10 \
	--cidfile=%t/%n.ctr-id
 ExecStopPost=/usr/bin/podman rm \
	-f \
	--ignore -t 10 \
	--cidfile=%t/%n.ctr-id
 Type=notify
 NotifyAccess=all

 [Install]
 WantedBy=default.target

7. Stop and delete the source container rootless-container using the stop and rm subcommands:

 [conuser1@server30 ~]$ podman stop rootless-container
 rootless-container

 [conuser1@server30 ~]$ podman rm rootless-container
 rootless-container

Verify the removal by running podman ps -a:

 [conuser1@server30 ~]$ podman ps -a
 CONTAINER ID  IMAGE       COMMAND     CREATED      STATUS      PORTS       NAMES

8. Update systemd to bring the new service to its control

 [conuser1@server30 ~]$ systemctl --user daemon-reload

9. Enable and start the container service:

 [conuser1@server30 ~]$ systemctl --user enable --now rootless-container.service 
 Created symlink /home/conuser1/.config/systemd/user/default.target.wants/rootless-container.service → /home/conuser1/.config/systemd/user/rootless-container.service.

10. Check the running status of the new service:

 [conuser1@server30 ~]$ systemctl --user status rootless-container
 rootless-container.service - Podman container-rootless-container>
     Loaded: loaded (/home/conuser1/.config/systemd/user/rootless->
     Active: active (running)

11. Verify the launch of a new container (compare the container ID with that of the source rootless container):

 [conuser1@server30 ~]$ podman ps
 CONTAINER ID  IMAGE                                   COMMAND     CREATED             STATUS              PORTS       NAMES
 57f946085605  registry.access.redhat.com/ubi8:latest  /bin/bash   About a minute ago  Up About a minute              rootless-container

12. Enable the container service to start and stop with host transition using the loginctl command (systemd login manager) and confirm:

 [conuser1@server30 ~]$ loginctl enable-linger
 [conuser1@server30 ~]$ loginctl show-user conuser1 | grep -i linger
 Linger=yes

13. Restart the container service using the systemctl command:

 [conuser1@server30 ~]$ systemctl --user restart  rootless-container
 [conuser1@server30 ~]$ systemctl --user status  rootless-container
 rootless-container.service - Podman container-rootless-container>
     Loaded: loaded  (/home/conuser1/.config/systemd/user/rootless->
     Active: active (running)

14. Check the status of the container again. Observe the removal of the previous container and the launch of a new container (compare container IDs).

 [conuser1@server30 ~]$ podman ps
 CONTAINER ID  IMAGE                                   COMMAND     CREATED         STATUS         PORTS       NAMES
 4dec33db41b5  registry.access.redhat.com/ubi8:latest  /bin/bash   41 seconds ago  Up 41 seconds              rootless-container
  • Each time the rootless-container service is restarted or server30 is rebooted, a new container will be launched. You can verify this by comparing their container IDs.
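The DEPRECATED notice printed by podman generate systemd points to Quadlets as the replacement. A minimal Quadlet sketch for the same rootless container (assumes podman 4.4 or later; see podman-systemd.unit(5)): drop a .container unit under ~/.config/containers/systemd, and systemd generates the matching service on daemon-reload:

 [conuser1@server30 ~]$ mkdir -p ~/.config/containers/systemd
 [conuser1@server30 ~]$ vim ~/.config/containers/systemd/rootless-container.container
 [Unit]
 Description=Quadlet-managed rootless container

 [Container]
 Image=registry.access.redhat.com/ubi8:latest
 ContainerName=rootless-container

 [Install]
 WantedBy=default.target

 [conuser1@server30 ~]$ systemctl --user daemon-reload
 [conuser1@server30 ~]$ systemctl --user start rootless-container.service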

Containers DIY Labs

Lab: Launch Named Root Container with Port Mapping

  • Create a new user account called conadm on server30 and give them full sudo rights.
 [root@server30 ~]# adduser conadm
 [root@server30 ~]# visudo
 conadm ALL=(ALL)        ALL
  • As conadm with sudo (where required) on server30, inspect the latest version of ubi9 and then download it to your computer.
 [conadm@server30 ~]$ sudo dnf install container-tools
 [conadm@server30 ~]$ podman login registry.redhat.io
 [conadm@server30 ~]$ podman pull ubi9
 Resolved "ubi9" as an alias  (/etc/containers/registries.conf.d/001-rhel- shortnames.conf)
 Trying to pull  registry.access.redhat.com/ubi9:latest...
 Getting image source signatures
 Checking if image destination supports signatures
 Copying blob cc296d75b612 done   | 
 Copying config 159a1e6731 done   | 
 Writing manifest to image destination
 Storing signatures
 159a1e67312ef50059357047ebe2a365afea904504fca9561abb385ecd942d62
 [conadm@server30 ~]$ podman inspect ubi9
  • Launch a container called rootful-cont-port in attached terminal mode with host port 80 mapped to container port 8080.
 sudo podman run -it --name rootful-cont-port -p  80:8080 ubi9
  • Run a few basic Linux commands such as ls, pwd, df, cat /etc/redhat-release, and cat /etc/os-release while in the container.
 [root@349163a6e431 /]# ls
 afs  boot  etc	 lib	lost+found  mnt  proc  run    srv  tmp  var
 bin  dev   home  lib64	media	    opt  root  sbin   sys  usr

 [root@349163a6e431 /]# pwd
 /

 [root@349163a6e431 /]# df -hT
 Filesystem     Type      Size  Used Avail Use%  Mounted on
 overlay        overlay    17G  4.3G   13G  26% /
 tmpfs          tmpfs      64M     0   64M   0% /dev
 shm            tmpfs      63M     0   63M   0%  /dev/shm
 tmpfs          tmpfs     356M  6.0M  350M   2%  /etc/hosts
 devtmpfs       devtmpfs  4.0M     0  4.0M   0%  /proc/keys

 [root@349163a6e431 /]# cat /etc/redhat-release
 Red Hat Enterprise Linux release 9.4 (Plow)
  • Check to confirm the port mapping from server30.
 [conadm@server30 ~]$ sudo podman port rootful-cont-port
 8080/tcp -> 0.0.0.0:80
  • Do not remove the container yet.

Lab: Launch Nameless Rootless Container with Two Variables

  • As conadm on server30, launch a container using the latest version of ubi8 in interactive mode (-it) with two environment variables VAR1=lab1 and VAR2=lab2 defined.
 [conadm@server30 ~]$ podman run -it -e VAR1="lab1" -e VAR2="lab2" ubi8
  • Check the variables from within the container.
 [root@803642faea28 /]# echo $VAR1
 lab1
 [root@803642faea28 /]# echo $VAR2
 lab2
  • Delete the container and the image when done.

Lab: Launch Named Rootless Container with Persistent Storage

  • As conadm with sudo (where required) on server30, create a directory called /host_perm1 with full permissions, and a file called str1 in it.
 [conadm@server30 ~]$ sudo mkdir /host_perm1
 [sudo] password for conadm: 
 [conadm@server30 ~]$ sudo chmod 777 /host_perm1
 [conadm@server30 ~]$ sudo touch /host_perm1/str1
  • Launch a container called rootless-cont-str in attached terminal mode with the created directory mapped to /cont_perm1 inside the container.
 [conadm@server30 ~]$ sudo podman run --name  rootless-cont-str -v /host_perm1:/cont_perm1:Z -it  ubi8
 [root@a1326200eae1 /]# 
  • While in the container, check access to the directory and the presence of the file.
 [root@a1326200eae1 /]# ls /cont_perm1
 str1
  • Create a sub-directory and a file under /cont_perm1 and exit out of the container shell.
 [root@a1326200eae1 cont_perm1]# mkdir permdir2
 [root@a1326200eae1 cont_perm1]# ls
 permdir2  str1
 [root@a1326200eae1 cont_perm1]# exit
 exit
 [conadm@server30 ~]$ 
  • List /host_perm1 on server30 to verify the sub-directory and the file.
 [conadm@server30 ~]$ sudo ls /host_perm1
 permdir2  str1
  • Stop and delete the container.
 [conadm@server30 ~]$ podman stop rootless-cont-str
 rootless-cont-str
 [conadm@server30 ~]$ podman rm rootless-cont-str
 rootless-cont-str
  • Remove /host_perm1.
 [conadm@server30 ~]$ sudo rm -r /host_perm1

Lab: Launch Named Rootless Container with Port Mapping, Environment Variables, and Persistent Storage

  • As conadm with sudo (where required) on server30, launch a named rootless container called rootless-cont-adv in attached mode with two variables (HISTSIZE=100 and MYNAME=RedHat), host port 9000 mapped to container port 8080, and ~/host_perm2 mounted at /cont_perm2.
 [conadm@server30 ~]$ podman run --name rootless-cont-adv -v ~/host_perm2:/cont_perm2:Z -e HISTSIZE="100" -e MYNAME="RedHat" -p 9000:8080 -it --replace ubi8
 [root@79e965cd1436 /]# 
  • Check and confirm the settings while inside the container.
 [root@79e965cd1436 /]# echo $HISTSIZE
 100

 [root@79e965cd1436 /]# echo $MYNAME
 RedHat

 [root@79e965cd1436 /]# ls -ld /cont_perm2 
 drwxrwxrwx. 2 root root 6 Aug  4 02:16 /cont_perm2

 [conadm@server30 ~]$ podman port rootless-cont-adv
 8080/tcp -> 0.0.0.0:9000
  • Exit out of the container.
 [root@5d510a1b2293 /]# exit
 exit
 [conadm@server30 ~]$ 
  • Do not remove the container yet.

Lab 22-5: Control Rootless Container States via systemd

  • As conadm on server30, use the rootless-cont-adv container launched in the last lab as a template and generate a systemd service configuration file and store the file in the appropriate directory.
 [conadm@server30 ~]$ podman run --name rootless-cont-adv -v ~/host_perm2:/cont_perm2:Z -e HISTSIZE="100" -e MYNAME="RedHat" -p 9000:8080 -dt --replace ubi8
 da8faf434813242985b8e332dc06b0e6da78e7125bc36579ffc8d82b0bcafb8e
 [conadm@server30 ~]$ podman generate systemd --new --name rootless-cont-adv >  ~/.config/systemd/user/rootless-container.service

 DEPRECATED command:
 It is recommended to use Quadlets for running  containers and pods under systemd.

 Please refer to podman-systemd.unit(5) for details.
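As the deprecation notice suggests, the same container could instead be described as a Quadlet unit. A minimal sketch (assuming Podman 4.4 or later), saved as ~/.config/containers/systemd/rootless-cont-adv.container:

 [Unit]
 Description=Rootless container rootless-cont-adv via Quadlet

 [Container]
 Image=registry.access.redhat.com/ubi8/ubi:latest
 ContainerName=rootless-cont-adv
 PublishPort=9000:8080
 Environment=HISTSIZE=100
 Environment=MYNAME=RedHat
 Volume=%h/host_perm2:/cont_perm2:Z

 [Install]
 WantedBy=default.target

With this file in place, systemctl --user daemon-reload generates the service automatically; no podman generate systemd step is needed.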
  • Stop and remove the source container rootless-cont-adv.
 [conadm@server30 ~]$ podman stop rootless-cont-adv
 rootless-cont-adv
 [conadm@server30 ~]$ podman rm rootless-cont-adv
 rootless-cont-adv
  • Add the support for the new service to systemd and enable the new service to auto-start at system reboots.
 [conadm@server30 ~]$ systemctl --user daemon-reload

 [conadm@server30 user]$ systemctl --user enable --now rootless-container.service
 Created symlink /home/conadm/.config/systemd/user/default.target.wants/rootless-container.service → /home/conadm/.config/systemd/user/rootless-container.service.
  • Perform the required setup to ensure the container is launched without the need for the conadm user to log in.
 [conadm@server30 user]$ loginctl enable-linger

 [conadm@server30 user]$ loginctl show-user conadm | grep -i linger
 Linger=yes
  • Reboot server30 and confirm a successful start of the container service and the container.
[root@rhcsa3 ~]# systemctl --user --machine=conadm@ list-units --type=service
  UNIT                           LOAD   ACTIVE SUB     DESCRIPTION           >
  dbus-broker.service            loaded active running D-Bus User Message Bus
  rootless-cont-adv.service      loaded active running Podman container-rootl>
  systemd-tmpfiles-setup.service loaded active exited  Create User's Volatile>

LOAD   = Reflects whether the unit definition was properly loaded.
ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
SUB    = The low-level unit activation state, values depend on unit type.
3 loaded units listed. Pass --all to see loaded but inactive units, too.
To show all installed unit files use 'systemctl list-unit-files'.
[root@rhcsa3 ~]# sudo -i -u conadm podman ps -a
CONTAINER ID  IMAGE                                   COMMAND     CREATED         STATUS         PORTS                   NAMES
a48fd2c25be4  registry.access.redhat.com/ubi9:latest  /bin/bash   10 minutes ago  Up 10 minutes  0.0.0.0:9000->8080/tcp  rootless-cont-adv

Lab 22-6: Control Rootful Container States via systemd

  • As conadm with sudo where required on server30, use the rootful-cont-port container launched in Lab 22-1 as a template and generate a systemd service configuration file and store the file in the appropriate directory.
 [root@server30 ~]# podman generate systemd --new --name rootful-cont-port | tee /etc/systemd/system/rootful-cont-port.service
 DEPRECATED command:
 It is recommended to use Quadlets for running  containers and pods under systemd.

 Please refer to podman-systemd.unit(5) for details.
 # container-rootful-cont-port.service
 # autogenerated by Podman 4.9.4-rhel
 # Sat Aug  3 20:49:32 MST 2024

 [Unit]
 Description=Podman container-rootful-cont-port.service
 Documentation=man:podman-generate-systemd(1)
 Wants=network-online.target
 After=network-online.target
 RequiresMountsFor=%t/containers

 [Service]
 Environment=PODMAN_SYSTEMD_UNIT=%n
 Restart=on-failure
  TimeoutStopSec=70
 ExecStart=/usr/bin/podman run \
	--cidfile=%t/%n.ctr-id \
	--cgroups=no-conmon \
	--rm \
	--sdnotify=conmon \
	-d \
	--replace \
	-it \
	--name rootful-cont-port \
	-p 80:8080 ubi9
 ExecStop=/usr/bin/podman stop \
	--ignore -t 10 \
	--cidfile=%t/%n.ctr-id
 ExecStopPost=/usr/bin/podman rm \
	-f \
	--ignore -t 10 \
	--cidfile=%t/%n.ctr-id
 Type=notify
 NotifyAccess=all

 [Install]
 WantedBy=default.target
  • Stop and remove the source container rootful-cont-port.
 [root@server30 ~]# podman stop rootful-cont-port
 WARN[0010] StopSignal SIGTERM failed to stop container rootful-cont-port in 10 seconds, resorting to SIGKILL
 rootful-cont-port
 [root@server30 ~]# podman ps
 CONTAINER ID  IMAGE                                   COMMAND     CREATED         STATUS         PORTS       NAMES
 fe0d07718dda  registry.access.redhat.com/ubi9:latest  /bin/bash   16 minutes ago  Up 16 minutes              rootful-container
 [root@server30 ~]# podman rm rootfil-cont-port
 Error: no container with ID or name "rootfil-cont-port" found: no such container
 [root@server30 ~]# podman rm rootful-cont-port
 rootful-cont-port
  • Add the support for the new service to systemd and enable the service to auto-start at system reboots.
 [root@server30 ~]# systemctl daemon-reload
 [root@server30 ~]# systemctl enable --now rootful-cont-port
 Created symlink /etc/systemd/system/default.target.wants/rootful-cont-port.service → /etc/systemd/system/rootful-cont-port.service.
  • Reboot server30 and confirm a successful start of the container service and the container.
 [root@server30 ~]# reboot

 [root@server30 ~]# podman ps
 CONTAINER ID  IMAGE                                   COMMAND     CREATED             STATUS              PORTS                 NAMES
 5c030407a7d6  registry.access.redhat.com/ubi9:latest  /bin/bash   About a minute ago  Up About a minute  0.0.0.0:80->8080/tcp  rootful-cont-port
 9d1e8a429ac6  registry.access.redhat.com/ubi9:latest  /bin/bash   About a minute ago  Up About a minute                        rootful-container
 [root@server30 ~]# 

Lab 22-7: Build Custom Image Using Containerfile

  • As conadm on server30, write a containerfile to use the latest version of ubi8 and create a user account called user-in-container in the resultant custom image.
 [conadm@server30 ~]$ vim containerfile
 FROM registry.access.redhat.com/ubi8/ubi:latest

 RUN useradd -ms /bin/bash -u 1001 user-in-container

 USER 1001
 [conadm@server30 ~]$ podman image build -f containerfile --no-cache -t ubi8-user .
 STEP 1/3: FROM registry.access.redhat.com/ubi8/ubi:latest
 STEP 2/3: RUN useradd -ms /bin/bash -u 1001 user-in-container
 --> b330095e91eb
 STEP 3/3: USER 1001
 COMMIT ubi8-user
 --> e8cde30fc020
 Successfully tagged localhost/ubi8-user:latest
 e8cde30fc020051caa2a4e2f58aaaf90f088709462a1314b936fd608facfdb5e
  • Test the image by launching a container in interactive mode and verifying the user.
 [conadm@server30 ~]$ podman run -ti --name test12 ubi8-user
 [user-in-container@30558ffcb227 /]$
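As a quick one-off check, the image can also be run with a single command; a sketch (output should report uid 1001 for user-in-container):

 podman run --rm ubi8-user id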

Subsections of Cyber Security

Security Enhanced Linux

SELinux Terminology

  • Implementation of the Mandatory Access Control (MAC) architecture

    • developed by the U.S. National Security Agency (NSA)
    • flexible, enriched, and granular security controls in Linux.
    • integrated into the Linux kernel as a set of patches using the Linux Security Modules (LSM) framework
      • allows the kernel to support various security implementations
    • provides an added layer of protection above and beyond the standard Linux Discretionary Access Control (DAC) security architecture.
    • DAC includes:
      • traditional file and directory permissions
      • extended attribute settings
      • setuid/setgid bits
      • su/sudo privileges
      • etc.
    • Limits the ability of a subject (Linux user or process) to access an object (file, directory, file system, device, network interface/connection, port, pipe, socket, etc.)
      • To reduce or eliminate the potential damage the subject may be able to inflict on the system if compromised.
  • MAC controls

    • Fine-grained
    • Protect other services in the event one service is compromised.
    • Example:
      • If the HTTP service process is compromised, the attacker can only damage the files the hacked process will have access to, and not the other processes running on the system, or the objects the other processes will have access to.
    • To ensure this fine-grained control, MAC uses a set of defined authorization rules called policy to examine security attributes associated with subjects and objects when a subject tries to access an object, and decides whether to permit the access attempt.
    • These attributes are stored in contexts (a.k.a. labels), and are applied to both subjects and objects.
  • SELinux decisions are stored in a special cache area called Access Vector Cache (AVC).

  • This cache area is checked for each access attempt by a process to determine whether the access attempt was previously allowed.

  • With this mechanism in place, SELinux does not have to check the policy ruleset repeatedly, thus improving performance.

  • SELinux is enabled by default

    • Confines processes to the bare minimum privileges that they need to function.

Key Terms

Subject

  • Any user or process that accesses an object.
  • Examples:
    • system_u for the SELinux system user
    • unconfined_u for subjects that are not bound by the SELinux policy
  • Stored in field 1 of the context.

Object

  • A resource, such as a file, directory, hardware device, network interface/connection, port, pipe, or socket, that a subject accesses.
  • Examples:
    • object_r for general objects
    • system_r for system-owned objects
    • unconfined_r for objects that are not bound by the SELinux policy.

Access

  • An action performed by the subject on an object.
  • Examples:
    • creating, reading, or updating a file
    • creating or navigating a directory
    • accessing a network port or socket

Policy

  • A defined ruleset that is enforced system-wide
  • Used to analyze security attributes assigned to subjects and objects.
  • Referenced to decide whether to permit a subject’s access attempt to an object, or a subject’s attempt to interact with another subject.
  • The default behavior of SELinux in the absence of a rule is to deny the access.
  • Standard preconfigured policies:
    • targeted (default)
      • Any process that is targeted runs in a confined domain
      • Any process that is not targeted runs in an unconfined domain.
      • Example:
        • SELinux runs logged-in users in the unconfined domain, and the httpd process in a confined domain by default.
        • Any subject running unconfined is more vulnerable than the one running confined.
    • mls
      • Places tight security controls at deeper levels.
    • minimum
      • light version of the targeted policy
      • designed to protect only selected processes.

Context (label)

  • A tag to store security attributes for subjects and objects.
  • Every subject and object has a context assigned
  • Consists of a SELinux user, role, type (or domain), and sensitivity level.
  • Used by SELinux to make access control decisions.

Labeling

  • Mapping of files with their stored contexts.

SELinux User

  • Several predefined SELinux user identities that are authorized for a particular set of roles.
  • Linux user to SELinux user identity mapping to place SELinux user restrictions on Linux users.
  • Controls what roles and levels a process (with a particular SELinux user identity) can enter.
  • Example:
    • A Linux user cannot run the su and sudo commands or the programs located in their home directories if they are mapped to the SELinux user user_u.
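A minimal sketch of creating and removing such a mapping with semanage (assuming a Linux user user1 exists):

 semanage login -a -s user_u user1    # confine user1 to the user_u identity
 semanage login -d user1              # delete the mapping again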

Role

  • An attribute of the Role-Based Access Control (RBAC) security model that is part of SELinux.
  • Classifies who (subject) is allowed to access what (domains or types).
  • SELinux users are authorized for roles, and roles are authorized for domains and types.
  • Each subject has an associated role to ensure that the system and user processes are separated.
  • A subject can transition into a new role to gain access to other domains and types.
  • Examples:
    • user_r for ordinary users
    • sysadm_r for administrators
    • system_r for processes that initiate under the system_r role
  • Stored in field 2 of the context.

Type Enforcement (TE)

  • Identifies and limits a subject’s ability to access domains for processes, and types for files.
  • References the contexts of the subjects and objects for this enforcement.

Type

  • Attribute of type enforcement.
  • Group of objects based on uniformity in their security requirements.
  • Objects such as files and directories with common security requirements, are grouped within a specific type.
  • Examples:
    • user_home_dir_t for objects located in user home directories
    • usr_t for most objects stored in the /usr directory.
  • Stored in field 3 of a file context.

Domain

  • Determines the type of access that a process has.
  • Processes with common security requirements are grouped within a specific domain type, and they run confined within that domain.
  • Examples:
    • init_t for the systemd process
    • firewalld_t for the firewalld process
    • unconfined_t for all processes that are not bound by SELinux policy.
  • Stored in field 3 of a process context.

Rules

  • Outline how types can access each other, domains can access types, and domains can access each other.

Level

  • An attribute of Multi-Level Security (MLS) and Multi-Category Security (MCS).
  • Pair of sensitivity:category values that defines the level of security in the context.
  • category may be defined as a single value or a range of values, such as c0.c4 to represent c0 through c4.
  • In RHEL 9, the targeted policy is used as the default, which enforces MCS (MCS supports only one sensitivity level (s0) with 0 to 1023 different categories).

SELinux Contexts

SELinux Contexts for Users

  • SELinux contexts define security attributes placed on subjects and objects.
  • Each context contains a type and a security level with subject and object information.

Use the id command with the -Z option to view the context set on Linux users:

[root@server30 ~]# id -Z
unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023

Output:

  • Mapped to the SELinux unconfined_u user

  • No SELinux restrictions placed on this user.

  • All Linux users, including root, run unconfined by default (full system access)

  • Seven confined user identities with restricted access to objects.

    • Mapped to Linux users via SELinux policy.
    • Helps safeguard the system from potential damage that Linux users might inflict on the system.

Use the seinfo query command to list the SELinux users; however, the setools-console software package must be installed before doing so.

[root@server30 ~]# seinfo -u

Users: 8
   guest_u
   root
   staff_u
   sysadm_u
   system_u
   unconfined_u
   user_u
   xguest_u

Use the semanage command to view the mapping between Linux and SELinux users:

[root@server30 ~]# semanage login -l

Login Name           SELinux User         MLS/MCS Range        Service

__default__          unconfined_u         s0-s0:c0.c1023       *
root                 unconfined_u         s0-s0:c0.c1023       *

MLS/MCS Range

  • Associated security level and the context for the Linux user (the * represents all services).
  • By default, all non-root Linux users are represented as __default__, which is mapped to the unconfined_u user in the policy.

SELinux Contexts for Processes

Determine the context for processes using the ps command with the -Z flag:

[root@server30 ~]# ps -eZ | head -2
LABEL                               PID TTY          TIME CMD
system_u:system_r:init_t:s0           1 ?        00:00:02 systemd

Output:

  • The SELinux user system_u is mapped to the Linux user root

  • The role is system_r

  • Domain init_t reveals the type of protection applied to the process.

  • Level of security s0

  • A process that is unprotected will run in the unconfined_t domain.

SELinux Contexts for Files

ls -Z

  • View context for files and directories.

Show the four attributes set on the /etc/passwd file:

[root@server30 ~]# ls -lZ /etc/passwd
-rw-r--r--. 1 root root system_u:object_r:passwd_file_t:s0 2806 Jul 19 21:54 /etc/passwd
  • SELinux user system_u
  • Role object_r
  • Type passwd_file_t
  • Security level s0 for the passwd file.

/etc/selinux/targeted/contexts/files/file_contexts /etc/selinux/targeted/contexts/files/file_contexts.local

  • Stores contexts for system-installed and user-created files.
  • Policy files.
  • Can be updated using the semanage command.

Copying, Moving, and Archiving Files with SELinux Contexts

  • All files in RHEL are labeled with an SELinux security context by default.
  • New files inherit the parent directory’s context at the time of creation.

Rules for copy, move, and archive operations (see the sketch after the list):

  1. If a file is copied to a different directory, the destination file will receive the destination directory’s context, unless the --preserve=context switch is specified with the cp command to retain the source file’s original context.

  2. If a copy operation overwrites the destination file in the same or different directory, the file being copied will receive the context of the overwritten file, unless the --preserve=context switch is specified with the cp command to preserve the source file’s original context.

  3. If a file is moved to the same or different directory, the SELinux context will remain intact, which may differ from the destination directory’s context.

  4. If a file is archived with the tar command, use the --selinux option to preserve the context.
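A short sketch illustrating rules 1 and 4 (paths are illustrative):

 cp --preserve=context /tmp/file1 /etc/default/    # rule 1: retain the source file's context
 tar --selinux -cf /tmp/arch.tar /tmp/sedir1       # rule 4: store contexts in the archive
 tar --selinux -xf /tmp/arch.tar                   # restore contexts on extraction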

SELinux Contexts for Ports

View attributes for network ports with the semanage command:

[root@server30 ~]# semanage port -l | head -7
SELinux Port Type              Proto    Port Number

afs3_callback_port_t           tcp      7001
afs3_callback_port_t           udp      7001
afs_bos_port_t                 udp      7007
afs_fs_port_t                  tcp      2040
afs_fs_port_t                  udp      7000, 7005
  • By default, SELinux allows services to listen on a restricted set of network ports only.

Domain Transitioning

  • SELinux allows a process running in one domain to enter another domain to execute an application that is restricted to run in that domain only.
  • A rule must exist in the policy to support such transition.
  • entrypoint
    • A permission setting
    • Controls which processes can transition into another domain.

Example: What happens when a Linux user attempts to change their password using the /usr/bin/passwd command.

The passwd command is labeled with the passwd_exec_t type:

[root@server30 ~]# ls -lZ /usr/bin/passwd
-rwsr-xr-x. 1 root root system_u:object_r:passwd_exec_t:s0 32648 Aug 10  2021 /usr/bin/passwd

The passwd command requires access to the /etc/shadow file in order to modify a user password. The shadow file has a different type set on it (shadow_t):

[root@server30 ~]# ls -lZ /etc/shadow
----------. 1 root root system_u:object_r:shadow_t:s0 2756 Jul 19 21:54 /etc/shadow
  • The SELinux policy has rules that specifically allow processes running in domain passwd_t to read and modify the files with type shadow_t, and allow them entrypoint permission into domain passwd_exec_t.
  • This rule enables the user’s shell process executing the passwd command to switch into the passwd_t domain and update the shadow file.

Open two terminal windows. In window 1, issue the passwd command as user1 and wait at the prompt:

[user1@server30 root]$ passwd
Changing password for user user1.
Current password: 

In window 2, run the ps command:

[root@server30 ~]# ps -eZ | grep passwd
unconfined_u:unconfined_r:passwd_t:s0-s0:c0.c1023 13001 pts/1 00:00:00 passwd
  • The passwd command (process) transitioned into the passwd_t domain to change the user password.

SELinux Booleans

  • on/off switches that SELinux uses to determine whether to permit an action.
  • Activate or deactivate certain rules in the SELinux policy immediately, without the need to recompile or reload the policy.
  • For instance, the ftpd_anon_write Boolean can be turned on to enable anonymous users to upload files.
  • This privilege can be revoked by turning this Boolean off.
  • Boolean values are stored in virtual files located in /sys/fs/selinux/booleans/.
  • The filenames match the Boolean names.

A sample listing of this directory is provided below:

[root@server30 ~]# ls -l /sys/fs/selinux/booleans/ | head -7
total 0
-rw-r--r--. 1 root root 0 Jul 23 04:44 abrt_anon_write
-rw-r--r--. 1 root root 0 Jul 23 04:44 abrt_handle_event
-rw-r--r--. 1 root root 0 Jul 23 04:44 abrt_upload_watch_anon_write
-rw-r--r--. 1 root root 0 Jul 23 04:44 antivirus_can_scan_system
-rw-r--r--. 1 root root 0 Jul 23 04:44 antivirus_use_jit
-rw-r--r--. 1 root root 0 Jul 23 04:44 auditadm_exec_content
  • The manual pages of the Booleans are available through the selinux-policy-doc package.

  • Once installed, use the -K option with the man command to bring the pages up for a specific Boolean.

  • For instance, issue man -K abrt_anon_write to view the manual pages for the abrt_anon_write Boolean.

  • Booleans can be viewed, and flipped temporarily or persistently (see the sketch after this list).

  • New value takes effect right away.

  • Temporary changes are stored as a “1” or “0” in the corresponding Boolean file under /sys/fs/selinux/booleans/

  • Permanent changes are saved in the policy database.
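A minimal sketch of both kinds of change, using the ftpd_anon_write Boolean mentioned above:

 setsebool ftpd_anon_write on       # temporary; reverts at the next reboot
 setsebool -P ftpd_anon_write on    # persistent; saved in the policy database
 getsebool ftpd_anon_write          # verify the current value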

SELinux Administration

  • Controlling the activation mode, checking operational status, setting security contexts on subjects and objects, and switching Boolean values.

Utilities and the commands they provide

  • libselinux-utils
    • getenforce
    • getsebool
  • policycoreutils
    • sestatus
    • setsebool
    • restorecon
  • policycoreutils-python-utils
    • semanage
  • setools-console
    • seinfo
    • sesearch

SELinux Alert Browser

  • Graphical tool for viewing alerts and debugging SELinux issues.

  • Part of the setroubleshoot-server package.

  • In order to fully manage SELinux, you need to ensure that all these packages are installed on the system.

Management Commands

SELinux delivers a variety of commands for effective administration. Table 20-1 lists and describes the commands mentioned above plus a few more under various management categories.

Mode Management

getenforce

  • Displays the current mode of operation

grubby

  • Updates and displays information about the configuration files for the grub2 boot loader

sestatus

  • Shows SELinux runtime status and Boolean values

setenforce

  • Switches the operating mode between enforcing and permissive temporarily

Context Management

chcon

  • Changes context on files (changes do not survive file system relabeling)

restorecon

  • Restores default contexts on files by referencing the files in /etc/selinux/targeted/contexts/files/

semanage

  • Changes context on files with the fcontext subcommand (changes survive file system relabeling)

Policy Management

seinfo

  • Provides information on policy components

semanage

  • Manages policy database

sesearch

  • Searches rules in the policy database
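For instance, the rule behind the passwd/shadow example shown earlier could be located with something like the following (a sketch; requires the setools-console package):

 sesearch --allow -s passwd_t -t shadow_t -c file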

Boolean Management

getsebool

  • Displays Booleans and their current settings.

setsebool

  • Modifies Boolean values temporarily, or in the policy database.

semanage

  • Modifies Boolean values in the policy database with the boolean subcommand.

Troubleshooting

sealert

  • The graphical troubleshooting tool

Viewing and Controlling SELinux Operational State

/etc/selinux/config

  • One of the key configuration files that controls the SELinux operational state, and sets its default type

The default content of the file is displayed below:

[root@server30 ~]# cat /etc/selinux/config

# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
# See also:
# https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/9/html/using_selinux/changing-selinux-states-and-modes_using-selinux#changing-selinux-modes-at-boot-time_changing-selinux-states-and-modes
#
# NOTE: Up to RHEL 8 release included, SELINUX=disabled would also
# fully disable SELinux during boot. If you need a system with SELinux
# fully disabled instead of SELinux running with no policy loaded, you
# need to pass selinux=0 to the kernel command line. You can use grubby
# to persistently set the bootloader to boot with selinux=0:
#
#    grubby --update-kernel ALL --args selinux=0
#
# To revert back to SELinux enabled:
#
#    grubby --update-kernel ALL --remove-args selinux
#
SELINUX=enforcing
# SELINUXTYPE= can take one of these three values:
#     targeted - Targeted processes are protected,
#     minimum - Modification of targeted policy. Only selected processes are protected.
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted

Directives:

SELINUX

  • Sets the activation mode for SELinux.
  • Enforcing activates it and allows or denies actions based on the policy rules.
  • Permissive activates SELinux, but permits all actions.
    • It records all security violations.
    • Useful for troubleshooting and developing or tuning the policy.
  • The third option, disabled, turns SELinux off completely.

SELINUXTYPE

  • Dictates the type of policy to be enforced.
  • Default is targeted.

Determine the current operating mode: getenforce

Change the state to permissive and verify:

[root@server30 ~]# setenforce permissive
[root@server30 ~]# getenforce
Permissive
  • Can also use “0” for permissive and “1” for enforcing.
  • Changes will be lost at the next system reboot.
  • Edit /etc/selinux/config SELINUX directive to the desired mode for persistence.

EXAM TIP: You may switch SELinux to permissive for troubleshooting a non-functioning service. Don’t forget to change it back to enforcing when the issue is resolved.

Disable SELinux persistently: grubby --update-kernel ALL --args selinux=0

  • Appends the selinux=0 setting to the end of the “options” line in the bootloader configuration file located in the /boot/loader/entries directory:
cat /boot/loader/entries/dcb323fab47049e8b89dae2ae00d41e8-5.14.0-427.26.1.el9_4.x86_64.conf 

Revert the above: grubby --update-kernel ALL --remove-args selinux=0

Querying Status

sestatus Command

  • View the current runtime status of SELinux
  • Displays the location of principal directories, the policy in effect, and the activation mode.
[root@server30 ~]# sestatus
SELinux status:                 enabled
SELinuxfs mount:                /sys/fs/selinux
SELinux root directory:         /etc/selinux
Loaded policy name:             targeted
Current mode:                   permissive
Mode from config file:          enforcing
Policy MLS status:              enabled
Policy deny_unknown status:     allowed
Memory protection checking:     actual (secure)
Max kernel policy version:      33

-v

  • Report on security contexts set on files and processes, as listed in /etc/sestatus.conf.
  • Reports the contexts for the current process (Current context) and the init (systemd) process (Init context) under Process Contexts.
  • Reveals the file contexts for the controlling terminal and associated files under File Contexts.
[root@server30 ~]# cat /etc/sestatus.conf
[files]
/etc/passwd
/etc/shadow
/bin/bash
/bin/login
/bin/sh
/sbin/agetty
/sbin/init
/sbin/mingetty
/usr/sbin/sshd
/lib/libc.so.6
/lib/ld-linux.so.2
/lib/ld.so.1

[process]
/sbin/mingetty
/sbin/agetty
/usr/sbin/sshd
[root@server30 ~]# sestatus -v
SELinux status:                 enabled
SELinuxfs mount:                /sys/fs/selinux
SELinux root directory:         /etc/selinux
Loaded policy name:             targeted
Current mode:                   permissive
Mode from config file:          enforcing
Policy MLS status:              enabled
Policy deny_unknown status:     allowed
Memory protection checking:     actual (secure)
Max kernel policy version:      33

Process contexts:
Current context:                unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
Init context:                   system_u:system_r:init_t:s0
/sbin/agetty                    system_u:system_r:getty_t:s0-s0:c0.c1023
/usr/sbin/sshd                  system_u:system_r:sshd_t:s0-s0:c0.c1023

File contexts:
Controlling terminal:           unconfined_u:object_r:user_devpts_t:s0
/etc/passwd                     system_u:object_r:passwd_file_t:s0
/etc/shadow                     system_u:object_r:shadow_t:s0
/bin/bash                       system_u:object_r:shell_exec_t:s0
/bin/login                      system_u:object_r:login_exec_t:s0
/bin/sh                         system_u:object_r:bin_t:s0 -> system_u:object_r:shell_exec_t:s0
/sbin/agetty                    system_u:object_r:getty_exec_t:s0
/sbin/init                      system_u:object_r:bin_t:s0 -> system_u:object_r:init_exec_t:s0
/usr/sbin/sshd                  system_u:object_r:sshd_exec_t:s0

Lab: Modify SELinux File Context

  • Create a directory sedir1 under /tmp and a file sefile1 under sedir1.
  • Check the context on the directory and file.
  • Change the SELinux user and type to user_u and public_content_t on both and verify.

1. Create the hierarchy sedir1/sefile1 under /tmp:

 [root@server30 ~]# cd /tmp
 [root@server30 tmp]# mkdir sedir1
 [root@server30 tmp]# touch sedir1/sefile1

2. Determine the context on the new directory and file:

 [root@server30 tmp]# ls -ldZ sedir1
 drwxr-xr-x. 2 root root unconfined_u:object_r:user_tmp_t:s0 21 Jul 28 15:12 sedir1
 [root@server30 tmp]# ls -ldZ sedir1/sefile1
 -rw-r--r--. 1 root root unconfined_u:object_r:user_tmp_t:s0 0 Jul 28 15:12 sedir1/sefile1

3. Modify the SELinux user (-u) on the directory to user_u and type (-t) to public_content_t recursively (-R) with the chcon command:

 [root@server30 tmp]# chcon -vu user_u -t public_content_t sedir1 -R
 changing security context of 'sedir1/sefile1'
 changing security context of 'sedir1'

4. Validate the new context:

 [root@server30 tmp]# ls -ldZ sedir1
 drwxr-xr-x. 2 root root user_u:object_r:public_content_t:s0 21 Jul 28 15:12 sedir1
 [root@server30 tmp]# ls -ldZ sedir1/sefile1
 -rw-r--r--. 1 root root user_u:object_r:public_content_t:s0 0 Jul 28 15:12 sedir1/sefile1

Lab: Add and Apply File Context

  • Add the current context on sedir1 to the SELinux policy database to ensure a relabeling will not reset it to its previous value
  • Change the context on the directory to some random values.
  • Restore the default context from the policy database back to the directory recursively.
  1. Determine the current context:
 [root@server30 tmp]# ls -ldZ sedir1
 drwxr-xr-x. 2 root root user_u:object_r:public_content_t:s0 21 Jul 28 15:12 sedir1
 [root@server30 tmp]# ls -ldZ sedir1/sefile1
 -rw-r--r--. 1 root root user_u:object_r:public_content_t:s0 0 Jul 28 15:12 sedir1/sefile1
  2. Add (-a) the directory recursively to the policy database using the semanage command with the fcontext subcommand:
 [root@server30 tmp]# semanage fcontext -a -s user_u -t public_content_t "/tmp/sedir1(/.*)?"
  • The regular expression (/.*)? instructs the command to include all files and subdirectories under /tmp/sedir1.
    • Needed only if recursion is required.

The above command added the context to the /etc/selinux/targeted/contexts/files/file_contexts.local file.

  3. Validate the addition by listing (-l) the recent changes (-C) in the policy database:
 [root@server30 tmp]# semanage fcontext -Cl | grep sedir
 /tmp/sedir1(/.*)?                                  all files          user_u:object_r:public_content_t:s0
  4. Change the current context on sedir1 to something random (staff_u/etc_t) with the chcon command:
 [root@server30 tmp]# chcon -vu staff_u -t etc_t sedir1 -R
 changing security context of 'sedir1/sefile1'
 changing security context of 'sedir1'
  5. The security context is changed successfully. Confirm with the ls command:
 [root@server30 tmp]# ls -ldZ sedir1 ; ls -lZ sedir1/sefile1
 drwxr-xr-x. 2 root root staff_u:object_r:etc_t:s0 21 Jul 28 15:12 sedir1
 -rw-r--r--. 1 root root staff_u:object_r:etc_t:s0 0 Jul 28 15:12 sedir1/sefile1
  6. Reinstate the context on the sedir1 directory recursively (-R) as stored in the policy database using the restorecon command (the -F option resets all context attributes; by default only the type is restored):
 [root@server30 tmp]# restorecon -R -v -F sedir1
 Relabeled /tmp/sedir1 from staff_u:object_r:etc_t:s0 to user_u:object_r:public_content_t:s0
 Relabeled /tmp/sedir1/sefile1 from staff_u:object_r:etc_t:s0 to user_u:object_r:public_content_t:s0

Lab: Add and Delete Network Ports

  • Add a non-standard network port 8010 to the SELinux policy database for the httpd service.
  • Confirm the addition.
  • Remove the port from the policy and verify the deletion.
  1. List (-l) the ports for the httpd service as defined in the SELinux policy database:
 [root@server10 ~]# semanage port -l | grep ^http_port
 http_port_t                    tcp      80, 81, 443, 488, 8008, 8009, 8443, 9000

The output reveals eight network ports the httpd process is currently allowed to listen on.

  2. Add port 8010 with type http_port_t and protocol tcp to the policy:
 [root@server10 ~]# semanage port -at http_port_t -p tcp 8010
  3. Confirm the addition:
 [root@server10 ~]# semanage port -l | grep ^http_port
 http_port_t                    tcp      8010, 80, 81, 443, 488, 8008, 8009, 8443, 9000
  4. Delete port 8010 from the policy and confirm:
 [root@server10 ~]# semanage port -dp tcp 8010
 [root@server10 ~]# semanage port -l | grep ^http_port
 http_port_t                    tcp      80, 81, 443, 488, 8008, 8009, 8443, 9000

EXAM TIP: For any non-standard port you want to use for a service, make certain to add it to the SELinux policy database with the correct type.

Lab: Copy Files with and without Context

  • Create a file called sefile2 under /tmp and display its context.
  • Copy this file to the /etc/default directory, and observe the change in the context.
  • Remove sefile2 from /etc/default, and copy it again to the same destination, ensuring that the target file receives the source file’s context.

1. Create file sefile2 under /tmp and show context:

 [root@server10 ~]# touch /tmp/sefile2
 [root@server10 ~]# ls -lZ /tmp/sefile2
 -rw-r--r--. 1 root root unconfined_u:object_r:user_tmp_t:s0 0 Jul 29 08:44 /tmp/sefile2

2. Copy this file to the /etc/default directory, and check the context again:

 [root@server10 ~]# cp /tmp/sefile2 /etc/default/

 [root@server10 ~]# ls -lZ /etc/default/sefile2
 -rw-r--r--. 1 root root unconfined_u:object_r:etc_t:s0 0 Jul 29 08:45 /etc/default/sefile2

3. Erase the /etc/default/sefile2 file, and copy it again with the --preserve=context option:

 [root@server10 ~]# rm /etc/default/sefile2

 [root@server10 ~]# cp --preserve=context /tmp/sefile2 /etc/default

4. List the file to view the context:

 [root@server10 ~]# ls -lZ /etc/default/sefile2
 -rw-r--r--. 1 root root unconfined_u:object_r:user_tmp_t:s0 0 Jul 29 08:49 /etc/default/sefile2

Exercise 20-5: View and Toggle SELinux Boolean Values

  • Display the current state of the Boolean nfs_export_all_rw.
  • Toggle its value temporarily, and reboot the system.
  • Flip its value persistently after the system has been back up.

1. Display the current setting of the Boolean nfs_export_all_rw using three different commands—getsebool, sestatus, and semanage:

 [root@server10 ~]# getsebool -a | grep nfs_export_all_rw
 nfs_export_all_rw --> on

 [root@server10 ~]# sestatus -b | grep nfs_export_all_rw
 nfs_export_all_rw                           on

 [root@server10 ~]# semanage boolean -l | grep nfs_export_all_rw
 nfs_export_all_rw              (on   ,   on)  Allow nfs to export all rw
 [root@server10 ~]# 

2. Turn off the value of nfs_export_all_rw using the setsebool command by simply furnishing “off” or “0” with it and confirm:

 [root@server10 ~]# setsebool nfs_export_all_rw 0

 [root@server10 ~]# getsebool -a | grep nfs_export_all_rw
 nfs_export_all_rw --> off

3. Reboot the system and rerun the getsebool command to check the Boolean state:

 [root@server10 ~]# getsebool -a | grep nfs_export_all_rw
 nfs_export_all_rw --> on

4. Set the value of the Boolean persistently (-P or -m as needed) using either of the following:

 [root@server10 ~]# setsebool -P nfs_export_all_rw off
 [root@server10 ~]# semanage boolean -m -0 nfs_export_all_rw

5. Validate the new value using the getsebool, sestatus, or semanage command:

 [root@server10 ~]# sestatus -b | grep nfs_export_all_rw
 nfs_export_all_rw                           off
 [root@server10 ~]# semanage boolean -l | grep nfs_export_all_rw
 nfs_export_all_rw              (off  ,  off)  Allow nfs to export all rw

Monitoring and Analyzing SELinux Violations

  • SELinux generates alerts for system activities when it runs in enforcing or permissive mode.

  • It writes the alerts to /var/log/audit/audit.log if the auditd daemon is running, or to /var/log/messages via the rsyslog daemon in the absence of auditd.

  • SELinux also logs the alerts that are generated due to denial of an action, and identifies them with a type tag AVC (Access Vector Cache) in the audit.log file.

  • It also writes the rejection in the messages file with a message ID, and how to view the message details.

  • SELinux denial messages are analyzed, and the audit data is examined to identify the potential cause of the rejection.

  • The results of the analysis are recorded with recommendations on how to fix it.

  • These results can be reviewed to aid in troubleshooting, and recommended actions taken to address the issue.

  • SELinux runs a service daemon called setroubleshootd that performs this analysis and examination in the background.

  • This service also has a client interface called SELinux Troubleshooter (the sealert command) that reads the data and displays it for assessment.

  • The client tool has both text and graphical interfaces.

  • The server and client components are part of the setroubleshoot-server software package that must be installed on the system prior to using this service.

How SELinux handles an incoming access request (from a subject) to a target object:

Subject (e.g. a process) makes an action request (e.g. read) > the SELinux security server checks the SELinux policy database > if permission is not granted, an AVC denied message is displayed; if permission is granted, access to the object (e.g. a file) is allowed.

su to root from user1 and view the log:

 [root@server10 ~]# cat /var/log/audit/audit.log | tail -10
 ...
 type=USER_START msg=audit(1722274070.748:90): pid=1394 uid=1000 auid=0 ses=1 subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 msg='op=PAM:session_open grantors=pam_keyinit,pam_limits,pam_systemd,pam_unix,pam_umask,pam_xauth acct="root" exe="/usr/bin/su" hostname=? addr=? terminal=/dev/pts/0 res=success' UID="user1" AUID="root"

The log will show an AVC denied record if an action was denied.
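To pull just the denial records out of the log, something like the following can be used (a sketch; ausearch is part of the audit package):

 ausearch -m AVC -ts recent                   # AVC denials from the last 10 minutes
 grep "type=AVC" /var/log/audit/audit.log     # raw denial records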

Lab: Get an AVC Deny Message

  • Change the SELinux type on the shadow file to something random (etc_t).
  • Issue the passwd command as user1 to modify the password.
  • Restore the type on the shadow file with restorecon /etc/shadow.
  • Re-try the password change.
  1. Change the SELinux type on /etc/shadow to something random (etc_t):
 [root@server10 ~]# chcon -vt etc_t /etc/shadow
 changing security context of '/etc/shadow'
  2. Issue the passwd command as user1 to modify the password:
 [root@server10 ~]# su user1
 [user1@server10 root]$ passwd
 Changing password for user user1.
 Current password: 
 passwd: Authentication token manipulation error

The corresponding denial record in the same file (raw format) contains the following key fields:

  • AVC type
  • Related to the passwd command (comm)
  • Source context (scontext) unconfined_u:unconfined_r:passwd_t:s0-s0:c0.c1023
  • shadow file (name) with object class (tclass) “file”
  • Target context (tcontext) system_u:object_r:etc_t:s0
  • permissive=0 indicates SELinux was running in enforcing mode.
  • This message indicates that the /etc/shadow file does not have the correct context set on it, which is why SELinux prevented the passwd command from updating the user’s password.

Use sealert to analyze (-a) all AVC records in the audit.log file. This command produces a formatted report with all relevant details:
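 sealert -a /var/log/audit/audit.log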

SELinux DIY Labs

Lab: Disable and Enable the SELinux Operating Mode

  • Check and make a note of the current SELinux operating mode.
 [root@server30 ~]# getenforce
 Enforcing
  • Modify the configuration file and set the mode to disabled.
 [root@server30 ~]# vim /etc/selinux/config 
 SELINUX=disabled
  • Reboot the system to apply the change.
 [root@server30 ~]# reboot
  • Run sudo getenforce to confirm the change when the system is up.
 [root@server30 ~]# getenforce
 Disabled
  • Restore the directive’s value to enforcing in the configuration file, and reboot to apply the new mode.
 [root@server30 ~]# vim /etc/selinux/config

 SELINUX=enforcing

 [root@server30 ~]# reboot
  • Run sudo getenforce to confirm the mode when the system is up.
 [root@server30 ~]# getenforce
 Enforcing

Lab: Modify Context on Files

  • Create directory hierarchy /tmp/d1/d2.
 mkdir -p /tmp/d1/d2
  • Check the contexts on /tmp/d1 and /tmp/d1/d2.
 [root@server30 d1]# ls -ldZ /tmp/d1
 drwxr-xr-x. 3 root root unconfined_u:object_r:user_tmp_t:s0 16 Jul 29 13:17 /tmp/d1
 [root@server30 d1]# ls -ldZ /tmp/d1/d2
 drwxr-xr-x. 2 root root unconfined_u:object_r:user_tmp_t:s0 6 Jul 29  13:17 /tmp/d1/d2
  • Change the SELinux type on /tmp/d1 to etc_t recursively with the chcon command and confirm.
 [root@server30 tmp]# chcon -Rv -t etc_t /tmp/d1
 changing security context of '/tmp/d1/d2'
 changing security context of '/tmp/d1'

 [root@server30 tmp]# ls -ldZ /tmp/d1
 drwxr-xr-x. 3 root root unconfined_u:object_r:etc_t:s0 16 Jul 29  13:17 /tmp/d1

 [root@server30 tmp]# ls -ldZ /tmp/d1/d2
 drwxr-xr-x. 2 root root unconfined_u:object_r:etc_t:s0 6 Jul 29 13:17 /tmp/d1/d2
  • Add /tmp/d1 to the policy database with the semanage command to ensure the new context is persistent on the directory hierarchy.
 [root@server30 tmp]# semanage fcontext -a -t etc_t /tmp/d1

 [root@server30 tmp]# reboot

 [root@server30 ~]# ls -ldZ /tmp/d1
 drwxr-xr-x. 3 root root unconfined_u:object_r:etc_t:s0 16 Jul 29  13:17 /tmp/d1

 [root@server30 ~]# ls -ldZ /tmp/d1/d2
 drwxr-xr-x. 2 root root unconfined_u:object_r:etc_t:s0 6 Jul 29 13:17 /tmp/d1/d2

Lab: Add Network Port to Policy Database

  • Add network port 9005 to the SELinux policy database for the secure HTTP service using the semanage command.
 [root@server30 ~]# semanage port -l | grep ^http_port
 http_port_t                    tcp      80, 81, 443, 488, 8008, 8009, 8443, 9000

 [root@server30 ~]# semanage port -at http_port_t -p tcp 9005
  • Verify the addition.
 [root@server30 ~]# semanage port -l | grep ^http_port
 http_port_t                    tcp      9005, 80, 81, 443, 488, 8008, 8009, 8443, 9000

Lab: Copy Files with and without Context

  • Create file sef1 under /tmp.
 [root@server30 ~]# touch /tmp/sef1
  • Copy the file to the /usr/local directory.
 [root@server30 ~]# cp /tmp/sef1 /usr/local
  • Check and compare the contexts on both source and destination files.
 [root@server30 ~]# ls -lZ /tmp/sef1
 -rw-r--r--. 1 root root unconfined_u:object_r:user_tmp_t:s0 0 Jul 29  13:33 /tmp/sef1

 [root@server30 ~]# ls -lZ /usr/local/sef1
 -rw-r--r--. 1 root root unconfined_u:object_r:usr_t:s0 0 Jul 29 13:33 /usr/local/sef1
  • Create another file sef2 under /tmp and copy it to the /var/local directory using the --preserve=context option with the cp command.
 [root@server30 ~]# touch /tmp/sef2
 [root@server30 ~]# cp --preserve=context /tmp/sef2 /var/local/
  • Check and compare the contexts on both source and destination files.
 [root@server30 ~]# ls -lZ /tmp/sef2 /var/local/sef2
 -rw-r--r--. 1 root root unconfined_u:object_r:user_tmp_t:s0 0 Jul 29  13:35 /tmp/sef2
 -rw-r--r--. 1 root root unconfined_u:object_r:user_tmp_t:s0 0 Jul 29  13:36 /var/local/sef2

Lab: Flip SELinux Booleans

  • Check the current value of Boolean ssh_use_tcpd using the getsebool and sestatus commands.
 [root@server30 ~]# getsebool -a | grep ssh_use_tcpd
 ssh_use_tcpd --> off
  • Use the setsebool command and toggle the value of the directive.
 [root@server30 ~]# setsebool ssh_use_tcpd 1 
  • Confirm the new value with the getsebool, semanage, or sestatus command.
 [root@server30 ~]# getsebool -a | grep ssh_use_tcpd
 ssh_use_tcpd --> on
 [root@server30 ~]# sestatus -b | grep ssh_use_tcpd
 ssh_use_tcpd                                on
 [root@server30 ~]# semanage boolean -l | grep ssh_use_tcpd 
 ssh_use_tcpd                   (on   ,  off)  Allow ssh to use tcpd

The Linux Firewall

firewalld Zones

firewalld

  • The host-based firewall solution employed in RHEL uses a kernel module called netfilter together with a filtering and packet classification framework called nftables for policing the traffic movement.
  • It also supports other advanced features such as Network Address Translation (NAT) and port forwarding.
  • This firewall solution inspects, modifies, drops, or routes incoming, outgoing, and forwarded network packets based on defined rulesets.
  • Default host-based firewall management service in RHEL
  • Ability to add, modify, or delete firewall rules immediately without disrupting current network connections or restarting the service process.
  • Also allows to save rules persistently so that they are activated automatically at system reboots.
  • Lets you perform management operations at the command line using the firewall-cmd command, graphically using the web console, or manually by editing rules files.
  • Stores the default rules in files located in the /usr/lib/firewalld directory, and those that contain custom rules in the /etc/firewalld directory.
  • The default rules files may be copied to the custom rules directory and modified.

firewalld Zones

  • Easier and transparent traffic management.
  • Define policies based on the trust level of network connections and source IP addresses.
  • A network connection can be part of only one zone at a time;
  • A zone can have multiple network connections assigned to it.
  • Zone configuration may include services, ports, and protocols that may be open or closed.
  • May include rules for advanced configuration items such as masquerading, port forwarding, NAT’ing, ICMP filters, and rich language.
  • Rules for each zone are defined and manipulated independent of other zones.

Matching order: match the source IP to a zone that lists that address > match based on the zone the receiving interface is in > fall back to the default zone

  • firewalld inspects each incoming packet to determine the source IP address and applies the rules of the zone that has a match for the address.

  • In the event no zone configuration matches the address, it associates the packet with the zone that has the network connection defined, and applies the rules of that zone.

  • If neither works, firewalld associates the packet with the default zone, and enforces the rules of the default zone on the packet.

  • Several predefined zone files that may be selected or customized.

  • These files include templates for traffic that must be blocked or dropped, and for traffic that is:

    • public-facing
    • internal
    • external
    • home
    • public
    • trusted
    • work-related.
  • public zone is the default zone, and it is activated by default when the firewalld service is started.

Predefined zones sorted based on the trust level from trusted to untrusted:

trusted

  • Allow all incoming traffic

internal

  • Reject all incoming traffic except for what is allowed. Intended for use on internal networks.

home

  • Reject all incoming traffic except for what is allowed. Intended for use in homes.

work

  • Reject all incoming traffic except for what is allowed. Intended for use at workplaces.

dmz

  • Reject all incoming traffic except for what is allowed. Intended for use in publicly accessible demilitarized zones.

external

  • Reject all incoming traffic except for what is allowed.
  • Outgoing IPv4 traffic forwarded through this zone is masqueraded to look like it originated from the IPv4 address of an outgoing network interface.
  • Intended for use on external networks with masquerading enabled.

public

  • Reject all incoming traffic except for what is allowed.
  • Default zone for any newly added network interfaces.
  • Intended for use in public places.

block

  • Reject all incoming traffic with icmp-host-prohibited message returned.
  • Intended for use in secure places.

drop

  • Drop all incoming traffic without responding with ICMP errors.

  • Intended for use in highly secure places.

  • For all the predefined zones, outgoing traffic is allowed by default.

Zone Configuration Files

  • firewalld stores zone rules in XML format at two locations

    • system-defined rules in the /usr/lib/firewalld/zones directory
      • can be used as templates for adding new rules, or applied instantly to any available network connection
      • automatically copied to the /etc/firewalld/zones directory if it is modified with a management tool
    • user-defined rules in the /etc/firewalld/zones directory
  • can copy the required zone file to the /etc/firewalld/zones directory manually, and make the necessary changes.

  • The firewalld service reads the files saved in this location, and applies the rules defined in them.

View the system Zones:

[root@server30 ~]# ll /usr/lib/firewalld/zones
total 40
-rw-r--r--. 1 root root 312 Nov  6  2023 block.xml
-rw-r--r--. 1 root root 306 Nov  6  2023 dmz.xml
-rw-r--r--. 1 root root 304 Nov  6  2023 drop.xml
-rw-r--r--. 1 root root 317 Nov  6  2023 external.xml
-rw-r--r--. 1 root root 410 Nov  6  2023 home.xml
-rw-r--r--. 1 root root 425 Nov  6  2023 internal.xml
-rw-r--r--. 1 root root 729 Feb 21 23:44 nm-shared.xml
-rw-r--r--. 1 root root 356 Nov  6  2023 public.xml
-rw-r--r--. 1 root root 175 Nov  6  2023 trusted.xml
-rw-r--r--. 1 root root 352 Nov  6  2023 work.xml

View the public zone:

[root@server30 ~]# cat /usr/lib/firewalld/zones/public.xml 
<?xml version="1.0" encoding="utf-8"?>
<zone>
  <short>Public</short>
  <description>For use in public areas. You do not trust the other computers on networks to not harm your computer. Only selected incoming connections are accepted.</description>
  <service name="ssh"/>
  <service name="dhcpv6-client"/>
  <service name="cockpit"/>
  <forward/>
</zone>
  • See the manual pages for firewalld.zone for details on zone configuration files.

firewalld Services

  • For easier activation and deactivation of specific rules.
  • Preconfigured firewall rules delineated for various services and stored in different files.
  • The rules consist of necessary settings, such as the port number, protocol, and possibly helper modules, to support the loading of the service.
  • Can be added to a zone.
  • By default, firewalld blocks all traffic unless a service or port is explicitly opened.

Service Configuration Files

  • firewalld stores service rules in XML format at two locations:
  • system-defined rules in the /usr/lib/firewalld/services directory
    • Can be used as templates for adding new service rules, or activated instantly.
    • A system service configuration file is automatically copied to the /etc/firewalld/services directory if it is modified with a management tool.
  • user-defined rules in the /etc/firewalld/services directory.
    • You can copy the required service file to the /etc/firewalld/services directory manually, and make the necessary changes.
  • Service reads the files saved in this location, and applies the rules defined in them.

A listing of the system service files is presented below:

[root@server30 ~]# ll /usr/lib/firewalld/services
total 884
-rw-r--r--. 1 root root 352 Nov  6  2023 afp.xml
-rw-r--r--. 1 root root 399 Nov  6  2023 amanda-client.xml
-rw-r--r--. 1 root root 427 Nov  6  2023 amanda-k5-client.xml
-rw-r--r--. 1 root root 283 Nov  6  2023 amqps.xml
-rw-r--r--. 1 root root 273 Nov  6  2023 amqp.xml
-rw-r--r--. 1 root root 285 Nov  6  2023 apcupsd.xml
-rw-r--r--. 1 root root 301 Nov  6  2023 audit.xml
-rw-r--r--. 1 root root 436 Nov  6  2023 ausweisapp2.xml
-rw-r--r--. 1 root root 320 Nov  6  2023 bacula-client.xml
-rw-r--r--. 1 root root 346 Nov  6  2023 bacula.xml
-rw-r--r--. 1 root root 390 Nov  6  2023 bareos-director.xml
...
...

Shows the content of the ssh service file:

[root@server30 ~]# cat /usr/lib/firewalld/services/ssh.xml
<?xml version="1.0" encoding="utf-8"?>
<service>
  <short>SSH</short>
  <description>Secure Shell (SSH) is a protocol for logging into and executing commands on remote machines. It provides secure encrypted communications. If you plan on accessing your machine remotely via SSH over a firewalled interface, enable this option. You need the openssh-server package installed for this option to be useful.</description>
  <port protocol="tcp" port="22"/>
</service>
  • Has a name and description
  • Defines the port and protocol for the service.
  • See the manual pages for firewalld.service for details on service configuration files.
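A minimal sketch of adding a user-defined service this way (hypothetical service name myapp on TCP port 9080):

 cp /usr/lib/firewalld/services/ssh.xml /etc/firewalld/services/myapp.xml
 vim /etc/firewalld/services/myapp.xml    # set <short>, <description>, and <port protocol="tcp" port="9080"/>
 firewall-cmd --reload                    # make firewalld aware of the new service file
 firewall-cmd --permanent --add-service myapp
 firewall-cmd --reload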

firewalld Management

  • Listing, querying, adding, changing, and removing zones, services, ports, IP sources, and network connections. Three methods:
    • firewall-cmd
    • Web interface for graphical administration.
    • Edit Zone and service templates manually

firewall-cmd Command

  • Add or remove rules from the runtime configuration, or save any modifications to service configuration for persistence.
  • Supports numerous options for the management of zones, services, ports, connections, and so on

Common options

General

--state

  • Displays the running status of firewalld

--reload

  • Reloads firewall rules from zone files. All runtime changes are lost.

--permanent

  • Stores a change persistently. The change only becomes active after a service reload or restart.

Zones

--get-default-zone

  • Shows the name of the default/active zone

--set-default-zone

  • Changes the default zone for both runtime and permanent configuration

--get-zones

  • Prints a list of available zones

--get-active-zones

  • Displays the active zone and the assigned interfaces

--list-all

  • Lists all settings for a zone

--list-all-zones

  • Lists the settings for all available zones

--zone

  • Specifies the name of the zone to work on. Without this option, the default zone is used.

Services

--get-services

  • Prints predefined services

--list-services

  • Lists services for a zone

--add-service

  • Adds a service to a zone

--remove-service

  • Removes a service from a zone

--query-service

  • Queries for the presence of a service

Ports

--list-ports

  • Lists network ports

--add-port

  • Adds a port or a range of ports to a zone

--remove-port

  • Removes a port from a zone

--query-port

  • Queries for the presence of a port

Network Connections

--list-interfaces

  • Lists network connections assigned to a zone

--add-interface

  • Binds a network connection to a zone

--change-interface

  • Changes the binding of a network connection to a different zone

--remove-interface

  • Unbinds a network connection from a zone

IP Sources

--list-sources

  • Lists IP sources assigned to a zone

--add-source

  • Adds an IP source to a zone

--change-source

  • Changes an IP source

--remove-source

  • Removes an IP source from a zone

--add and --remove options

  • --permanent switch may be specified to ensure the rule is stored in the zone configuration file under the /etc/firewalld/zones directory for persistence.

Querying the Operational Status of firewalld

Check the running status of the firewalld service using either the systemctl or the firewall-cmd command.

[root@server20 ~]# firewall-cmd --state
running

[root@server20 ~]# systemctl status firewalld -l --no-pager
● firewalld.service - firewalld - dynamic firewall daemon
     Loaded: loaded (/usr/lib/systemd/system/firewalld.service; enabled; preset: enabled)
     Active: active (running) since Thu 2024-07-25 13:25:21 MST; 44min ago
       Docs: man:firewalld(1)
   Main PID: 829 (firewalld)
      Tasks: 2 (limit: 11108)
     Memory: 43.9M
        CPU: 599ms
     CGroup: /system.slice/firewalld.service
             └─829 /usr/bin/python3 -s /usr/sbin/firewalld --nofork --nopid

Jul 25 13:25:21 server20 systemd[1]: Starting firewalld - dynamic firewall daemon...
Jul 25 13:25:21 server20 systemd[1]: Started firewalld - dynamic firewall daemon.

Lab: Add Services and Ports, and Manage Zones

  • Determine the current active zone.
  • Add and activate a permanent rule to allow HTTP traffic on port 80
  • Add a runtime rule for traffic intended for TCP port 443 (the HTTPS service).
  • Add a permanent rule to the internal zone for TCP port range 5901 to 5910.
  • Confirm the changes and display the contents of the affected zone files.
  • Switch the default zone to the internal zone and activate it.

1. Determine the name of the current default zone:

 [root@server20 ~]# firewall-cmd --get-default-zone
 public

2. Add a permanent rule to allow HTTP traffic on its default port:

 [root@server20 ~]# firewall-cmd --permanent --add-service http
 success

The command made a copy of the public.xml file from /usr/lib/firewalld/zones directory into the /etc/firewalld/zones directory, and added the rule for the HTTP service.

3. Activate the new rule:

 [root@server20 zones]# firewall-cmd --reload
 success

4. Confirm the activation of the new rule:

 [root@server20 zones]# firewall-cmd --list-services
 cockpit dhcpv6-client http nfs ssh

5. Display the content of the default zone file to confirm the addition of the permanent rule:

 [root@server20 zones]# cat /etc/firewalld/zones/public.xml
 <?xml version="1.0" encoding="utf-8"?>
 <zone>
  <short>Public</short>
  <description>For use in public areas. You do not trust the other computers on networks to not harm your computer. Only selected incoming connections are accepted.</description>
  <service name="ssh"/>
  <service name="dhcpv6-client"/>
  <service name="cockpit"/>
  <service name="nfs"/>
  <service name="http"/>
  <forward/>
 </zone>

6. Add a runtime rule to allow traffic on TCP port 443 and verify:

 [root@server20 zones]# firewall-cmd --add-port 443/tcp
 success

 [root@server20 zones]# firewall-cmd --list-ports
 443/tcp

7. Add a permanent rule to the internal zone for TCP port range 5901 to 5910:

 [root@server20 zones]# firewall-cmd --add-port 5901-5910/tcp --permanent --zone internal
 success

8. Display the content of the internal zone file to confirm the addition of the permanent rule:

 [root@server20 zones]# cat /etc/firewalld/zones/internal.xml
 <?xml version="1.0" encoding="utf-8"?>
 <zone>
  <short>Internal</short>
  <description>For use on internal networks. You mostly trust the other computers on the networks to not harm your computer. Only selected incoming connections are accepted.</description>
  <service name="ssh"/>
  <service name="mdns"/>
  <service name="samba-client"/>
  <service name="dhcpv6-client"/>
  <service name="cockpit"/>
  <port port="5901-5910" protocol="tcp"/>
  <forward/>
 </zone>
  • The firewall-cmd command makes a backup of the affected zone file with a .old extension whenever an update is made to a zone.

9. Switch the default zone to internal and confirm:

 [root@server20 zones]# firewall-cmd --set-default-zone internal
 success
 [root@server20 zones]# firewall-cmd --get-default-zone
 internal

10. Activate the rules defined in the internal zone and list the port range added earlier:

  [root@server20 zones]# firewall-cmd --list-ports
  5901-5910/tcp

Lab: Remove Services and Ports, and Manage Zones

  • Remove the two permanent rules that were added in the last lab.
  • Switch the public zone back as the default zone, and confirm the changes.

1. Remove the permanent rule for HTTP from the public zone:

 [root@server20 zones]# firewall-cmd --remove-service=http --zone public --permanent
 success
  • Must specify public zone as it is not the current default.

2. Remove the permanent rule for ports 5901 to 5910 from the internal zone:

 [root@server20 zones]# firewall-cmd --remove-port 5901-5910/tcp --permanent
 success

3. Switch the default zone to public and validate:

 [root@server20 zones]# firewall-cmd --set-default-zone=public
 success
 [root@server20 zones]# firewall-cmd --get-default-zone 
 public

4. Activate the public zone rules, and list the current services:

 [root@server20 zones]# firewall-cmd --reload
 success
 [root@server20 zones]# firewall-cmd --list-services
 cockpit dhcpv6-client nfs ssh

Lab: Test the Effect of Firewall Rule

  • Remove the sshd service rule from the runtime configuration on server20
  • Try to access the server from server10 using the ssh command.

1. Remove the rule for the sshd service on server20:

 [root@server20 zones]# firewall-cmd --remove-service ssh
 success

2. Issue the ssh command on server10 to access server20:

 [root@server10 ~]# ssh 192.168.0.37
 ssh: connect to host 192.168.0.37 port 22: No route to host

3. Add the rule back for sshd on server20:

 [root@server20 zones]# firewall-cmd --add-service ssh
 success

4. Issue the ssh command on server10 to access server20. Enter “yes” if prompted, and supply the password when asked.

 [root@server10 ~]# ssh 192.168.0.37
 The authenticity of host '192.168.0.37 (192.168.0.37)' can't be established.
 ED25519 key fingerprint is SHA256:Z8nFu0Jj1ASZeXByiy3aAWHpUhGhUmDCr+Omu/iWTjs.
 This key is not known by any other names
 Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
 Warning: Permanently added '192.168.0.37' (ED25519) to the list of known hosts.
 root@192.168.0.37's password: 
 Web console: https://server20:9090/ or https://192.168.0.37:9090/

 Register this system with Red Hat Insights: insights-client --register
 Create an account or view all your systems at https://red.ht/insights-dashboard
 Last login: Thu Jul 25 13:37:47 2024 from 192.168.0.21

The Linux Firewall DIY Labs

Lab: Add Service to Firewall

  • Add and activate a permanent rule for HTTPS traffic to the default zone.
 [root@server20 ~]# firewall-cmd --add-service https --permanent
 success
 [root@server20 ~]# firewall-cmd --reload
 success
  • Confirm the change by viewing the zone’s XML file and running the firewall-cmd command.
 [root@server20 ~]# cat /etc/firewalld/zones/public.xml
 <?xml version="1.0" encoding="utf-8"?>
 <zone>
  <short>Public</short>
  <description>For use in public areas. You do not trust the other computers on networks to not harm your computer. Only selected incoming connections are accepted.</description>
  <service name="ssh"/>
  <service name="dhcpv6-client"/>
  <service name="cockpit"/>
  <service name="nfs"/>
  <service name="https"/>
  <forward/>
 </zone>

 [root@server20 ~]# firewall-cmd --list-services
 cockpit dhcpv6-client https nfs ssh

Lab: Add Port Range to Firewall

  • Add and activate a permanent rule for the UDP port range 8000 to 8005 to the trusted zone.
 [root@server20 ~]# firewall-cmd --add-port 8000-8005/udp --zone trusted --permanent
 success

 [root@server20 ~]# firewall-cmd --reload
 success
  • Confirm the change by viewing the zone’s XML file and running the firewall-cmd command.
 [root@server20 ~]# firewall-cmd --list-ports --zone trusted
 8000-8005/udp

 [root@server20 ~]# cat /etc/firewalld/zones/trusted.xml 
 <?xml version="1.0" encoding="utf-8"?>
 <zone target="ACCEPT">
  <short>Trusted</short>
  <description>All network connections are accepted.</description>
  <port port="8000-8005" protocol="udp"/>
  <forward/>
 </zone>

The Secure Shell Service

The OpenSSH Service

Secure Shell (SSH)

  • Delivers a secure mechanism for data transmission between source and destination systems over IP networks.
  • Designed to replace the old remote login programs that transmitted user passwords in clear text and data unencrypted.
  • Employs digital signatures for user authentication with encryption to secure a communication channel.
    • this makes it extremely hard for unauthorized people to gain access to passwords or the data in transit.
  • Monitors the data being transferred throughout a session to ensure integrity.
  • Includes a set of utilities, such as ssh and sftp, for remote users to log in, transfer files, and execute commands securely.

Common Encryption Techniques

  • Two common techniques: symmetric and asymmetric

Symmetric Technique

  • Secret key encryption.
  • Uses a single key called a secret key that is generated as a result of a negotiation process between two entities at the time of their initial contact.
  • Both sides use the same secret key during subsequent communication for data encryption and decryption.

Asymmetric Technique

  • Public key encryption
  • Combination of private and public keys
    • Randomly generated and mathematically related strings of alphanumeric characters
    • attached to messages being exchanged.
  • The client encrypts the information with a public key and the server decrypts it with the paired private key.
  • Private key must be kept secure since it is private to a single sender
  • the public key is disseminated to clients.
  • used for channel encryption and user authentication.

Authentication Methods

  • Encrypted channel is established between the client and server
  • Then additional negotiations take place between the two to authenticate the user trying to access the server.
  • Methods listed in the order in which they are attempted during the authentication process:
  1. GSSAPI-based ( Generic Security Service Application Program Interface) authentication
  2. Host-based authentication
  3. Public key-based authentication
  4. Challenge-response authentication
  5. Password-based authentication

GSSAPI-Based Authentication

  • Provides a standard interface that allows security mechanisms, such as Kerberos, to be plugged in.
  • OpenSSH uses this interface and the underlying Kerberos for authentication.
  • Exchange of tokens takes place between the client and server to validate user identity.

Host-Based Authentication

  • Allows a single user, a group of users, or all users on the client to be authenticated on the server.
  • A user may be configured to log in with a matching username on the server or as a different user that already exists there.
  • For each user that requires an automatic entry on the server, a ~/.shosts file is set up containing the client name or IP address, and, optionally, a different username.
  • The same rule applies to a group of users or all users on the client that require access to the server.
    • In that case, the setup is done in the /etc/ssh/shosts.equiv file on the server.

Private/Public Key-Based Authentication

  • Uses a private/public key combination for user authentication.
  • User on the client has a private key and the server stores the corresponding public key.
  • At the login attempt, the server prompts the user to enter the passphrase associated with the key and logs the user in if the passphrase and key are validated.

Challenge-Response Authentication

  • Based on the response(s) to one or more arbitrary challenge questions that the user has to answer correctly in order to be allowed to log in to the server.

Password-Based Authentication

  • Last fall back option.
  • Server prompts the user to enter their password.
  • Checks the password against the stored entry in the shadow file and allows the user in if the password is confirmed.

OpenSSH Protocol Version and Algorithms

  • V2
  • Supports various algorithms for data encryption and user authentication (digital signatures) such as:

RSA (Rivest-Shamir-Adleman)

  • More prevalent than the rest
  • Supports both encryption and authentication.

DSA and ECDSA (Digital Signature Algorithm and Elliptic Curve Digital Signature Algorithm)

  • Authentication only.
  • Used to generate public and private key pairs for the asymmetric technique.

OpenSSH Packages

  • Installed during OS installation

openssh

  • provides the ssh-keygen command and some library routines

openssh-clients

  • includes commands, such as sftp, ssh, and ssh-copy-id, and a client configuration file /etc/ssh/ssh_config

openssh-server

  • contains the sshd service daemon, server configuration file /etc/ssh/sshd_config, and library routines.

OpenSSH Server Daemon and Client Commands

  • OpenSSH server program is sshd

sshd

  • Preconfigured and operational on new RHEL installations

  • Allows remote users to log in to the system using an ssh client program such as PuTTY or the ssh command.

  • Daemon listens on TCP port 22

    • Documented in the /etc/ssh/sshd_config file with the Port directive.
  • Use sftp instead of scp due to scp's security flaws.

sftp

  • Secure remote file transfer program

ssh

  • Secure remote login command

ssh-copy-id

  • Copies public key to remote systems

ssh-keygen

  • Generates and manages private and public key pairs

Server Configuration File

/etc/ssh/sshd_config

/var/log/secure

  • log file is used to capture authentication messages.

View directives listed in /etc/ssh/sshd_config:

[root@server30 tmp]# cat /etc/ssh/sshd_config

Port

  • Port number to listen on. Default is 22.

Protocol

  • Default protocol version to use.

ListenAddress

  • Sets the local addresses the sshd service should listen on.
  • Default is to listen on all local addresses.

SyslogFacility

  • Defines the facility code to be used when logging messages to the /var/log/secure file. This is based on the configuration in the /etc/rsyslog.conf file. Default is AUTHPRIV.

LogLevel

  • Identifies the level of criticality for the messages to be logged. Default is INFO.

PermitRootLogin

  • Allows or disallows the root user to log in directly to the system. Default is yes.

PubKeyAuthentication

  • Enables or disables public key-based authentication. Default is yes.

AuthorizedKeysFile

  • Sets the name and location of the file containing a user’s authorized keys. Default is ~/.ssh/authorized_keys.

PasswordAuthentication

  • Enables or disables local password authentication. Default is yes.

PermitEmptyPasswords

  • Allows or disallows the use of null passwords. Default is no.

ChallengeResponseAuthentication

  • Enables or disables the challenge-response authentication mechanism. Default is yes.

UsePAM

  • Enables or disables user authentication via PAM. If enabled, only root will be able to run the sshd daemon. Default is yes.

X11Forwarding

  • Allows or disallows remote access to graphical applications. Default is yes.
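
A minimal sketch of tightening a few of these directives in /etc/ssh/sshd_config (the values shown are illustrative hardening choices, not the defaults):

PermitRootLogin no
PasswordAuthentication no
X11Forwarding no

Activate the change with: sudo systemctl restart sshd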

Client Configuration File

/etc/ssh/ssh_config

  • Local configuration file that directs how the client should behave; it is located in the /etc/ssh directory.
  • Contains preset directives that affect all outbound ssh communication.

View the default directive settings: [root@server30 tmp]# cat /etc/ssh/ssh_config

Host

  • Container that declares directives applicable to one host, a group of hosts, or all hosts.
  • Ends when another occurrence of Host or Match is encountered. Default is *, (all hosts)
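
A minimal ~/.ssh/config sketch using Host blocks (the hostname, address, and key file are examples):

Host server40
    HostName 192.168.0.40
    User user1
    IdentityFile ~/.ssh/id_rsa

Host *
    ForwardX11 no

With this in place, ssh server40 connects to 192.168.0.40 as user1 without extra options.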

ForwardX11

  • Enables or disables automatic redirection of X11 traffic over SSH connections.
  • Default is no.

PasswordAuthentication

  • Allows or disallows password authentication.
  • Default is yes.

StrictHostKeyChecking

  • Whether to add host keys (host fingerprints) to ~/.ssh/known_hosts when accessing a host for the first time

  • What to do when the keys of a previously accessed host mismatch with what is stored in ~/.ssh/known_hosts.

  • no:

    • Adds new host keys and ignores changes to existing keys.
  • yes:

    • Never adds host keys automatically and disallows connections to hosts with non-matching keys.
  • accept-new:

    • Adds new host keys and disallows connections to hosts with non-matching keys.
  • ask (default):

    • Prompts whether to add new host keys and disallows connections to hosts with non-matching keys.

IdentityFile

  • Defines the name and location of a file that stores a user’s private key for their identity validation.
  • Defaults are:
    • id_rsa, id_dsa, and id_ecdsa based on the type of algorithm used.
    • Corresponding public key files with .pub extension are also stored at the same directory location.

Port

  • Sets the port number to connect to on the remote system. Default is 22.

Protocol

  • Specifies the default protocol version to use.

~/.ssh/

  • does not exist by default
  • created when:
    • a user executes the ssh-keygen command for the first time to generate a key pair
    • A user connects to a remote ssh server and accepts its host key for the first time.
      • The client stores the server’s host key locally in a file called known_hosts along with its hostname or IP address.
      • On subsequent access attempts, the client will use this information to verify the server’s authenticity.
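
If a server is reinstalled and its host key changes, the stored fingerprint no longer matches and the connection is refused or warned about. A quick way to clear the stale entry (the hostname is an example):

ssh-keygen -R server40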

System Access and File Transfer

Lab: Access RHEL System from Another RHEL System

  • issue the ssh command as user1 on server10 to log in to server20.
  • Run appropriate commands on server20 for validation.
  • Log off and return to the originating system.

1. Issue the ssh command as user1 on server10:

[user1@server30 tmp]$ ssh server20

2. Issue the basic Linux commands whoami, hostname, and pwd to confirm that you are logged in as user1 on server20 and placed in the correct home directory:

[user1@server40 ~]$ whoami
user1
[user1@server40 ~]$ hostname
server40
[user1@server40 ~]$ pwd
/home/user1

3. Run the logout or the exit command or simply press the key combination Ctrl+d to log off server20 and return to server10:

[user1@server40 ~]$ exit
logout
Connection to server40 closed.

If you wish to log on as a different user such as user2 (assuming user2 exists on the target server server20), you may run the ssh command in either of the following ways:

[user1@server30 tmp]$ ssh -l user2 server40

[user1@server30 tmp]$ ssh user2@server40

Lab: Generate, Distribute, and Use SSH Keys

  • Generate a passwordless ssh key pair using RSA algorithm for user1 on server10.
  • display the private and public file contents.
  • Distribute the public key to server20 and attempt to log on to server20 from server10.
  • Show the log file message for the login attempt.

1. Log on to server10 as user1.

2. Generate RSA keys without a password (-N) and without detailed output (-q). Press Enter when prompted to provide the filename to store the private key.

[user1@server30 tmp]$ ssh-keygen -N "" -q
Enter file in which to save the key (/home/user1/.ssh/id_rsa): 

View the private key: [user1@server30 tmp]$ cat ~/.ssh/id_rsa

View the public key: [user1@server30 tmp]$ cat ~/.ssh/id_rsa.pub

3. Copy the public key file to server20 under /home/user1/.ssh directory.

[user1@server30 tmp]$ ssh-copy-id server40
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/user1/.ssh/id_rsa.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
user1@server40's password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'server40'"
and check to make sure that only the key(s) you wanted were added.
  • This command also creates or updates the known_hosts file on server10 and stores the fingerprints for server20 in it.

[user1@server30 tmp]$ cat ~/.ssh/known_hosts

4. On server10, run the ssh command as user1 to connect to server20. You will not be prompted for a password because there was none assigned to the ssh keys.

[user1@server30 tmp]$ ssh server40
Register this system with Red Hat Insights: insights-client --register
Create an account or view all your systems at https://red.ht/insights-dashboard
Last login: Sun Jul 21 01:20:17 2024 from 192.168.0.30

View this login attempt in the /var/log/secure file on server20: [user1@server40 ~]$ sudo tail /var/log/secure

Executing Commands Remotely Using ssh

  • Can use ssh command to run programs without remoting in:

Execute the hostname command on server20:

[user1@server30 tmp]$ ssh server40 hostname
server40

Run the nmcli command on server20 to show (s) the active network connections (c):

[user1@server30 tmp]$ ssh server40 nmcli c s
NAME    UUID                                  TYPE      DEVICE 
enp0s3  1c391bb6-20a3-4eb4-b717-1e458877dbe4  ethernet  enp0s3 
lo      175f8a4c-1907-4006-b838-eb43438d847b  loopback  lo 
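
The same idea extends to several servers with a simple shell loop (a sketch; the hostnames are examples):

for host in server30 server40; do
    ssh "$host" uptime
done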

sftp command

  • Interactive file transfer tool.

On server10, to connect to server20:

[user1@server30 tmp]$ sftp server40
Connected to server40.
sftp> 

Type ? at the prompt to list available commands along with a short description:

[user1@server30 tmp]$ sftp server40
Connected to server40.
sftp> ?
Available commands:
bye                                Quit sftp
cd path                            Change remote directory to 'path'
chgrp [-h] grp path                Change group of file 'path' to 'grp'
chmod [-h] mode path               Change permissions of file 'path' to 'mode'
chown [-h] own path                Change owner of file 'path' to 'own'
df [-hi] [path]                    Display statistics for current directory or
                                   filesystem containing 'path'
exit                               Quit sftp
get [-afpR] remote [local]         Download file
help                               Display this help text
lcd path                           Change local directory to 'path'
lls [ls-options [path]]            Display local directory listing
lmkdir path                        Create local directory
ln [-s] oldpath newpath            Link remote file (-s for symlink)
lpwd                               Print local working directory
ls [-1afhlnrSt] [path]             Display remote directory listing
lumask umask                       Set local umask to 'umask'
mkdir path                         Create remote directory
progress                           Toggle display of progress meter
put [-afpR] local [remote]         Upload file
pwd                                Display remote working directory
quit                               Quit sftp
reget [-fpR] remote [local]        Resume download file
rename oldpath newpath             Rename remote file
reput [-fpR] local [remote]        Resume upload file
rm path                            Delete remote file
rmdir path                         Remove remote directory
symlink oldpath newpath            Symlink remote file
version                            Show SFTP version
!command                           Execute 'command' in local shell
!                                  Escape to local shell
?                                  Synonym for help

Example:

sftp> ls
sftp> mkdir /tmp/dir10-20
sftp> cd /tmp/dir10-20
sftp> pwd
Remote working directory: /tmp/dir10-20
sftp> put /etc/group
Uploading /etc/group to /tmp/dir10-20/group
group                                       100% 1118     1.0MB/s   00:00    
sftp> ls -l
-rw-r--r--    1 user1    user1        1118 Jul 21 01:41 group
sftp> cd ..
sftp> pwd
Remote working directory: /tmp
sftp> cd /home/user1
sftp> get /usr/bin/gzip
Fetching /usr/bin/gzip to gzip
gzip                                        100%   90KB  23.0MB/s   00:00    
sftp> 
  • lcd, lls, lpwd, and lmkdir run on the local (source) system.
  • Other commands are also available. (See man pages)

Type quit at the sftp> prompt to exit the program when you’re done:

sftp> quit
[user1@server30 tmp]$ 
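
sftp can also run non-interactively with the -b option, which reads commands from a batch file; combined with key-based login, this is handy for scripted transfers (a minimal sketch; the filenames are examples):

echo "put /etc/hosts /tmp/hosts.copy" > /tmp/batch.txt
sftp -b /tmp/batch.txt server40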

Secure Shell Service DIY Labs

Lab: Establish Key-Based Authentication

  • Create user account user20 on both systems and assign a password.
[root@server40 ~]# adduser user20
[root@server40 ~]# passwd user20
Changing password for user user20.
New password: 
BAD PASSWORD: The password is shorter than 8 characters
Retype new password: 
passwd: all authentication tokens updated successfully.
  • As user20 on server40, generate a private/public key pair without a passphrase using the ssh-keygen command.
[user20@server40 ~]# ssh-keygen -N "" -q
Enter file in which to save the key (/root/.ssh/id_rsa): 
  • Distribute the public key to server30 with the ssh-copy-id command. [user20@server40 ~]# ssh-copy-id server30
  • Log on to server30 as user20 and accept the fingerprints for the server if presented.
[user20@server40 ~]# ssh server30
Activate the web console with: systemctl enable --now cockpit.socket

Register this system with Red Hat Insights: insights-client --register
Create an account or view all your systems at https://red.ht/insights-dashboard
Last login: Fri Jul 19 14:09:22 2024
[user20@server30 ~]# 
  • On subsequent log in attempts from server40 to server30, user20 should not be prompted for their password.

Lab: Test the Effect of PermitRootLogin Directive

  • As user1 with sudo on server30, edit the /etc/ssh/sshd_config file and change the value of the directive PermitRootLogin to “no”. [user1@server30 ~]$ sudo vim /etc/ssh/sshd_config

  • Use the systemctl command to activate the change.

[user1@server30 ~]$ systemctl restart sshd
==== AUTHENTICATING FOR org.freedesktop.systemd1.manage-units ====
Authentication is required to restart 'sshd.service'.
Authenticating as: root
Password: 
==== AUTHENTICATION COMPLETE ====
  • As root on server40, run ssh server30 (or use its IP). You’ll get a permission denied message.

(this didn’t work, I think it’s because I configured passwordless authentication on here)

  • Reverse the change on server30 and retry ssh server30. You should be able to log in.

Subsections of Desktop

Configure Fedora Desktop using Ansible

sudo dnf -y install vim

### Make vim default sudoer editor
echo "Defaults editor=/usr/bin/vim" | sudo tee /etc/sudoers.d/99_custom_editor

### remove password prompts when using sudo
sudo sed -i 's/^#\s*%wheel\s\+ALL=(ALL)\s\+NOPASSWD: ALL/%wheel  ALL=(ALL) NOPASSWD: ALL/' /etc/sudoers
sudo sed -i 's/^%wheel\s\+ALL=(ALL)\s\+ALL/# %wheel  ALL=(ALL) ALL/' /etc/sudoers

sudo dnf -y install https://mirrors.rpmfusion.org/free/fedora/rpmfusion-free-release-$(rpm -E %fedora).noarch.rpm \
    https://mirrors.rpmfusion.org/nonfree/fedora/rpmfusion-nonfree-release-$(rpm -E %fedora).noarch.rpm

sudo dnf5 install 'dnf5-command(groupinstall)'

sudo dnf -y groupinstall \
      "Development Tools"

/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
    echo 'eval "$(/home/linuxbrew/.linuxbrew/bin/brew shellenv)"' >> ~/.bash_profile
    eval "$(/home/linuxbrew/.linuxbrew/bin/brew shellenv)"

sudo dnf -y install ansible

Create the playbook: vim setup.yml

---
- name: Setup Development Environment
  hosts: localhost
  become: yes
  tasks:

    # Install Flatpak applications
    - name: Install Flatpak applications
      flatpak:
        name: "{{ item }}"
        state: present
      loop:
        - com.bitwarden.desktop
        - com.brave.Browser
        - org.gimp.GIMP
        - org.gnome.Snapshot
        - org.libreoffice.LibreOffice
        - org.remmina.Remmina
        - com.termius.Termius
        - com.slack.Slack
        - org.keepassxc.KeePassXC
        - md.obsidian.Obsidian
        - com.calibre_ebook.calibre
        - org.mozilla.Thunderbird
        - us.zoom.Zoom
        - org.wireshark.Wireshark
        - com.google.Chrome
        - io.github.shiftey.Desktop
        - io.github.dvlv.boxbuddyrs
        - com.github.tchx84.Flatseal
        - io.github.flattool.Warehouse
        - io.missioncenter.MissionCenter
        - com.github.rafostar.Clapper
        - com.mattjakeman.ExtensionManager
        - com.jgraph.drawio.desktop
        - org.adishatz.Screenshot
        - com.github.finefindus.eyedropper
        - com.github.johnfactotum.Foliate
        - com.obsproject.Studio
        - com.vivaldi.Vivaldi
        - com.vscodium.codium
        - io.podman_desktop.PodmanDesktop
        - org.kde.kdenlive
        - org.virt_manager.virt-manager
        - io.github.input_leap.input-leap
        - com.nextcloud.desktopclient.nextcloud

    # Install Development Tools group using dnf
    - name: Install Development Tools group
      dnf:
        name: "@Development Tools"
        state: present

    - name: Install @virtualization group package
      dnf:
        name: '@virtualization'
        state: present

    # Update dnf configuration
    - name: Update dnf configuration for fastestmirror and parallel downloads
      block:
        - lineinfile:
            path: /etc/dnf/dnf.conf
            line: "fastestmirror=True"
        - lineinfile:
            path: /etc/dnf/dnf.conf
            line: "max_parallel_downloads=10"
        - lineinfile:
            path: /etc/dnf/dnf.conf
            line: "defaultyes=True"
        - lineinfile:
            path: /etc/dnf/dnf.conf
            line: "keepcache=True"

    # Perform DNF update and install required packages
    - name: Update DNF and install required packages
      dnf:
        name:
          - gnome-screenshot
          - wireguard-tools
          - gnome-tweaks
          - gnome-themes-extra
          - telnet
          - nmap
        state: present

    # Set GNOME theme (using gsettings directly)
    - name: Set GNOME theme to Adwaita-dark
      shell: gsettings set org.gnome.desktop.interface gtk-theme "Adwaita-dark"
      become_user: "davidt"

    - name: Enable experimental Mutter features
      shell: gsettings set org.gnome.mutter experimental-features "['scale-monitor-framebuffer']"
      become_user: "davidt"

    # Install Go programming language
    - name: Install Go
      dnf:
        name: go
        state: present

    - name: Add Go to the PATH in .bashrc
      lineinfile:
        path: "/home/davidt/.bashrc"
        line: 'export PATH=$PATH:/usr/local/go/bin'
        state: present
      become_user: "davidt"

    - name: Source .bashrc
      shell: source /home/davidt/.bashrc
      become_user: "davidt"
      
    - name: Install pip using yum
      yum:
        name: python3-pip
        state: present
    

run the playbook: ansible-playbook setup.yml

Then reboot…

Then sign into nextcloud and begin sync.

Install Homebrew packages:

brew install hugo

Install gnome extensions:

pip install --user gnome-extensions-cli
gext install "appindicatorsupport@rgcjonas.gmail.com"
gext enable "appindicatorsupport@rgcjonas.gmail.com"
gext install "legacyschemeautoswitcher@joshimukul29.gmail.com"
gext install "blur-my-shell@aunetx"
gext install "dash-to-dock@micxgx.gmail.com"
gext install "gsconnect@andyholmes.github.io"
gext install "logomenu@aryan_k"
gext install "search-light@icedman.github.com"

Restore remmina connections cp ~/Nextcloud/remmina/* ~/.var/app/org.remmina.Remmina/data/remmina/

Restore vimrc cat ~/Nextcloud/Documents/dotfiles/vimrc.bak > ~/.vimrc

Restore ~/.bashrc: (if username is the same) cat ~/Nextcloud/Documents/dotfiles/bashrc.bak > ~/.bashrc

Git config

git config --global user.email "tdavetech@gmail.com"
git config --global user.name "linuxreader"

# Store git credentials (from inside a git directory):
git config credential.helper store

OneDrive

Install: sudo dnf -y install onedrive

Start: onedrive

Display config:

onedrive --display-config

Sync and prefer local copy: onedrive --sync --local-first

Enable the user-level service: systemctl --user enable --now onedrive

Force local to the cloud: onedrive --synchronize --force

Restore files from the cloud: onedrive --synchronize --resync

Add the force option to the user service file to ignore the big-delete flag: systemctl --user edit onedrive

[Service]
ExecStart=
ExecStart=/usr/bin/onedrive --monitor --verbose --force

Learning Touch Typing

Using Monkey Type to get my first speedtest results:

02/24/2025 55 WPM Learning to type with: https://www.typing.com/ https://www.typingclub.com/

Going to be practicing on these sites: https://10fastfingers.com/ https://www.keybr.com/ https://play.typeracer.com/

try ctrl+backspace to delete entire word

Day 1 (1 hour) Typing club Lessons 1-41

Day 2 (1 hour) Typing club Lessons 1-41

Day 3 (1 hour) Typing club Lessons 1-50

Day 4 (1 hour) Typing club Lessons 2-55

Day 5 (1 hour) Typing club Lessons 2-62

Day 6 (1 hour) Typing club Lessons 25-46 Lessons 2-10 Above 50 WPM at 100% Hands very cold and lack of sleep today.

Day 7 (1 hour) Typing Club Lessons 53 - 81

Day 8 (skipped) Busy with sick child

Day 9 (1.5 hours) Typing Club Lessons 55-107

Day 10 (30 minutes) Typing Club Lessons 108-119

Day 11 (1 hour) Typing Club Lessons 120-141

Day 12 (30 minutes) Typing Club Lessons 142-151

Day 13 (30 minutes) Typing Club Lessons 151-164

My Fedora Setup

When you first download Fedora Workstation, it’s going to be a little hard to figure out how to make it usable. Especially if you’ve never tinkered with Linux before.

This is because Fedora with Gnome desktop is a blank canvas. The point is to let you customize it to your needs. When I first install Fedora, I pull my justfile to install most of the programs I use:

`curl -sL https://raw.githubusercontent.com/linuxreader/dotfiles/main/dot_justfile -o ~/.justfile`

Install just and run the justfile

To run the just file, I then install the just program and run it on the justfile:

dnf install just

just first-install

This is my current .justfile:

first-install:
# Install flatpaks
	flatpak install --noninteractive \
      flathub com.bitwarden.desktop \
      flathub com.brave.Browser \
      flathub org.gimp.GIMP \
      flathub org.gnome.Snapshot \
      flathub org.libreoffice.LibreOffice \
      flathub org.remmina.Remmina \
      flathub com.termius.Termius \
      flathub com.slack.Slack \
      flathub org.keepassxc.KeePassXC \
      flathub md.obsidian.Obsidian \
      flathub com.calibre_ebook.calibre \
      flathub org.mozilla.Thunderbird \
      flathub us.zoom.Zoom \
      flathub org.wireshark.Wireshark \
      flathub com.nextcloud.desktopclient.nextcloud \
      flathub com.google.Chrome \
      flathub io.github.shiftey.Desktop \
      flathub io.github.dvlv.boxbuddyrs \
      flathub com.github.tchx84.Flatseal \
      flathub io.github.flattool.Warehouse \
      flathub io.missioncenter.MissionCenter \
      flathub org.gnome.World.PikaBackup \
      flathub com.github.rafostar.Clapper \
      flathub com.mattjakeman.ExtensionManager \
      flathub com.jgraph.drawio.desktop \
      flathub org.adishatz.Screenshot \
      flathub com.github.finefindus.eyedropper \
      flathub com.github.johnfactotum.Foliate \
      flathub com.usebottles.bottles \
      flathub com.obsproject.Studio \
      flathub net.lutris.Lutris \
      flathub com.vivaldi.Vivaldi \
      flathub com.vscodium.codium \
      flathub io.podman_desktop.PodmanDesktop \
      flathub org.kde.kdenlive

# Install Homebrew
    sudo dnf -y groupinstall \
      "Development Tools"

    /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
    echo 'eval "$(/home/linuxbrew/.linuxbrew/bin/brew shellenv)"' >> ~/.bash_profile
    eval "$(/home/linuxbrew/.linuxbrew/bin/brew shellenv)"

# Configure dnf for faster speeds
    sudo bash -c 'echo "fastestmirror=True" >> /etc/dnf/dnf.conf'
    sudo bash -c 'echo "max_parallel_downloads=10" >> /etc/dnf/dnf.conf'
    sudo bash -c 'echo "defaultyes=True" >> /etc/dnf/dnf.conf'
    sudo bash -c 'echo "keepcache=True" >> /etc/dnf/dnf.conf'

# Other software, updates, etc. 
    sudo dnf -y update
    sudo dnf install -y gnome-screenshot
    sudo dnf -y groupupdate core
    sudo flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
    sudo dnf install -y wireguard-tools
    sudo dnf install gnome-tweaks
    sudo dnf -y install https://mirrors.rpmfusion.org/free/fedora/rpmfusion-free-release-$(rpm -E %fedora).noarch.rpm \
    https://mirrors.rpmfusion.org/nonfree/fedora/rpmfusion-nonfree-release-$(rpm -E %fedora).noarch.rpm
    sudo dnf -y update
    sudo dnf install gnome-themes-extra
    gsettings set org.gnome.desktop.interface gtk-theme "Adwaita-dark"
    sudo dnf install -y go
    echo 'export PATH=$PATH:/usr/local/go/bin' >> ~/.bashrc
    source ~/.bashrc
    gsettings set org.gnome.mutter experimental-features "['scale-monitor-framebuffer']"

homebrew:
    brew install \
      chezmoi \
      hugo \
      virt-manager

Install Homebrew Stuff:

then run just homebrew after a reboot to install packages with brew

visudo config

Add to /etc/sudoers to make Vim default for visudo

Defaults editor=/usr/bin/vim

Virt Manager

sudo dnf install @virtualization
sudo vi /etc/libvirt/libvirtd.conf

Uncomment the line: unix_sock_group = "libvirt"

Adjust the UNIX socket permissions for the R/W socket: unix_sock_rw_perms = "0770"

Start the service: systemctl enable --now libvirtd

Add user to group:

sudo usermod -a -G libvirt $(whoami) && sudo usermod -a -G kvm $(whoami)

use the Tweaks app to set the appearance of Legacy Applications to ‘adwaita-dark’.

Configure Howdy

Howdy is a tool for using an IR webcam for authentication:

sudo dnf copr enable principis/howdy
sudo dnf --refresh install -y howdy

https://copr.fedorainfracloud.org/coprs/principis/howdy/ https://github.com/boltgolt/howdy

Seahorse

I was using this to fix the Login Keyring error that is common with Fedora, but it no longer works. sudo dnf -y install seahorse && seahorse

Applications > Passwords and Keys > Passwords > Right-click Login > Change Password to blank.

https://itsfoss.com/seahorse/

Initialize Chezmoi

Chezmoi lets you easily sync your dotfiles with GitHub and your other computers. Just init Chezmoi and add your GitHub username. This assumes your dotfiles in GitHub are saved in the proper format: chezmoi init --apply linuxreader

Add badname user (if needed)

If you need to use a username in the format firstname.lastname, use the badname flag with the adduser command. You will have to create a normal user first, because you can’t do this during the initial install:

$ adduser --badname firstname.lastname
$ sudo usermod -aG wheel username

# uncomment this line in the visudo file 
$ sudo visudo
%wheel ALL=(ALL) ALL

Delete the other user: $ userdel username

Additional DNF stuff:

Clear cache (do this occasionally): sudo dnf clean dbcache or sudo dnf clean all

Update DNF: sudo dnf -y update

Additional DNF commands: https://docs.fedoraproject.org/en-US/fedora/latest/system-administrators-guide/package-management/DNF/

Set up RPM Fusion

RPM Fusion give you more accessibility to various software packages.

Install:

sudo dnf -y install https://mirrors.rpmfusion.org/free/fedora/rpmfusion-free-release-$(rpm -E %fedora).noarch.rpm https://mirrors.rpmfusion.org/nonfree/fedora/rpmfusion-nonfree-release-$(rpm -E %fedora).noarch.rpm

https://rpmfusion.org/

AppStream metadata

Use AppStream to enable users to install packages using Gnome Software/KDE Discover:

sudo dnf -y groupupdate core

Flatpaks

To enable Flatpaks (this may no longer be needed):

flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo

https://flatpak.org/setup/Fedora

Set a hostname

Set a hostname for the system. This will show after next reboot: sudo hostnamectl set-hostname "New_Custom_Name"

Other Stuff

Here is some other stuff I install from the software center.

Input Leap

Input Leap lets you share a mouse and keyboard between two workstations.

I don’t know what this is or why it is here: installing Git and a bunch of other stuff?

sudo dnf install git cmake make gcc-c++ xorg-x11-server-devel \
                 libcurl-devel avahi-compat-libdns_sd-devel \
                 libXtst-devel qt5-qtbase qt5-qtbase-devel  \
                 qt5-qttools-devel libICE-devel libSM-devel \
                 openssl-devel libXrandr-devel libXinerama-devel

Virtual machine Manager

The best way to manage VMs on desktop.

Box Buddy

For managing containers.

Warehouse

Managing installed applications.

Mission Center

Task Manager like application.

Pika Backup

For backing up your desktop.

Clapper

Video player.

And some extensions installed through Extension Manager:

Install Gnome Screenshot tool:

Install the extention: https://extensions.gnome.org/extension/1112/screenshot-tool/

You also need to install from DNF for some reason: dnf install -y gnome-screenshot

Airpods not pairing Issue

If you ever have the issue where Airpods won’t pair. Remove them from the pairing list, force them in pairing mode, and pair them back. This can be made easy with bluetoothctl:

Just in case, restart the bluetooth service:

sudo systemctl restart bluetooth
systemctl status bluetooth 

Show devices:

# bluetoothctl

[bluetooth] $ devices
Device 42:42:42:42:42:42 My AirPods <-- grab the name here
[bluetooth] $ remove 42:42:42:42:42:42 

Now, make sure your Airpods are in the charging case, close the lid, wait 15 seconds, then open the lid. Press and hold the setup button on the case for up to 10 seconds. The status light should flash white, which means that your Airpods are ready to connect.

Pair them back:

[bluetooth] $ pair 42:42:42:42:42:42

Enable Fractional Scaling

This lets you change display scaling in smaller increments. You’ll need to make sure Wayland is turned on.

Turn on the feature then reboot: gsettings set org.gnome.mutter experimental-features "['scale-monitor-framebuffer']"

reboot

Remove pasted characters ^

Kept having pasted format characters ^ mess up my groove. Here’s the fix.

Open your inputrc file and add the line below: vim ~/.inputrc

"\C-v": ""

Install gimp

sudo dnf install gimp

Enable GIMP in the screenshot tool after install ({f}).

Install Arrows https://graphicdesign.stackexchange.com/questions/44797/how-do-i-insert-arrows-into-a-picture-in-gimp

Go to your home folder
Go to .config/GIMP
Go to the folder with a version number (2.10 for me)
Go to scripts
Download the arrow.scm file and place it here. Don't forget to unzip.
Open GIMP and draw a path

From Tools menu, select Arrow

h.265 Main 10 Profile Media Codec Error

Distrobox

See distrobox

ZSH For Humans

GitHub - romkatv/zsh4humans: A turnkey configuration for Zsh

Install Starcraft on Fedora

https://www.youtube.com/watch?v=eefsL9K2w4k

  1. Install your latest gpu driver https://github.com/lutris/docs/blob/master/InstallingDrivers.md

I am just running off of built in AMD graphics. So we just need to install support for Vulkan API sudo dnf install vulkan-loader vulkan-loader.i686

Install Wine $ sudo dnf -y install wine

Install Lutris Install the Flatpak version in software center.

Fedora Hotkeys

Terminal

Close Terminal Shift + Ctrl + Q

Previous Tab Ctrl + Page Up

Next Tab Ctrl + Page Down

Move to Specific Tab Alt + #

Full Screen F11

New Window Shift + Ctrl + t

Close Tab Shift + Ctrl + w

Desktop

Run a command super + F2

Switch Between Applications Alt + Esc

Move Window to Left Monitor Shift + Super + <-

Move Window to Right Monitor Shift + Super + ->

Minimize Current Window Super + H

Close Current Application Ctrl + Q

Browser

Firefox

Switch Between Tabs Ctrl + Tab

Switch Between Tabs in Reverse Ctrl + Shift + Tab

Detach Tab Extension

https://addons.mozilla.org/en-US/firefox/addon/detach-tab/

Detach Tab Ctrl + Shift + Space

Reattach Tab Ctrl + Shift + v

Slack

Installing via package manager because of screen sharing issue.

Upgrade dnf and download the slack rpm from the website.

Screen Sharing in Slack:

vim /usr/share/applications/slack.desktop

Update the exec line to:

Exec=/usr/bin/slack --enable-features=WebRTCPipeWireCapturer %U

Actual Budget

https://github.com/actualbudget/actual

Set Vim to default editor for visudo

Add Defaults editor=/usr/bin/vim to the top of the visudo file.

Silverblue

Automated setup https://universal-blue.org/

You get all the benefits of using containers. Separates system-level packages from applications.

System Level

  • Desktop, kernel?
  • Layering
    • Apps at system level because containers aren’t as developed yet
    • Locks to the fedora version you are on

Layered package examples

- gnome shell extensions
- distrobox

Uses rpm-ostree? https://coreos.github.io/rpm-ostree/administrator-handbook/

Flatpaks

Remove the Fedora Flatpak remotes and use the Flathub repos instead https://flatpak.org/setup/Fedora

Systemd unit for automatic Flatpak updates

Update every 4 hours to mirror Ubuntu

Flatseal adjusts permissions of Flatpaks

check out apps.gnome.org

Rebase into Universal Blue

Rebase onto the “unsigned” image then reboot: rpm-ostree rebase ostree-unverified-registry:ghcr.io/ublue-os/silverblue-main:39

Then the signed image and reboot: rpm-ostree rebase ostree-image-signed:docker://ghcr.io/ublue-os/silverblue-main:39

Then do what you normally do after install: open the app store and install things via the GUI, or just have a .justfile to install all flatpak/homebrew packages: https://universal-blue.discourse.group/t/introduction-to-just/42 https://just.systems/man/en/chapter_1.html

My justfile

import "/usr/share/ublue-os/justfile"  
# You can add your own commands here! For documentation, see: [https://ublue.it/guide/just/](https://ublue.it/guide/just/)  
  
first-install:  
    flatpak install \  
      flathub com.bitwarden.desktop \  
      flathub com.brave.Browser \  
      flathub com.discordapp.Discord \  
      flathub net.cozic.joplin_desktop \  
      flathub org.gimp.GIMP \  
      flathub org.gnome.Snapshot \  
      flathub org.libreoffice.LibreOffice \  
      flathub org.remmina.Remmina \  
      flathub com.termius.Termius \  
      flathub net.devolutions.RDM \  
      flathub com.slack.Slack \  
      flathub org.keepassxc.KeePassXC \  
      flathub md.obsidian.Obsidian \  
      flathub com.calibre_ebook.calibre \  
      flathub com.logseq.Logseq \  
      flathub org.mozilla.Thunderbird \  
      flathub us.zoom.Zoom \  
      flathub org.wireshark.Wireshark \  
      flathub com.nextcloud.desktopclient.nextcloud \  
      flathub com.google.Chrome  
  
    brew install \  
      ansible \  
      chezmoi \  
      neovim  \      
      onedrive \  
      wireguard-tools

Set up github dotfiles repo

Install chezmoi and initialize: chezmoi init

Sync with Chezmoi: https://www.chezmoi.io/quick-start/

Add dotfiles chezmoi add ~/.bashrc

Edit a dotfile chezmoi edit ~/.bashrc

See changes chezmoi diff

Apply changes chezmoi -v apply

to sync chezmoi with git:

chezmoi cd
git remote add origin https://github.com/$GITHUB_USERNAME/dotfiles.git 
$ git push -u origin main 
$ exit

For subsequent git pushes:

git commit -a -m "commit" && git push

From a second machine:

Install all dotfiles with a single command: chezmoi init --apply https://github.com/$GITHUB_USERNAME/dotfiles.git

If you use GitHub and your dotfiles repo is called dotfiles then this can be shortened to: $ chezmoi init --apply $GITHUB_USERNAME

See a list of full commands: chezmoi help

Or you can initialize and choose what you want: chezmoi init https://github.com/$GITHUB_USERNAME/dotfiles.git

See what changes are awaiting: chezmoi diff

Apply changes: chezmoi apply -v

can also edit a file before applying: chezmoi edit $FILE

Or merge the current file with new file: chezmoi merge $FILE

From any machine, you can pull and apply changes from your repo: chezmoi update -v

Add the justfile: chezmoi add .justfile

Install Connect Tunnel

Download from the website. Install Java: rpm-ostree install java

Run the Connect Tunnel install script.

Commands located in /var/usrlocal/Aventail must be run as root: sudo ./startctui.sh

Subsections of Files

Advanced File Management

Permission Classes and Types

Permission classes

  • user (u)
  • group (g)
  • other (o) (public)
  • all (a) <- all combined

Permission types

  • r,w,x
  • works differently on files and directories
  • hyphen (-) represents no permissions set

ls results permissions groupings

    • rwx rw- r--
      • user (owner), group, and other (public)

ls results first character meaning

  • - regular file
  • d directory
  • l symbolic link
  • c character device file
  • b block device file
  • p named pipe
  • s socket

Modifying Access Permission Bits

chmod command

  • Modify permissions using symbolic or octal notation.
  • Used by root or the file owner.

Flags: -v (verbose).

Symbolic notation

  • Letters (ugo/rwx) and symbols (+, -, =) used to add, revoke, or assign permission bits.

Octal Notation

Three-digit numbering system; each digit ranges from 0 to 7:
0 ---
1 --x
2 -w-
3 -wx
4 r--
5 r-x
6 rw-
7 rwx
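
The two notations are interchangeable. For example, both commands below set rw- for the user, r-- for the group, and no permissions for others (file1 is an example name):

chmod 640 file1
chmod u=rw,g=r,o= file1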

Default Permissions

  • Calculated based on the umask (user mask) value subtracted from the initial permissions value.

umask

  • Three-digit value (octal or symbolic) that refers to read, write, and execute permissions for owner, group, and public.
  • Default umask value is 0022 for the root user and 0002 for normal users.
  • The left-most 0 has no significance.
  • If umask is set to 000, files get a maximum of 666.
  • If the initial permissions are 666 and the umask is 002, the default permissions are 664 (666 - 002).
  • Any new files or directories created after changing the umask will have the new default permissions set.
  • umask settings are lost when you log off. Add it to the appropriate startup file to make it permanent.

Defaults

  • files 666 rw-rw-rw-
  • directories 777 rwxrwxrwx
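
A quick worked example, assuming a umask of 027: files start at 666 and directories at 777, so new files get 640 (the execute bit is never granted to new files) and new directories get 750:

umask 027
touch f1     # -rw-r----- (640)
mkdir d1     # drwxr-x--- (750)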

umask command

Options

  • -S symbolic form

Special Permission Bits


  • Three special permission bits exist for executable files and directories, affecting non-root users:
    • setuid
    • setgid
    • sticky
  • setuid
    • Set on executables to provide non-owners the ability to run them with the privileges of the owning user.
    • May be set on directories, but will have no effect there.
    • Example: the su command.
    • Shows an “s” in the ls -l listing at the end of the owner’s permissions.
    • If the file already has the “x” bit set for the user, the long listing will show a lowercase “s”, otherwise it will list it with an uppercase “S”.
  • setgid
    • Set on executables to provide non-group members the ability to run them with the privileges of the owning group.
    • May be set on shared directories
      • allows files and subdirectories created underneath to automatically inherit the directory’s owning group.
      • saves group members who are sharing the directory contents from changing the group ID for every new file and subdirectory that they add.
    • The write command has this set by default so a member of the tty group can run it. If the file already has the “x” bit set for the group, the long listing will show a lowercase “s”, otherwise it will list it with an uppercase “S”.
  • Sticky bit
    • May be set on public directories to inhibit file deletion by non-owners.
    • May be set on files, but will have no effect there.
    • Set on /tmp and /var/tmp by default.
    • Shows as the letter “t” in the other permission field.
    • If the directory already has the “x” bit set for public, the long listing will show a lowercase “t”, otherwise it will list it with an uppercase “T”. (See the sketch below.)
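
A minimal sketch of applying setgid and the sticky bit on a shared directory (the path and the dba group are examples):

mkdir /sdir
chgrp dba /sdir
chmod 2770 /sdir    # setgid: new content inherits the dba owning group
chmod +t /sdir      # sticky: only file owners (and root) can delete
ls -ld /sdir        # drwxrws--T ... /sdir

The uppercase “T” appears because the execute bit is not set for public, as described above.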

Access Control Lists (ACLs)

  • Setting a default ACL on a directory allows content sharing among users without having to modify access on each new file and subdirectory.

  • Extra permissions that can be set on files and directories.

  • Define permissions for named user and named groups.

  • Configured the same way on both files and directories.

  • Named Users

    • May or may not be a part of the same group.
  • 2 different groups of ACLs. Default ACLs and Access ACLs.

    • Access ACLs
      • Set on individual files and directories
    • Default ACLs
      • Applied on directories
      • files and subdirectories inherit the ACL
      • Execute bit must be set on the directory for public.
      • Files receive the shared directory’s default ACLs as their access ACLs, limited by the mask.
      • Subdirectories receive both default ACLs and access ACLs as they are.
  • A “+” at the end of ls -l listing indicates ACL is set

    • -rw-rw-r--+

ACL Commands

getfacl

  • Display ACL settings
    • Displays:
    • name of file
    • owner
    • owning group
    • Permissions
      • colon characters save space for named user/group (or UID/GID) when extended permissions are set.
      • Example: user:1000:r--
        • the named user with UID 1000, who is neither the file owner nor a member of the owning group, is allowed read-only access to this file.
      • Example: group:dba:rw-
        • gives the named group (dba) read and write access to the file.

setfacl

  • Set, modify, substitute, or delete ACL settings.
  • If you want to give read and write permissions to a specific user (user1) and change the mask to read-only at the same time, the setfacl command will allocate the permissions as mentioned; however, the effective permissions for the named user will only be read-only.

u:UID:perms

  • named user must exist in /etc/passwd
  • if no user specified, permissions are given to the owner of the file/directory

g:GID:perms

  • Named group must exist in /etc/group
  • If no group specified, permissions are given to the owning group of the file/directory

o:perms

  • Neither owner or owning group

m:perms

  • Maximum permissions for named user or named group

Switches

Switch Description
-b Remove all Access ACLs
-d Applies to default ACLs
-k Removes all default ACLs
-m Sets or modifies ACLs
-n Prevent auto mask recalculation
-R Apply Recursively to directory
-x Remove Access ACL
-c Display output without header

Mask Value

  • Determine maximum allowable permissions for named user or named group
  • Mask value displayed on separate line in getfacl output
  • Mask is recalculated every time an ACL is modified unless value is manually entered.
  • Overrides the set ACL value.
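
A minimal sketch of setting a default ACL on a shared directory so new content inherits the access automatically (the path and user100 are examples):

mkdir /tmp/shared
setfacl -d -m u:user100:rw /tmp/shared   # -d applies to the default ACL
touch /tmp/shared/newfile
getfacl -c /tmp/shared/newfile           # newfile inherits user100:rw- as an access ACL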

Find Command

  • Search files and display the full path.
  • Execute command on search results.
  • Different search criteria
    • name
    • part name
    • ownership
    • owning group
    • permissions
    • inode number
    • last access
    • modification time in days or minutes
    • size
    • file type
  • Command syntax
    • {find} + {path} + {search option} + {action}
  • Options
    • -name / -iname (search by name)
    • -user / -group (UID / GID)
    • -perm (permissions)
    • -inum (inode)
    • -atime/-amin (access time)
    • -mtime/-mmin (modification time)
    • -size / -type (size / type)
  • Action
    • copy, erase, rename, change ownership, modify permissions
      • -exec {} \;
        • replaces {} for each filename as it is found. The semicolon character (;) marks the termination of the command and it is escaped with the backslash character (\).
      • -ok {} \;
        • same as exec but requires confirmation.
    • -delete
    • -print <- default

Advanced File Management Labs

Lab: find stuff

  1. Create file10 and search for it.
[vagrant@server1 ~]$ sudo touch /root/file10
[vagrant@server1 ~]$ sudo find / -name file10 -print
/root/file10
  1. Perform a case insensitive search for files and directories in /dev that begin with “usb” followed by any characters.
[vagrant@server1 ~]$ find /dev -iname usb*
/dev/usbmon0
  1. Find files smaller than 1MB (-1M) in size (-size) in the root user’s home directory (~).
[vagrant@server1 etc]$ find ~ -size -1M
  1. Search for files larger than 40MB (+40M) in size (-size) in the /usr directory:
[vagrant@server1 etc]$ sudo find /usr -size +40M
/usr/share/GeoIP/GeoLite2-City.b
  1. Find files in the entire root file system (/) with ownership (-user) set to user daemon and owning group (-group) set to any group other than (-not or ! for negation) user1:
[vagrant@server1 etc]$ sudo find / -user daemon -not -group user1
  1. Search for directories (-type) by the name “src” (-name) in /usr at a maximum of two subdirectory levels below (-maxdepth):
[vagrant@server1 etc]$ sudo find /usr -maxdepth 2 -type d -name src
/usr/local/src
/usr/src
  1. Run the above search but at least three subdirectory levels beneath /usr, substitute -maxdepth 2 with -mindepth 3.
[vagrant@server1 etc]$ sudo find /usr -mindepth 3 -type d -name src
/usr/src/kernels/4.18.0-425.3.1.el8.x86_64/drivers/gpu/drm//display/dmub/src
/usr/src/kernels/4.18.0-425.3.1.el8.x86_64/tools/usb/usbip/src
  1. Find files in the /etc directory that were modified (-mtime) more than (the + sign) 2000 days ago:
[vagrant@server1 etc]$ sudo find /etc -mtime +2000
/etc/libuser.conf
/etc/xattr.conf
/etc/whois.conf
  1. Run the above search for files that were modified exactly 12 days ago, replace “+2000” with “12”.
[vagrant@server1 etc]$ sudo find /etc -mtime 12
  1. To find files in the /var/log directory that have been modified (-mmin) in the past (the - sign) 100 minutes:
[vagrant@server1 etc]$ sudo find /var/log -mmin -100
/var/log/rhsm/rhsmcertd.log
/var/log/rhsm/rhsm.log
/var/log/audit/audit.log
/var/log/dnf.librepo.log
/var/log/dnf.rpm.log
/var/log/sa
/var/log/sa/sa16
/var/log/sa/sar15
/var/log/dnf.log
/var/log/hawkey.log
/var/log/cron
/var/log/messages
/var/log/secure
  1. Run the above search for files that have been modified exactly 25 minutes ago, replace “-100” with “25”.
[vagrant@server1 etc]$ sudo find /var/log -mmin 25
  1. To search for block device files (-type) in the /dev directory with permissions (-perm) set to exactly 660:
[vagrant@server1 etc]$ sudo find /dev -type b -perm 660
/dev/dm-1
/dev/dm-0
/dev/sda2
/dev/sda1
/dev/sda
  1. Search for character device files (-type c) in the /dev directory that are writable (-perm -222) by the owner, group, and others (the read and execute bits are ignored in the match):
[vagrant@server1 etc]$ sudo find /dev -type c -perm -222
  1. Find files in the /etc/systemd directory that are executable by at least their owner or group members:
[vagrant@server1 etc]$ sudo find /etc/systemd -perm /110
  1. Search for symlinked files (-type l) in /usr with at least read and write permissions (-perm -ug=rw) for the owner and owning group:
 sudo find /usr -type l -perm -ug=rw
  1. Search for directories in the entire directory tree (/) by the name “core” (-name) and list them (ls -ld) as they are discovered without prompting for user confirmation (-exec):
 [vagrant@server1 etc]$ sudo find / -name core -exec ls -ld {} \;
  1. Use the -ok switch to prompt for confirmation before it copies each matched file (-name) in /etc/sysconfig to /tmp:
 sudo find /etc/sysconfig -name '*.conf' -ok  cp {} /tmp \;

Lab: Display ACL and give permissions

  1. Create an empty file aclfile1 in /tmp and display the ACLs on it:
 cd /tmp
 touch aclfile1
 getfacl aclfile1
  1. Give rw permissions to user1 but with a mask of read-only, and view the results.
 setfacl -m u:user1:rw,m:r aclfile1
  1. Promote the mask value to include write bit and verify:
 setfacl -m m:rw aclfile1
 getfacl -c aclfile1

Lab: Identify, Apply, and Erase Access ACLs

  1. Switch to user1 and create file acluser1 in /tmp:
 su - user1
 cd /tmp
 touch acluser1
  1. Use ls and getfacl to check existing ACL entries:
 ls -l acluser1
 getfacl acluser1 -c
  1. Allocate rw permissions to user100 with setfacl in octal form:
 setfacl -m u:user100:6 acluser1
  1. Run ls (note the + at the end of the permission string, indicating ACLs) and getfacl to verify:
 ls -l acluser1
 getfacl -c acluser1
  1. Open another terminal as user100 and open the file and edit it.

  2. Add user200 with full rwx permissions to acluser1 using the symbolic notation and then show the updated ACL settings:

 setfacl -m u:user200:rwx acluser1
 getfacl -c acluser1
  1. Delete the ACL entries set for user200 and validate:
 setfacl -x u:user200 acluser1
 getfacl acluser1 -c
  1. Delete the rest of the ACLs:
 setfacl -b acluser1
  1. Use the ls and getfacl commands and confirm for the ACLs removal:
 ls -l acluser1
 getfacl acluser1 -c
  1. create group aclgroup1
 groupadd -g 8000 aclgroup1
  1. add this group as a named group along with the two named users (user100 and user200).

Lab: Apply, Identify, and erase default ACLs

  1. Switch or log in as user1 and create a directory projects in /tmp:
 su - user1
 cd /tmp
 mkdir projects
  1. Use the getfacl command for an initial look at the permissions on the directory:
 getfacl -c projects
  1. Allocate default read, write, and execute permissions to user100 and user200 on the directory. Use both octal and symbolic notations and the -d (default) option with the setfacl command.
 setfacl -dm u:user100:7,u:user200:rwx projects/
 getfacl -c projects/
  1. Create a subdirectory prjdir1 under projects and observe the ACL inheritance:
 mkdir prjdir1
 getfacl -c prjdir1
  1. Create a file prjfile1 under projects and observe the ACL inheritance:
 touch prjfile1
 getfacl -c prjfile1
  1. log in as one of the named users, change directory into /tmp/projects, and edit prjfile1 (add some random text). Then change into the prjdir1 and create file file100.
 su - user100
 cd /tmp/projects
 vim prjfile1
 ls -l prjfile1
 cd prjdir1
 touch file100
 pwd
  1. Delete all the default ACLs from the projects directory as user1 and confirm:
 exit
 su - user1
 cd /tmp
 setfacl -k projects
 getfacl -c projects
  1. create a group such as aclgroup2 by running groupadd -g 9000 aclgroup2 as the root user and repeat this exercise by adding this group as a named group along with the two named users (user100 and user200).

Lab: Modify Permission Bits Using Symbolic Form

  1. Add an execute bit for the owner and a write bit for group and public
 [vagrant@server1 ~]$ chmod u+x permfile1 -v
 mode of 'permfile1' changed from 0444 (r--r--r--) to 0544 (r-xr--r--)
 [vagrant@server1 ~]$ chmod -v go+w permfile1
 mode of 'permfile1' changed from 0544 (r-xr--r--) to 0566 (r-xrw-rw-)
  1. Revoke the write bit from public
 [vagrant@server1 ~]$ chmod -v o-w permfile1
 mode of 'permfile1' changed from 0566 (r-xrw-rw-) to 0564 (r-xrw-r--)
 [vagrant@server1 ~]$ chmod -v a=rwx permfile1
 mode of 'permfile1' changed from 0564 (r-xrw-r--) to 0777 (rwxrwxrwx)
  1. Revoke write from the owning group and write and execute bits from public.
 [vagrant@server1 ~]$ chmod g-w,o-wx permfile1 -v
 mode of 'permfile1' changed from 0777 (rwxrwxrwx) to 0754 (rwxr-xr--)

Lab: Modify Permission Bits Using Octal Form

  1. Read only for user, group, and other:
 [vagrant@server1 ~]$ touch permfile2
 [vagrant@server1 ~]$ chmod 444 permfile2
 [vagrant@server1 ~]$ ls -l permfile2
 -r--r--r--. 1 vagrant vagrant 0 Feb  4 12:22 permfile2
  1. Add an execute bit for the owner:
 [vagrant@server1 ~]$ chmod -v 544 permfile2
 mode of 'permfile2' changed from 0444 (r--r--r--) to 0544 (r-xr--r--)
  1. Add a write permission bit for group and public:
 [vagrant@server1 ~]$ chmod -v 566 permfile2
 mode of 'permfile2' changed from 0544 (r-xr--r--) to 0566 (r-xrw-rw-)
  1. Revoke the write bit for public:
 [vagrant@server1 ~]$ chmod -v 564 permfile2
 mode of 'permfile2' changed from 0566 (r-xrw-rw-) to 0564 (r-xrw-r--)
  1. Assign read, write, and execute permission bits to all three user categories:
 [vagrant@server1 ~]$ chmod -v 777 permfile2
 mode of 'permfile2' changed from 0564 (r-xrw-r--) to 0777 (rwxrwxrwx)
  1. Run the umask command without any options and it will display the current umask value in octal notation:
 [vagrant@server1 ~]$ umask
 0002
  1. Symbolic form
 [vagrant@server1 ~]$ umask -S
 u=rwx,g=rwx,o=rx
  1. Set the umask so all new files and directories get 640 and 750 permissions, respectively (octal or symbolic form):
 umask 027
 umask u=rwx,g=rx,o=
  1. Test new umask (666-027=640) (777-027=750)
 [vagrant@server1 ~]$ touch tempfile1
 [vagrant@server1 ~]$ ls -l tempfile1
 -rw-r-----. 1 vagrant vagrant 0 Feb  5 12:09 tempfile1
 [vagrant@server1 ~]$ mkdir tempdir1
 [vagrant@server1 ~]$ ls -ld tempdir1
 drwxr-x---. 2 vagrant vagrant 6 Feb  5 12:10 tempdir1

Lab: View suid bit on su command

 [vagrant@server1 ~]$ ls -l /usr/bin/su
 -rwsr-xr-x. 1 root root 50152 Aug 22 10:08 /usr/bin/su

Lab: Test the Effect of setuid Bit on Executable Files

  1. Open 2 terminal windows. Switch to user1 in terminal1
 [vagrant@server1 ~]$ su - user1
 Password:
 Last login: Sun Feb  5 12:37:12 UTC 2023 on pts/1
  1. Switch to root on terminal2
 sudo su - root
  1. T1 Revoke the setuid bit from /usr/bin/su
 chmod -v u-s /usr/bin/su
  1. T2 log off as root
 ctrl+d
  1. Try to log in as root from both terminals
 [user1@server1 ~]$ su - root
 Password:
 su: Authentication failure
  1. T1 restore the setuid bit
 [vagrant@server1 ~]$ sudo chmod -v +4000 /usr/bin/su
 mode of '/usr/bin/su' changed from 0755 (rwxr-xr-x) to 4755 (rwsr-xr-x)

Lab: Test the Effect of setgid Bit on Executable Files

  1. Log into two terminals over ssh: T1 as root, T2 as user1.

  2. T2 list users currently logged in

who
  1. T2 send a message to root
write root
  1. T1 revoke setgid from /usr/bin/write
chmod g-s /usr/bin/write -v
  1. Try to write root
[user1@server1 ~]$ write root
write: effective gid does not match group of /dev/pts/0
  1. Restore the setgid bit on /usr/bin/write:
[root@server1 ~]# sudo chmod -v +2000 /usr/bin/write
mode of '/usr/bin/write' changed from 0755 (rwxr-xr-x) to 2755 (rwxr-sr-x)
  1. Test
write root

Lab: Set up Shared Directory for Group Collaboration

  1. set up 2 test users
 [root@server1 ~]# adduser user100
 [root@server1 ~]# adduser user200
  1. Add group sgrp with GID 9999 with the groupadd command:
 [root@server1 ~]# groupadd -g 9999 sgrp
  1. Add user100 and user200 as members to sgrp using the usermod command:
 [root@server1 ~]# usermod -aG sgrp user100
 [root@server1 ~]# usermod -aG sgrp user200
  1. Create /sdir directory
 [root@server1 ~]# mkdir /sdir
  1. Set ownership and owning group on /sdir to root and sgrp, using the chown command:
 [root@server1 ~]# chown root:sgrp /sdir
  1. Set the setgid bit on /sdir using the chmod command:
 [vagrant@server1 ~]$ sudo chmod g+s /sdir
  1. Add write permission to the group members on /sdir and revoke all permissions from public:
 [root@server1 ~]# chmod g+w,o-rx /sdir
  1. Verify
 [root@server1 ~]# ls -ld /sdir
 drwxrws---. 2 root sgrp 6 Feb 13 15:49 /sdir
  1. Switch or log in as user100 and change to the /sdir directory:
 [root@server1 ~]# su - user100
 [user100@server1 ~]$ cd /sdir
  1. Create a file and check the owner and owning group on it:
 [user100@server1 sdir]$ touch file100
 [user100@server1 sdir]$ ls -l file100
 -rw-rw-r--. 1 user100 sgrp 0 Feb 10 22:41 file100
  1. Log out as user100, and switch or log in as user200 and change to the /sdir directory:
 [root@server1 ~]# su - user200
 [user200@server1 ~]$ cd /sdir
  1. Create a file and check the owner and owning group on it:
 [user200@server1 sdir]$ touch file200
 [user200@server1 sdir]$ ls -l file200
 -rw-rw-r--. 1 user200 sgrp 0 Feb 13 16:01 file200

Lab: View “t” in permissions for sticky bit

 [user200@server1 sdir]$ ls -l /tmp /var/tmp -d
 drwxrwxrwt. 8 root root 185 Feb 13 16:12 /tmp
 drwxrwxrwt. 4 root root 113 Feb 13 16:00 /var/tmp

Lab: Test the effect of Sticky Bit

  1. Switch to user100 and change to the /tmp directory
[user100@server1 sdir]$ cd /tmp
  1. Create a file called stickyfile
[user100@server1 tmp]$ touch stickyfile
  1. Try to delete the file as user200
[user200@server1 tmp]$ rm stickyfile
rm: remove write-protected regular empty file 'stickyfile'? y
rm: cannot remove 'stickyfile': Operation not permitted
  1. Revoke the /tmp stickybit and confirm
[vagrant@server1 ~]$ sudo chmod o-t /tmp
[vagrant@server1 ~]$ ls -ld /tmp
drwxrwxrwx. 8 root root 4096 Feb 13 22:00 /tmp
  1. Retry the removal as user200
rm stickyfile
  1. Restore the sticky bit on /tmp
sudo chmod -v +1000 /tmp

Lab: Manipulate File Permissions (user1)

  1. Create file file11 and directory dir11 in the home directory. Make a note of the permissions on them.
 touch file11
 mkdir dir11
  1. Run the umask command to determine the current umask.
 umask
  1. Change the umask value to 0035 using symbolic notation.
 umask u=rwx,g=r,o=w
  1. Create file22 and directory dir22 in the home directory.
 touch file22
 mkdir dir22
  1. Observe the permissions on file22 and dir22, and compare them with the permissions on file11 and dir11.
 ls -l
  1. Use the chmod command and modify the permissions on file11 to match those on file22.
 chmod g-w,o-r,o+w file11
  1. Use the chmod command and modify the permissions on dir11 to match those on dir22. Do not remove file11, file22, dir11, and dir22 yet.
 chmod g-wx,o-rx,o+w dir11

Lab: Configure Group Collaboration and Prevent File Deletion (root)

  1. create directory /sdir. Create group sgrp and create user1000 and user2000 and add them to the group:
 mkdir /sdir
 groupadd sgrp
 adduser user1000 && adduser user2000
 usermod -a -G sgrp user1000
 usermod -a -G sgrp user2000
  1. Set up appropriate ownership (root), owning group (sgrp), and permissions (rwx for group, --- for public, s for group, and t for public) on the directory to support group collaboration and ensure non-owners cannot delete files.
 chgrp sgrp /sdir
 chmod g=rwx,o= /sdir
 chmod o+t /sdir
 chmod g+s /sdir
  1. Log on as user1000 and create a file under /sdir.
 su - user1000
 cd /sdir
 touch testfile
  1. Log on as user2000 and try to edit that file. You should be able to edit the file successfully.
 su - user2000
 cd /sdir
 vim testfile
 cat testfile
  1. As user2000 try to delete the file. You should not be able to.
 rm testfile

Lab: Find Files (root)

  1. Search for all files in the entire directory structure that have been modified in the last 300 minutes and display their type.
 find / -mmin -300 -exec file {} \;
  1. Search for named pipe and socket files.
 find / -type p
 find / -type s

Lab: Find Files Using Different Criteria (root)

  1. Search for regular files under /usr that were accessed more than 100 days ago, are not bigger than 5MB in size, and are owned by the user root.
 find /usr -type f -atime +100 -size -5M -user root

Lab: Apply ACL Settings (root)

  1. Create file testfile under /tmp.
 touch /tmp/testfile
  1. Create users.
 adduser user2000
 adduser user3000
 adduser user4000
  1. Apply ACL settings on the file so that user2000 gets 7, user3000 gets 6, and user4000 gets 4 permissions.
 setfacl -m u:user2000:7 testfile
 setfacl -m u:user3000:6 testfile
 setfacl -m u:user4000:4 testfile
  1. Remove the ACLs for user2000, and verify.
 setfacl -x u:user2000 testfile
 getfacl testfile
  1. Erase all remaining ACLs at once, and confirm.
 setfacl -b testfile
 getfacl testfile

Basic File Management

7 File types

  1. regular
  2. directory
  3. block special device
  4. character special device
  5. symbolic link
  6. named pipe
  7. socket

Commands

  • ls
  • stat
  • file

Regular files

  • Text or binary data.
  • Represented by hyphen (-).

Directory Files

  • Identified by the letter “d” in the beginning of ls output.

Block and Character (raw) Special Device Files

  • All hardware has a device file in /dev.
  • Used by system to communicate with device.
  • Identified by “c” or “b” in ls listing.
  • Each device driver is assigned a unique number called the major number
  • Character device
    • Reads and writes 8 bits at a time.
    • Serial
  • Block device
    • Receives data in fixed block size determined by drivers
    • 512 or 4096 bytes

Major Number

  • Used by kernel to recognize device driver type.
  • Column 5 of ls listing.
ls -l /dev/sda

Minor Number

  • Each device controlled by the same device driver gets a Minor Number
  • Applies to disk partitions as well.
  • The same driver can control multiple devices of the same type.
  • Column 6 of ls listing
ls -l /dev/sda
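Illustrative output (timestamps will vary): the major number 8 identifies the sd disk driver, and the minor numbers distinguish the whole disk from its first partition:

ls -l /dev/sda /dev/sda1
brw-rw----. 1 root disk 8, 0 Feb 13 10:00 /dev/sda
brw-rw----. 1 root disk 8, 1 Feb 13 10:00 /dev/sda1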
Symbolic Links

  • Shortcut to another file or directory.
  • Begins with “l” in ls listing.
ls -l /usr/sbin/vigr
lrwxrwxrwx. 1 root root 4 Jul 21 14:36 /usr/sbin/vigr -> vipw

Compression and Archiving

Archiving

  • Preserves file attributes such as ownership, owning group, and timestamp.
  • Preserves extended file attributes such as ACLs and SELinux contexts.
  • Syntax of tar and star are identical.

star command

tar (tape archive) command

  • Create, append, update, list, and extract files/directory tree to/from a file called a tarball(tarfile)
  • Can compress a tarball after it’s been created.
  • Automatically removes the leading “/” from pathnames, so you do not have to specify the full pathname when restoring files at any location.

Flags

  • -c :: Create tarball.
  • -f :: Specify tarball name.
  • -p :: Preserve file permissions. Default for the root user; specify this if you create an archive as a normal user.
  • -r :: Append files to the end of an existing uncompressed tarball.
  • -t :: List contents of a tarball.
  • -u :: Append files to the end of an existing uncompressed tarball provided the specified files being added are newer.
  • -z :: Compress or uncompress the tarball with gzip.
  • -j :: Compress or uncompress the tarball with bzip2.
  • -C :: Change to the specified directory before performing the operation.

Archive entire home directory:

tar -cvf /tmp/home.tar /home

Archive two specific files:

tar -cvf /tmp/files.tar /etc/passwd /etc/yum.conf

Append files in a directory to existing tarball:

tar -rvf /tmp/files.tar /etc/yum.repos.d

List what is included in home.tar tarball:

tar -tvf /tmp/files.tar

Restore single file and confirm:

tar -xf /tmp/files.tar etc/yum.conf
ls -l etc/yum.conf

Restore all files and confirm:

tar -xf /tmp/files.tar
ls

Create a gzip-compressed tarball under /tmp for /home:

tar -czf /tmp/home.tar.gz /home

Create bzip2-compressed tarball under /tmp for /home:

sudo tar -cjf /tmp/home.tar.bz2 /home

List content of gzip-compressed archive without uncompressing it:

tar -tf /tmp/home.tar.gz

Extract files from gzip-compressed tarball in the current directory:

tar -xf /tmp/home.tar.gz

Extract files from the bzip2-compressed tarball under /tmp:

tar -xf /tmp/home.tar.bz2 -C /tmp

Compression tools

gzip (gunzip) command

  • Create a compressed file for each of the specified files.
  • Adds .gz extension.

Flags

Copy /etc/fstab to the current directory and view its details while uncompressed:

cp /etc/fstab .
ls -l fstab

gzip fstab and view details:

gzip fstab
ls -l fstab.gz

Display compression info:

gzip -l fstab.gz

Uncompress fstab.gz:

gunzip fstab.gz
ls -l fstab

bzip2 (bunzip2) command

  • Adds .bz2 extension.
  • Better compression/decompression ratio than gzip, but slower.

Compress fstab using bzip2 and view details:

bzip2 fstab
ls -l fstab.bz2

Unzip fstab.bz2 and view details:

bunzip2 fstab.bz2
ls -l fstab

File Editing

Vim

vimguide

File and Directory Operations

touch command

  • File is created with 0 bytes in size.
  • Run touch on it and it will get a new timestamp

Flags

Set date on file1 to 2019-09-20:

touch -d 2019-09-20 file1

Change modification time on file1 to current system time:

touch -m file1

mkdir command

  • Create a new directory.

flags

Create dir1 verbosely:

mkdir dir1 -v

Create dir2/perl/perl5:

mkdir -vp dir2/perl/perl5

Commands for displaying file contents

  • cat
  • more
  • less
  • head
  • tail

cat command

  • Concatenate and print files to standard output.

Flags

Redirect output to specified file:

cat > catfile1

tac command

  • Display file contents in reverse

more command

  • Display files on page-by-page basis.
  • Forward text searching only.

less command

  • Display files on page-by-page basis.
  • Forward and backwards searching.
less /usr/bin/znew

head command

  • Displays first 10 lines of a file.
head /etc/profile

View top 3 lines of a file:

head -3 /etc/profile

tail command

  • Display last 10 lines of a file.

Flags

tail /etc/profile

View last 3 lines of /etc/profile:

tail -3 /etc/profile

View updates to the system log file /var/log/messages in real time:

sudo tail -f /var/log/messages

Counting Words, Lines, and Characters in Text Files

wc (word count) command

  • Display the number of lines, words, and characters (or bytes) contained in a text file or input supplied.

Flags

 wc /etc/profile
  85  294 2123 /etc/profile

Display count of characters on /etc/profile:

wc -m /etc/profile

Copying Files and Directories

cp command

  • Copy files or directories.
  • Overwrites destination without warning.
  • root has a custom alias in their .bashrc file that automatically adds the -i option.
alias cp='cp -i'

Flags

cp file1 newfile1

Copy file to new directory:

cp file1 dir1

Get confirmation before overwriting:

cp file1 dir1 -i
cp: overwrite 'dir1/file1'? y

Copy a directory and view hierarchy:

cp -r dir1 dir2
ls -l dir2 -R

Copy file while preserving attributes:

cp -p file1 /tmp

Moving and renaming Files and Directories

mv command

  • Move or rename files and directories.
  • Can move a directory into another directory.
    • Target directory must exist otherwise you are just renaming the directory.
  • Alias exists in root’s home directory for -i in the .bashrc file.
alias mv='mv -i'

Flags

mv -i file1 dir1
mv newfile1 newfile2

Move a dir into another dir (target exists):

mv dir1 dir2

Rename a directory (Target does not exist):

mv dir2 dir20

Removing files

rm command

  • Delete one or more specified files or directories.
  • Alias alias rm='rm -i' exists in the .bashrc file in the root user’s home directory.
  • Remember to escape any wildcard characters in filenames with a backslash (\).
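For example, to remove a file literally named file* without the shell expanding the wildcard:

rm file\*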

Flags

Erase newfile2:

rm -i newfile2

rm a directory:

 rm -dv emptydir

rm a directory recursively:

rm -r dir20

rmdir command

  • Remove empty directories.

Flags

rmdir emptydir -v

File Linking

inode (index node)

  • Contains metadata about a file (128 bytes)
    • File type, Size, permissions, owner name, owning group, access times, link count, etc.
    • Also shows number of allocated blocks and pointers to the data storage location.
  • Assigned a unique numeric identifier that is used by the kernel for accessing, tracking, and managing the file.
  • Does not store the filename.
  • Filename and corresponding inode number mapping is maintained in the directory’s metadata where the file resides.
  • Links are created between filenames and inode numbers, not between files themselves.

Hard Link

  • Mapping between one or more filenames and an inode number.
  • Hard-linked files are indistinguishable from one another.
  • All hard-linked files will have identical metadata.
  • Changes to the file metadata and content can be made by accessing any of the filenames.
  • Cannot cross file system boundaries.
  • Cannot link directories.

ls -li output

  • Column 1 inode number.
  • Column 3 link count.
Soft Link

  • Symbolic link (symlink); works like a Windows shortcut.
  • Unique inode number for each symlink.
  • Link count does not increase or decrease.
  • Size of a soft link is the number of characters in the pathname to the target.
  • Can cross file system boundaries.
  • Can link directories.
  • ls -l shows “l” at the beginning of the permissions for a soft link.
  • If you remove the original file, the soft link will point to a file that doesn’t exist.
  • RHEL 8 has four soft-linked directories under /.
    1. bin -> usr/bin
    2. lib -> usr/lib
    3. lib64 ->usr/lib64
    4. sbin -> usr/sbin
  • Same syntax for creating linked directories
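Illustrative listing of one of these (the link size of 7 is the length of the target pathname usr/bin; the date will vary):

ls -ld /bin
lrwxrwxrwx. 1 root root 7 Aug 10 09:14 /bin -> usr/bin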

ln command

  • Create links between files.
  • Creates hard link by default.
touch file10
ln file10 file20
ls -li
ln -s file10 soft10

Copying vs linking

Copying

  • Duplicates source file.
  • Each copy stores data at a unique location.
  • Each copied file has a unique inode number and unique metadata.
  • If a copy is moved, erased, or renamed, the source file will have no impact, and vice versa.
  • Copy is used when the data needs to be edited independent of the other.
  • Permissions on the source and the copy are managed independent of each other.

Linking

  • Creates a shortcut that points to the source file.
  • Source can be accessed or modified using either the source file or the link.
  • All linked files point to the same data.
  • Hard Link: All hard-linked files share the same inode number, and hence the metadata.
  • Symlink: Each symlinked file has a unique inode number, but the inode number stores only the pathname to the source.
  • Hard Link: If one hard-linked filename is deleted, the other filenames and the data will remain untouched.
  • Symlink: If the source is deleted, the soft link will be broken and become meaningless. If the soft link is removed, the source will have no impact.
  • Links are used when access to the same source is required from multiple locations.
  • Permissions are managed on the source file.

Labs

Lab: Viewing regular file information:

touch file1
ls -l
file file1
stat file1
Lab: Create and Manage File Links

  1. Create an empty file /tmp/hard1, and display the long file listing including the inode number:
touch /tmp/hard1
ls -li /tmp/hard1
  1. Create two hard links called hard2 and hard3 under /tmp, and display the long listing:
ln /tmp/hard1 /tmp/hard2
ln /tmp/hard1 /tmp/hard3
ls -li /tmp/hard*
  1. Edit file hard2 and add some random text. Display the long listing for all three files again:
vim /tmp/hard2
ls -li /tmp/hard*
  1. Erase file hard1 and hard3, and display the long listing for the remaining file:
rm -f /tmp/hard1 /tmp/hard3
ls -li /tmp/hard*
  1. Create soft link /root/soft1 pointing to /tmp/hard2, and display the long file listing for both:
sudo ln -s /tmp/hard2 /root/soft1
ls -li /tmp/hard2 /root/soft1
sudo ls -li /tmp/hard2 /root/soft1

2. Edit soft1 and display the long listing again:

sudo vim /root/soft1
sudo ls -li /tmp/hard2 /root/soft1

3. Remove hard2 and display the long listing:

rm -f /tmp/hard2
sudo ls -li /tmp/hard2 /root/soft1

4. Remove the soft link:

sudo rm -f /root/soft1

Lab: Archive, List, and Restore Files

Create a gzip-compressed archive of the /etc directory.

tar -czf etc.tar.gz /etc

Create a bzip2-compressed archive of the /etc directory.

sudo tar -cjf etc.tar.bz2 /etc

Compare the file sizes of the two archives.

ls -l etc*

Run the tar command and uncompress and restore both archives without specifying the compression tool used.

sudo tar -xf etc.tar.bz2 ; sudo tar -xf etc.tar.gz

Lab: Practice the vim Editor

As user1 on server1, create a file called vipractice in the home directory using vim. Type (do not copy and paste) each sentence from Lab 3-1 on a separate line (do not worry about line wrapping). Save the file and quit the editor.

Open vipractice in vim again and reveal line numbering. Copy lines 2 and 3 to the end of the file to make the total number of lines in the file to 6.

:set number
#then, with the cursor on line 2, copy both lines and paste at the end
2yy
G
p

Move line 3 to make it line 1.

:3m0

Go to the last line and append the contents of the .bash_profile.

:r ~/.bash_profile

Substitute all occurrences of the string “Profile” with “Pro File”, and all occurrences of the string “profile” with “pro file”.

:%s/Profile/Pro File/g
:%s/profile/pro file/g

Erase lines 5 to 8.

:5,8d

Provide a count of lines, words, and characters in the vipractice file using the wc command.

wc vipractice

Lab: File and Directory Operations

As user1 on server1, create one file and one directory in the home directory.

touch file3
mkdir dir5

List the file and directory and observe the permissions, ownership, and owning group.

ls -l file3
ls -l dir5
ls -ld dir5

Try to move the file and the directory to the /var/log directory and notice what happens.

mv dir5 /var/log
mv file3 /var/log

Try again to move them to the /tmp directory.

mv dir5 /tmp
mv file3 /tmp
ls /tmp

Duplicate the file with the cp command, and then rename the duplicated file using any name.

cp /tmp/file3 file4
ls /tmp
ls

Erase the file and directory created for this lab.

rm -d /tmp/dir5; rm file4

Networking

Subsections of Networking

Consoling into an MX80 from Linux

Plug console cable in

find out what your serial line name is:

$ dmesg | grep -i FTDI

Open PuTTY > change to serial > change the tty line name

Make sure your serial settings are correct

https://www.juniper.net/documentation/us/en/hardware/mx5-mx10-mx40-mx80/topics/task/management-devices-mx5-mx10-mx40-mx80-connecting.html

Press open > when terminal appears press enter

Juniper Password recovery

https://www.juniper.net/documentation/en_US/junos/topics/task/configuration/authentication-root-password-recovering-mx80.html

https://www.juniper.net/documentation/us/en/software/junos/junos-install-upgrade/topics/topic-map/rescue-and-recovery-config-file.html#load-commit-configuration

Accidentally deleted the wrong line in the juniper.conf file? Fail over to juniper.conf:

https://www.juniper.net/documentation/en_US/junos/topics/concept/junos-configuration-files.html

DNS

DNS and Name Resolution

  • BIND (Berkeley Internet Name Domain)
    • An implementation of DNS.
    • The most popular DNS application in use.
    • Name resolution is the technique that uses DNS/BIND for hostname lookups.

DNS Name Space and Domains

  • DNS name space is a
    • Hierarchical organization of all the domains on the Internet.
    • Root of the name space is represented by a period (.)
    • Hierarchy below the root (.) denotes the top-level domains (TLDs) with names such as .com, .net, .edu, .org, .gov, .ca, and .de.
    • A DNS domain is a collection of one or more systems. Subdomains fall under their parent domains and are separated by a period (.). For example, redhat.com is a second-level domain that falls under .com, and bugzilla.redhat.com is a third-level subdomain that falls under redhat.com.

  • Deepest level of the hierarchy are the leaves (systems, nodes, or any device with an IP address) of the name space.
    • a network switch net01 in .travel.gc.ca subdomain will be known as net01.travel.gc.ca.
    • If a period (.) is added to the end of this name to look like net01.travel.gc.ca., it will be referred to as the Fully Qualified Domain Name (FQDN) for net01.

DNS Roles

A DNS system or nameserver can be a

  • primary server
  • secondary server
  • or client

Primary server

  • Responsible for its domain (or subdomain).
  • Maintains a master database of all the hostnames and their associated IP addresses that are included in that domain.
  • All changes in the database are done on this server.
  • Each domain must have one primary server with one or more optional secondary servers for load balancing and redundancy.

Secondary server

  • Stores an updated copy of the master database.
  • Provide name resolution service in the event the primary server goes down.

Client

  • Queries nameservers for name lookups.
  • DNS client on Linux involves two text files.
    • /etc/resolv.conf

/etc/resolv.conf

  • DNS resolver configuration file where information to support hostname lookups is defined.
  • May be edited manually with a text editor.
  • Referenced by resolver utilities to construct and transmit queries.

Key directives

  • domain

  • nameserver

  • search


domain

  • Identifies the default domain name to be searched for queries

nameserver

  • Declares up to three DNS server IP addresses to be queried one at a time in the order in which they are listed. Nameserver entries may be defined as separate line items with the directive or on a single line.

search

  • Specifies up to six domain names, of which the first must be the local domain. No need to define the domain directive if the search directive is used.

Sample entry

  domain      example.com
  search      example.net example.org example.edu example.gov
  nameserver  192.168.0.1 8.8.8.8 8.8.4.4

Variation

  domain      example.com
  search      example.net example.org example.edu example.gov
  nameserver  192.168.0.1
  nameserver  8.8.8.8
  nameserver  8.8.4.4
  • Entries are automatically placed by the NetworkManager service.
[root@server30 tmp]# cat /etc/resolv.conf
# Generated by NetworkManager
nameserver 2001:578:3f::30
nameserver 2001:578:3f:1::30
  • If this file is absent, the resolver utilities only query the nameserver configured on the localhost, determine the domain name from the hostname of the system, and construct the search path based on the domain name.

Viewing and Adjusting Name Resolution Sources and Order

/etc/nsswitch.conf

  • Directs the lookup utilities to the correct source to get hostname information.

  • Also identifies the order in which to consult source and an action to be taken next.

  • Four keywords oversee this behavior

    • success
    • notfound
    • unavail
    • tryagain

success

  • Information found in source and provided to the requester.
  • Default action: return (do not try the next source).

notfound

  • Information not found in source.
  • Default action: continue (try the next source).

unavail

  • Source down or not responding; service disabled or not configured.
  • Default action: continue (try the next source).

tryagain

  • Source busy, retry later.
  • Default action: continue (try the next source).

Example shows two sources for name resolution: files (/etc/hosts) and DNS (/etc/resolv.conf).

hosts:files    dns
  • Default behavior
  • Search will terminate if the requested information is found in the hosts table.

Instruct the lookup programs to return if the requested information is not found there:

hosts:files [notfound=return] dns
  • Query tools available in RHEL 9:
    • dig
    • host
    • nslookup
    • getent

dig command (domain information groper)

  • DNS lookup utility.
  • Queries the nameserver specified at the command line or consults the resolv.conf file to determine the nameservers to be queried.
  • May be used to troubleshoot DNS issues due to its flexibility and verbosity.

To get the IP for redhat.com using the nameserver listed in the resolv.conf file:

[root@server10 ~]# dig redhat.com

; <<>> DiG 9.16.23-RH <<>> redhat.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 9017
;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4000
;; QUESTION SECTION:
;redhat.com.			IN	A

;; ANSWER SECTION:
redhat.com.		3599	IN	A	52.200.142.250
redhat.com.		3599	IN	A	34.235.198.240

;; Query time: 94 msec
;; SERVER: 172.16.10.150#53(172.16.10.150)
;; WHEN: Fri Jul 19 13:12:13 MST 2024
;; MSG SIZE  rcvd: 71

To perform a reverse lookup on the redhat.com IP (52.200.142.250), use the -x option with the command:

[root@server10 ~]# dig -x 52.200.142.250

; <<>> DiG 9.16.23-RH <<>> -x 52.200.142.250
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 23057
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4000
;; QUESTION SECTION:
;250.142.200.52.in-addr.arpa.	IN	PTR

;; ANSWER SECTION:
250.142.200.52.in-addr.arpa. 299 IN	PTR	ec2-52-200-142-250.compute-1.amazonaws.com.

;; Query time: 421 msec
;; SERVER: 172.16.10.150#53(172.16.10.150)
;; WHEN: Fri Jul 19 14:22:52 MST 2024
;; MSG SIZE  rcvd: 112

host Command

  • Works on the same principles as the dig command in terms of nameserver determination.
  • Produces less data in the output by default.
  • -v option if you want more info.

Perform a lookup on redhat.com:

[root@server10 ~]# host redhat.com
redhat.com has address 34.235.198.240
redhat.com has address 52.200.142.250
redhat.com mail is handled by 10 us-smtp-inbound-2.mimecast.com.
redhat.com mail is handled by 10 us-smtp-inbound-1.mimecast.com.

Rerun with -v added:

[root@server10 ~]# host -v redhat.com
Trying "redhat.com"
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 28687
;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 0

;; QUESTION SECTION:
;redhat.com.			IN	A

;; ANSWER SECTION:
redhat.com.		3127	IN	A	52.200.142.250
redhat.com.		3127	IN	A	34.235.198.240

Received 60 bytes from 172.16.1.19#53 in 8 ms
Trying "redhat.com"
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 47268
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 0

;; QUESTION SECTION:
;redhat.com.			IN	AAAA

;; AUTHORITY SECTION:
redhat.com.		869	IN	SOA	dns1.p01.nsone.net. hostmaster.nsone.net. 1684376201 200 7200 1209600 3600

Received 93 bytes from 172.16.1.19#53 in 5 ms
Trying "redhat.com"
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 61563
;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 12

;; QUESTION SECTION:
;redhat.com.			IN	MX

;; ANSWER SECTION:
redhat.com.		3570	IN	MX	10 us-smtp-inbound-1.mimecast.com.
redhat.com.		3570	IN	MX	10 us-smtp-inbound-2.mimecast.com.

;; ADDITIONAL SECTION:
us-smtp-inbound-1.mimecast.com.	270 IN	A	205.139.110.242
us-smtp-inbound-1.mimecast.com.	270 IN	A	170.10.128.242
us-smtp-inbound-1.mimecast.com.	270 IN	A	170.10.128.221
us-smtp-inbound-1.mimecast.com.	270 IN	A	170.10.128.141
us-smtp-inbound-1.mimecast.com.	270 IN	A	205.139.110.221
us-smtp-inbound-1.mimecast.com.	270 IN	A	205.139.110.141
us-smtp-inbound-2.mimecast.com.	270 IN	A	170.10.128.221
us-smtp-inbound-2.mimecast.com.	270 IN	A	205.139.110.141
us-smtp-inbound-2.mimecast.com.	270 IN	A	205.139.110.221
us-smtp-inbound-2.mimecast.com.	270 IN	A	205.139.110.242
us-smtp-inbound-2.mimecast.com.	270 IN	A	170.10.128.141
us-smtp-inbound-2.mimecast.com.	270 IN	A	170.10.128.242

Received 297 bytes from 172.16.10.150#53 in 12 ms

Perform a reverse lookup on the IP of redhat.com with verbosity:

[root@server10 ~]# host -v 52.200.142.250
Trying "250.142.200.52.in-addr.arpa"
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 62219
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0

;; QUESTION SECTION:
;250.142.200.52.in-addr.arpa.	IN	PTR

;; ANSWER SECTION:
250.142.200.52.in-addr.arpa. 300 IN	PTR	ec2-52-200-142-250.compute-1.amazonaws.com.

Received 101 bytes from 172.16.10.150#53 in 430 ms

nslookup Command

  • Queries the nameservers listed in the resolv.conf file or specified at the command line.
  • See man pages for interactive mode

Get the IP for redhat.com using nameserver 8.8.8.8 instead of the nameserver defined in resolv.conf:

[root@server10 ~]# nslookup redhat.com 8.8.8.8
Server:		8.8.8.8
Address:	8.8.8.8#53

Non-authoritative answer:
Name:	redhat.com
Address: 34.235.198.240
Name:	redhat.com
Address: 52.200.142.250

Perform a reverse lookup on the IP of redhat.com using the nameserver from the resolver configuration file:

[root@server10 ~]# nslookup 52.200.142.250
250.142.200.52.in-addr.arpa	name = ec2-52-200-142-250.compute-1.amazonaws.com.

Authoritative answers can be found from:

getent Command

  • Fetch matching entries from the databases defined in the nsswitch.conf file.
  • Reads the corresponding database and displays the information if found.
  • For name resolution, use the hosts database and getent will attempt to resolve the specified hostname or IP address.

Run the following for forward and reverse lookups:

[root@server10 ~]# getent hosts redhat.com
34.235.198.240  redhat.com
52.200.142.250  redhat.com
[root@server10 ~]# getent hosts 34.235.198.240
34.235.198.240  ec2-34-235-198-240.compute-1.amazonaws.com

Hostname

  • “-”, “_”, and “.” characters are allowed.
  • Up to 253 characters.
  • Stored in /etc/hostname.
  • Can be viewed with several different commands, such as hostname, hostnamectl, uname, and nmcli, as well as by displaying the content of the /etc/hostname file.

View the hostname:

hostnamectl --static
hostname
uname -n
cat /etc/hostname

Lab: Change the Hostname

Server1

  1. Open /etc/hostname and change the entry to server10.example.com
  2. restart the systemd-hostnamed service daemon
sudo systemctl restart systemd-hostnamed
  1. confirm
hostname

server2

  1. Change the hostname with hostnamectl:
sudo hostnamectl set-hostname server21.example.com
  1. Log out and back in for the prompt to update

  2. Change the hostname using nmcli

nmcli general hostname server20.example.com

How to Study for the CCNA Exam

CCNA Study Calendar

It took me a whopping 2 years to finish my CCNA! I kept giving up and quitting my studies for months at a time. Why? Because I couldn’t remember the massive amount of content covered in the CCNA. It felt hopeless. I could have done it in 6 months (or faster) if I had known how to study.

I hadn’t taken a test in 10 years before this. So I had completely forgotten how to learn. This post is about the mistakes I made studying for the CCNA and how to avoid them.

You will also learn, as I did, about spaced repetition. I’ve also included a 6 month CCNA spaced repetition calendar.

My Mistakes, So You Don’t Make Them

Mistake #1 Didn’t start flashcards until the final 30 days

I wish I would have started flashcards from day 1. This would have helped a crap ton. Remembering all of the little details is not only useful for taking the test; it also embeds the concepts in your brain and keeps you processing how things work.

If there is anything you take from this list, it’s this: you should definitely be doing some flashcards every day.

Mistake #2 Not enough labs as I went.

While studying the OCG and video courses, I did some labs. But I also skipped a ton of labs because it wasn’t convenient at the time. Then I was forced to lab every single topic in the final 30 days. A lot of cramming was done.

Make sure to do all of the labs as you go. Make up your own labs as well. This is very important to building job worthy skills.

Mistake #3 Didn’t have a plan or stick with it.

When your plan consists of, “just read everything and watch the videos and take the test when you feel ready”, you tend to procrastinate and put things off. Make a study schedule and a solid plan. (See below)

Having a set date for when you will take the test was pretty motivating. I did not find this out until about 30 days before my test.


Spaced Repetition

If you are using Anki flashcards for your studies, you may already be using spaced repetition. Spaced repetition is repeatedly reviewing with the time in between reviews getting longer each time you review it.

Here is an excellent article about our learning curves and why spaced repetition helps us remember things https://fs.blog/spacing-effect/

How to set up a spaced repetition calendar for CCNA.

Step 1. Plan how long your studies will take

Figure out how long you need. It usually takes around 240 hours of studying for the CCNA (depending on experience). Then figure out how many hours per day you can spend on studying. This example is based on a 6 month study calendar.

You can use this 6 month excel calendar to plan and track your progress. You can still use this method if you have already been studying for the CCNA. Just edit your calendar for how much time you have left.

The calendar is also based on Wendell Odom’s Official Cert Guide. You will also want to mix your other resources into your reviews.

Decide what your review sessions will be

Plan to review each chapter 3-4 times. Here is what I did for review sessions to pass the exam.

Review 1 Read and highlight (and flashcards)

  • Read the chapter. Highlight key information that you want to remember.
  • Do a lab for the material you studied (if applicable)
  • Answer DIKTA questions
  • Start Chapter 1 Anki Flashcards

Review 2 Copy highlights over to OneNote (keep doing flashcards)

  • Copy your highlights over to OneNote. (using copy and paste if you have the digital book)
  • Read your highlights and make sure you understand everything.
  • Lab and continue doing flashcards (just go through Anki’s suggested flashcards, not only the ones for the specific chapter).

Review 3 Labs and Highlight your notes (and flashcards)

  • More labs!
  • Go over your notes. Color coding everything. (You can find my jumbled note mess here)
  • Green: Read again
  • Teal: Very important Learn this/ lab it.
  • Red/ purple: make extra flashcards out of this.

Review 4 Practice questions and review

  • Go through and answer the DIKTA questions again. Review any missed answers.
  • Lab anything you aren’t quite sure of.

The final 30 days

I HIGHLY recommend Boson ExSim for your final 30 days of studying. ExSim comes with 3 exams (A,B, and C). Start with exam A in test simulation mode. Leave about a week in between each practice exam so you can go over your answers and Boson’s explanations for each answer.

One week before your test (after you’ve completed exams A, B, and C), do a random exam. Make sure you do the timed version that doesn’t show your score as you go.

You should be scoring above 900 by your 3rd and 4th exam if you have been reviewing Boson’s answer explanations.

Schedule your exam

Pearson VUE didn’t let me schedule the exam more than 30 days out from when I wanted to take it. I’m not sure if this is the case all the time. But by the time you are 30 days out you should have your test scheduled. This will light the fire under you. Great motivation for the home stretch.

If your exam is around June during Cisco Live, Cisco usually offers a 50% discount for an exam voucher. You probably won’t find any other discounts unless you pay for Cisco’s specific CCNA training.

Final word on labs

You can technically pass the CCNA without doing many labs. But this will leave you at a HUGE disadvantage in the job market. Labs are crucial for really understanding networking. Knowing your way around the CLI and being able to troubleshoot networking issues will make you stand out from those who crammed for the exam.

If you’ve made it this far I really appreciate you taking the time to read this post. I really hope it helps at least one person.

Juniper CLI Basics

Connection Methods

Factory default login:

User: root
No password

fxp0

Ethernet management interface

SSH, FTP, Telnet, HTTP(S)

Cannot route traffic and is used for management purposes only.

Initial Login

Logging in for the First Time

  • Nonroot users are placed into the CLI automatically.
  • Root user SSH login requires explicit config.
  • The root user lands at the shell prompt and must start the CLI with the cli command.
  • Remember to exit the root shell after logging out of the CLI!

Serial console login (reconstructed from the slide):

router (ttyu0)

login: root
Password:

--- JUNOS 15.1X49-D100.6 built 2017-06-28 07:33:31 UTC

root@router% cli
root@router>

CLI Modes

configure

Configure mode. Creates a new candidate config file.

configure private (best practice)

Configure mode with a private candidate file

Changes made by other users will not appear in your private candidate file

Committed private files are merged into the active config

Whoever commits last wins if there are matching commands

Can’t commit until you are at the top of the configuration (in private mode)

configure exclusive

Locks config database

Can be killed by admin

No other user can edit config while you are in this mode

(edit) top

Goes back to the top of the configuration tree

Candidate Config Files

commit

Turns candidate config file into active

Warning will show if candidate config is already being edited

Committing Configurations

Rollback files are the last committed configurations. Rollbacks 1-3 are the last three active configurations and are stored in /config/ (the current active config is stored there as well).

Rollbacks 4-49 are stored in /var/db/config/.

rollback ? shows a timestamp for the last time each file was active.

rollback 1

Places rollback file one into the candidate config, must commit to make it active
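A typical rollback session might look like this (prompts and output are illustrative):

[edit]
user@router# rollback 1
load complete

[edit]
user@router# commit
commit complete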

CLI Help, Auto complete

Can type ? To show available commands

#> show version brief

Show version info, hostname, and model

#> configure

goes into configure mode

set system host-name hostname

sets the hostname

delete system host-name

deletes set hostname

edit routing-options static

enters the routing-options static hierarchy in config mode

exit

exit

Junos will let you know that config hasn’t been committed and ask if you want to commit

rollback 0

throws away all changes to the candidate config, reverting it to the active config

#> help topic routing-options static

shows info page for topic specified

#> help references routing-options static

syntax and hierarchy of commands

Keyboard Shortcuts

Command completion

Space

autocompletes commands built into the system; does not autocomplete names you defined

tab

autocompletes user-defined names in the system as well

?

will show user defined options for autocomplete as well

Navigating Configuration Mode

When you go into config mode the running config is copied into a candidate file that you will be working on

show

if in configure mode, displays the entire candidate configuration

edit

similar to cd

edit protocols ospf

goes to the protocols/ospf hierarchy in config mode

if you run the show command it will show the contents of the hierarchy from wherever you are.

top

goes to the top of the hierarchy. Like cd to / in Linux

must be at the top to commit changes

show protocols ospf

selects which part of the hierarchy to show

will only see this if you are above the option you want to show in the hierarchy

can bypass this with:

top show routing-options static

same thing happens with the edit command

top edit routing-options

same fix

Editing, Renaming, and Comparing Configuration

up

moves up one level in the hierarchy

there is a portion in this video with vlan and interface configuration; come back if this isn’t covered elsewhere

up 2

jump up 2 levels

rollback ?

shows all the rollback files on the system

run show system uptime

run is like “do” in cisco, can run command from anywhere

rollback 1

rolls config back to rollback 1 file

show | compare

shows lines to be removed or added with - or +

exit

Also brings you to the top of config file

Replace, Copy, or annotate Configuration

copy ge-0/0/1 to ge-0/0/2

makes a copy of the config

show ge-0/0/0

edit ge-0/0/0

Edit interfaces mode

#(int) replace pattern 0.101 with 200.102

Replaces the pattern of the ip address

#(int) replace pattern /24 with /25

Replace mask

If using replace commands, don’t commit the config without running the top show | compare command to verify. You may have run the replace command from a different level of the hierarchy than you intended.

top edit protocols ospf

Go into ospf edit

deactivate interface ge-0/0/0.0

Remove interface from ospf

annotate interface ge-0/0/0 “took down due to flapping”

C style programming comment

Load merge Configuration

run file list

ls -l basically

run file show top-int-config

Display contents of top-int-config
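To actually merge that file into the candidate config (assuming top-int-config sits in the user's home directory, where Junos resolves relative filenames by default):

[edit]
user@router# load merge top-int-config
load complete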

Paste Config on a Juniper Switch

cli
top
delete
configure
load set terminal 
Ctrl+Shift+D to exit
commit check
commit and-quit

Juniper command equivalents to Cisco commands

Basic CLI and Systems Management

Commands

clock set > set date
reload > request system reboot
show history > show cli history
show logging > show log messages | last
show processes > show system processes
show running config > show configuration
show users > show system users
show version > show version | show chassis hardware
trace > traceroute

Switching Commands

show ethernet-switching interfaces
show spanning-tree > show spanning-tree bridge
show mac address-table > show ethernet-switching table

OSPF Commands

show ip ospf database > show ospf database
show ip ospf interface > show ospf interface
show ip ospf neighbor > show ospf neighbor

Routing Protocol-Independent Commands

clear arp-cache > clear arp
show arp > show arp
show ip route > show route
show ip route summary > show route summary
show route-map > show policy | policy-name
show tcp > show system connections

Interface Commands

clear counters > clear interface statistics
show interfaces > show interfaces
show interfaces detail > show interfaces extensive
show ip interface brief > show interfaces terse

My CCNA Notes

The formatting and images of all of my networking notes got destroyed when migrating away from OneNote. But they still come in handy all of the time.

See my networking notes:

Networking Network Devices and Network Connections

Hostname

  • “-”, “_”, and “.” characters are allowed.
  • Up to 253 characters.
  • Stored in /etc/hostname.
  • Can be viewed with several different commands, such as hostname, hostnamectl, uname, and nmcli, as well as by displaying the content of the /etc/hostname file.

View the hostname:

hostnamectl --static
hostname
uname -n
cat /etc/hostname

Lab: Change the Hostname

Server1

  1. Open /etc/hostname and change the entry to server10.example.com
  2. Restart the systemd-hostnamed service daemon
sudo systemctl restart systemd-hostnamed
  1. confirm
hostname

server2

  1. Change the hostname with hostnamectl:
sudo hostnamectl set-hostname server21.example.com
  1. Log out and back in for the prompt to update

  2. Change the hostname using nmcli

nmcli general hostname server20.example.com

Hardware and IP Addressing

Ethernet Address

  • 48-bit address that is used to identify the correct destination node for data packets transmitted from the source node.
  • The data packets include hardware addresses for the source and the destination node.
  • Also referred to as the hardware, physical, link layer, or MAC address.

List all network interfaces with their ethernet addresses:

ip addr | grep ether

Subnetting

  • Network address space is divided into several smaller and more manageable logical subnetworks (subnets).
  • Benefits:
    • Reduced network traffic
    • Improved network performance
    • Decentralized and easier administration
  • Subnetting uses the node (host) bits only, which results in the reduction of usable addresses.
  • All nodes in a given subnet have the same subnet mask.
  • Each subnet acts as an isolated network and requires a router to talk to other subnets.
  • The first and the last IP address in a subnet are reserved. The first address points to the subnet itself, and the last address is the broadcast address.
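A quick worked example: a /26 subnet leaves 6 node bits, so each subnet holds 2^6 = 64 addresses, of which 62 are usable. If the ipcalc utility is installed, it can confirm the math:

# 192.168.10.0/26 -> network 192.168.10.0 (reserved), broadcast 192.168.10.63 (reserved)
# usable range: 192.168.10.1 - 192.168.10.62 (62 hosts)
ipcalc -b -n -m 192.168.10.20/26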

IPv4

View current ipv4 address:

ip addr

Classful Network Addressing

See Classful ipv4

IPv6 Address

See ipv6

The ip addr command also shows IPv6 addresses for the interfaces:

[root@server200 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:00:27:b9:4e:ef brd ff:ff:ff:ff:ff:ff
    inet 172.16.1.155/20 brd 172.16.15.255 scope global dynamic noprefixroute enp0s3
       valid_lft 79061sec preferred_lft 79061sec
    inet6 fe80::a00:27ff:feb9:4eef/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever

Tools:

  • ping6
  • traceroute6
  • tracepath6

Protocols

  • Defined in /etc/protocols
  • Well known ports are defined in /etc/services

cat /etc/protocols
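To look up the well-known port assigned to a service, grep /etc/services; for example:

grep -w ^ssh /etc/services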

TCP and UDP Protocols

See IP Transport and Applications and tcp_ip_basic

ICMP

Send two pings to server 20

ping -c2 192.168.0.120

Ping the server’s loopback interface:

ping 127.0.0.1

Send a traceroute to server 20

traceroute 192.168.0.120

Or:

tracepath 192.168.0.120

ICMPv6

  • IPv6 version of ICMP
  • enabled by default

Ping and ipv6 address:

ping6 

Trace a route to an IPv6 address:

tracepath6
traceroute6

Show IPv6 addresses:

ip addr | grep inet6

Network Manager Service

Default service in RHEL for network:

  • interface and connection configuration.
  • Administration.
  • Monitoring.

NetworkManager daemon

  • Responsible for keeping interfaces and connection up and active.
  • Includes:
    • nmcli
    • nmtui (text-based)
    • nm-connection-editor (GUI)
  • Does not manage loopback interfaces.

Interface Connection Profiles

  • Configuration file on each interface that defines IP assignments and other relevant parameters for it.

  • The networking subsystem reads this file and applies the settings at the time the connection is activated.

  • Connection configuration files (or connection profiles) are stored in a central location under the /etc/NetworkManager/system-connections directory.

  • The filenames are identified by the interface connection names with nmconnection as the extension.

  • Some instances of connection profiles are: enp0s3.nmconnection, ens160.nmconnection, and em1.nmconnection.

    On server10 and server20, the device name for the first interface is enp0s3 with connection name enp0s3 and relevant connection information stored in the enp0s3.nmconnection file.

This connection was established at the time of RHEL installation. The current content of the file from server10 is presented below:

[root@server200 system-connections]# cat /etc/NetworkManager/system-connections/enp0s3.nmconnection 
[connection]
id=enp0s3
uuid=45d6a8ea-6bd7-38e0-8219-8c7a1b90afde
type=ethernet
autoconnect-priority=-999
interface-name=enp0s3
timestamp=1710367323

[ethernet]

[ipv4]
method=auto

[ipv6]
addr-gen-mode=eui64
method=auto

[proxy]
  • Each section defines a set of networking properties for the connection.

Directives

id

  • Any description given to this connection. The default matches the interface name.

uuid

  • The UUID associated with this connection

type

  • Specifies the type of this connection

autoconnect-priority

  • If the connection is set to autoconnect, connections with higher priority will be preferred. A higher number means higher priority. The range is between -999 and 999 with 0 being the default.

interface-name

  • Specifies the device name for the network interface

timestamp

  • The time, in seconds since the Unix Epoch that the connection was last activated successfully. This field is automatically populated each time the connection is activated.

address1/method

  • Specifies the static IP for the connection if the method property is set to manual. /24 represents the subnet mask.

addr-gen-mode/method

  • Generates an IPv6 address based on the hardware address of the interface.

View additional directives:

man nm-settings
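
The same properties can also be queried without opening the file; a quick sketch using the enp0s3 connection shown above:

nmcli -f connection.id,connection.uuid,connection.interface-name c show enp0s3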

Naming rules for devices are governed by the udevd service, based on:

  • Device location
  • Topology
  • Settings in firmware
  • Virtualization layer

Understanding Hosts Table

See DNS and Time Synchronization

/etc/hosts file

  • Table used to maintain hostname to IP mapping for systems on the local network, allowing us to access a system by simply employing its hostname.

Each row in the file contains an IP address in column 1 followed by the official (or canonical) hostname in column 2, and one or more optional aliases thereafter.

EXAM TIP: In the presence of an active DNS with all hostnames resolvable, there is no need to worry about updating the hosts file.

As expressed above, the hosts file is commonly used on small networks; it must be updated on each individual system to reflect any changes, or name-based connectivity between systems may suffer.

Networking DIY Challenge Labs

Lab: Update Hosts Table and Test Connectivity.

  1. Add both server10 and server20’s interfaces to both servers’ /etc/hosts files:
192.168.0.110  server10.example.com  server10 <-- This is an alias
192.168.0.120  server20.example.com  server20   
172.10.10.110  server10s8.example.com   server10s8
172.10.10.120  server20s8.example.com   server20s8
  1. Send 2 packets from server10 to server20’s IP address:
ping -c2 192.168.0.120
  1. Send 2 pings from server10 to server20’s hostname:
ping -c2 server20

Lab 15-1: Add New Interface and Configure Connection Profile with nmcli

  • Add a third network interface to rhel9server40 in VirtualBox.
  • As user1 with sudo on server40, run ip a and verify the addition of the new interface.
  • Use the nmcli command and assign IP 192.168.0.40/24 and gateway 192.168.0.1
[root@server40 ~]# nmcli c a type Ethernet ifname enp0s8 con-name enp0s8 ip4 192.168.0.40/24 gw4 192.168.0.1
  • Deactivate and reactivate this connection manually.
[root@server40 ~]# nmcli c d enp0s8
Connection 'enp0s8' successfully deactivated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/3)

[root@server40 ~]# nmcli c s
NAME    UUID                                  TYPE      DEVICE 
enp0s3  6e75a5e4-869b-3ed1-bdc4-c55d2d268285  ethernet  enp0s3 
lo      66809437-d3fa-4104-9777-7c3364b943a9  loopback  lo     
enp0s8  9a32e279-84c2-4bba-b5c5-82a04f40a7df  ethernet  --     
[root@server40 ~]# nmcli c u enp0s8
Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/4)

[root@server40 ~]# nmcli c s
NAME    UUID                                  TYPE      DEVICE 
enp0s3  6e75a5e4-869b-3ed1-bdc4-c55d2d268285  ethernet  enp0s3 
enp0s8  9a32e279-84c2-4bba-b5c5-82a04f40a7df  ethernet  enp0s8 
lo      66809437-d3fa-4104-9777-7c3364b943a9  loopback  lo   
  • Add entry server40 to server30’s hosts table.
[root@server30 ~]# vim /etc/hosts
[root@server30 ~]# ping server40
PING server40.example.com (192.168.0.40) 56(84) bytes of data.
64 bytes from server40.example.com (192.168.0.40): icmp_seq=1 ttl=64 time=3.20 ms
64 bytes from server40.example.com (192.168.0.40): icmp_seq=2 ttl=64 time=0.628 ms
64 bytes from server40.example.com (192.168.0.40): icmp_seq=3 ttl=64 time=0.717 ms
^C
--- server40.example.com ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2009ms
rtt min/avg/max/mdev = 0.628/1.516/3.204/1.193 ms

Lab: Add New Interface and Configure Connection Profile Manually (server30)

Add a third network interface to RHEL9server30 in VirtualBox.

run ip a and verify the addition of the new interface.

Use the nmcli command and assign IP 192.168.0.30/24 and gateway 192.168.0.1

nmcli c a type Ethernet ifname enp0s8 con-name enp0s8 ip4 192.168.0.30/24 gw4 192.168.0.1

Deactivate and reactivate this connection manually. Then add an entry for server30 to the hosts table of server40.

[root@server30 system-connections]# nmcli c d enp0s8
Connection 'enp0s8' successfully deactivated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/3)

[root@server30 system-connections]# nmcli c u enp0s8
Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/4)

/etc/hosts

192.168.0.30 server30.example.com server30

Ping tests to server30 from server40:

[root@server40 ~]# ping server30
PING server30.example.com (192.168.0.30) 56(84) bytes of data.
64 bytes from server30.example.com (192.168.0.30): icmp_seq=1 ttl=64 time=1.59 ms
64 bytes from server30.example.com (192.168.0.30): icmp_seq=2 ttl=64 time=0.474 ms
^C
--- server30.example.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.474/1.032/1.590/0.558 ms

Or create the profile manually and restart NetworkManager:

[connection]
id=enp0s8
type=ethernet
interface-name=enp0s8
uuid=92db4c65-2f13-4952-b81f-2779b1d24a49

[ethernet]

[ipv4]
method=manual
address1=10.1.13.3/24,10.1.13.1

[ipv6]
addr-gen-mode=default
method=auto

[proxy]
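
After creating the profile manually, NetworkManager expects the file to be root-owned with 600 permissions before it will load it; a minimal sketch, assuming the default profile location shown earlier:

chmod 600 /etc/NetworkManager/system-connections/enp0s8.nmconnection
nmcli c reload
nmcli c up enp0s8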

Administration Tools

ip

  • Display, monitor, and manage network interfaces, routing, connections, traffic, etc.

ifup

  • Brings up an interface

ifdown

  • Brings down an interface

nmcli

  • Creates, updates, deletes, activates, and deactivates a connection profile.

nmcli command

  • Create, view, modify, remove, activate, and deactivate network connections.
  • Control and report network device status.
  • Supports abbreviation of commands.

Operates on 7 different object categories.

  1. general
  2. networking
  3. connection (c)(con)
  4. device (d)(dev)
  5. radio
  6. monitor
  7. agent
[root@server200 system-connections]# nmcli --help
Usage: nmcli [OPTIONS] OBJECT { COMMAND | help }

OPTIONS
  -a, --ask                                ask for missing parameters
  -c, --colors auto|yes|no                 whether to use colors in output
  -e, --escape yes|no                      escape columns separators in values
  -f, --fields <field,...>|all|common      specify fields to output
  -g, --get-values <field,...>|all|common  shortcut for -m tabular -t -f
  -h, --help                               print this help
  -m, --mode tabular|multiline             output mode
  -o, --overview                           overview mode
  -p, --pretty                             pretty output
  -s, --show-secrets                       allow displaying passwords
  -t, --terse                              terse output
  -v, --version                            show program version
  -w, --wait <seconds>                     set timeout waiting for finishing operations

OBJECT
  g[eneral]       NetworkManager's general status and operations
  n[etworking]    overall networking control
  r[adio]         NetworkManager radio switches
  c[onnection]    NetworkManager's connections
  d[evice]        devices managed by NetworkManager
  a[gent]         NetworkManager secret agent or polkit agent
  m[onitor]       monitor NetworkManager changes

3. connection

  • Activates, deactivates, and administers network connections.

Options:

  • show (list connections)
  • up/down (Brings connection up or down)
  • add(a) (adds a connection)
  • edit (edit connection or add a new one)
  • modify (modify properties of a connection; see the sketch after this list)
  • delete(d) (delete a connection)
  • reload (re-read all connection profiles)
  • load (re-read a connection profile)
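
For example, the modify option can change a profile’s addressing in place (a sketch; the values are illustrative):

sudo nmcli c modify enp0s8 ipv4.addresses 192.168.0.45/24 ipv4.method manual
sudo nmcli c up enp0s8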

4. Device

Options:

  • status (Displays device status)
  • show (Displays info about device(s))

Show all connections, inactive or active:

nmcli c s

Deactivate the connection enp0s8:

sudo nmcli c down enp0s8

Note:

The connection profile gets detached from the device, disabling the connection.

Activate the connection enp0s8:

$ sudo nmcli c up enp0s8
# connection profile re-attaches to the device.

Display the status of all network devices:

nmcli d s

Lab: Add Network Devices to server10 and one to server20 using VirtualBox

  1. Shut down your servers (follow each step for both servers)
sudo shutdown now
  1. Add network interface in Virtualbox then power on the VMs
Select machine > settings > Network > Adapter 2 > Enable Network Adapter > Internal Network > ok
  1. Verify the new interfaces:
ip a

Lab: Configure New Network Connection Using nmcli (server20)

  1. Verify the interface that was added from virtualbox:
nmcli d status | grep enp
  1. Add connection profile and attach it to the interface:
sudo nmcli c a type Ethernet ifname enp0s8 con-name enp0s8 ip4 172.10.10.120/24 gw4 172.10.10.1
  1. Confirm connection status
nmcli d status | grep enp
  1. Verify ip address
ip a
  1. Check the content of the connection profile
cat /etc/NetworkManager/system-connections/enp0s8.nmconnection

Resources for Passing CCNA

There are a lot of great CCNA resources out there. This list does not include all of them, only the ones that I personally used to pass the CCNA 200-301 exam.

Materials for CCNA are generally separated into 5 categories:

  • Books
  • Video courses
  • Labs
  • Practice test
  • Flashcards

Books

Wendell Odom OCG Official cert guide library

To me, this is the king of CCNA study materials. Some people do not like reading, but this will give you more depth than any other resource on this list. Link.

Todd Lammle Books

Yes, I read both the OCG and Todd Lammle books cover to cover. No, I do not recommend doing this. Todd has a great way of adding humor to networking. If you need to build up your networking knowledge from the ground up, these books are great. Link.

Video Courses

CBT Nuggets

Jeremy Cioara makes learning networking so much fun. This was a great course but is not enough to pass the exam on its own. Also, a CBT Nuggets monthly subscription will set you back $59 per month. Link.

Jeremy’s IT Lab

Jeremy’s IT Lab course was the most informative for me. Jeremy is really great at explaining the more complex topics. His course also includes Packet Tracer labs and an in-depth Anki flashcard deck for free. Link.

Labs

David Bombal’s Packet Tracer Labs

These labs will really make you think, although they do stray from the exam objectives a bit. Link.

Jeremy’s IT labs

These were my favorite labs by far. Very easy to set up with clear instructions and video explanations. Link.

Practice test

Boson Exsim

I can’t stress this enough: if there is one resource you invest some money into, it’s the Boson practice exams. This is a test simulator that is very close to what the actual test will be like. ExSim comes with 3 exams.

After taking one of these practice tests, you will get a breakdown of your scores per category. You will also get to go through all of your questions and see detailed explanations for why each answer is right or wrong.

These practice exams were crucial for me to understand where my knowledge gaps were. Link.

Subnettingpractice.com

You can learn subnetting pretty well, then forget some of the steps a month later and have to learn it all over again. It was very helpful to go over some of these subnetting questions once in a while. Link.

Flashcards

Anki Deck

These are the only flashcards I used. It is very nice not to have to create your own flashcards. Having the Anki app on your phone is very convenient. You can study whenever you have a few minutes of downtime.

Anki also uses spaced repetition. It will show you harder flashcards more often based on how you rate their difficulty.

This particular deck goes along with the OCG. You can filter by chapter and add more as you get through the book.

I will be using Anki flashcards for every exam in the future. Link.

My Top 3

Be careful not to use too many resources; you may get a bit overwhelmed, especially if this is your first certification like it was for me. You will be building study habits and learning how to read questions correctly, so focus on quality over quantity.

If I had to study for the CCNA again, I would use these three resources:

  • OCG
  • Boson Exsim
  • Anki Flashcards

If you like these posts, please let me know so I can keep making more like them!

Time Synchronization

Network Time Protocol (NTP)

  • Networking protocol for synchronizing the system clock with remote time servers for accuracy and reliability.
  • Having steady and exact time on networked systems allows time-sensitive applications, such as authentication and email applications, backup and scheduling tools, financial and billing systems, logging and monitoring software, and file and storage sharing protocols, to function with precision.
  • Sends a stream of messages to configured time servers and binds itself to the one with the least delay in its responses (the most accurate), which may or may not be the closest distance-wise.
  • The client system maintains a record of its clock drift in a file and references this file to gradually correct inaccuracy.

Chrony

  • RHEL 9 implementation of NTP
  • Uses the UDP port 123.
  • If enabled, it starts at system boot and continuously operates to keep the system clock in sync with a more accurate source of time.
  • Performs well on computers that are occasionally connected to the network, attached to busy networks, do not run all the time, or have variations in temperature.

Time Sources

  • A time source is any reference device that acts as a provider of time to other devices.
  • The most precise sources of time are the atomic clocks.
  • They use Coordinated Universal Time (UTC) for time accuracy.
  • They produce radio signals that radio clocks use for time propagation to computer servers and other devices that require correctness in time.
  • When choosing a time source for a network, preference should be given to the one that takes the least amount of time to respond.
  • This server may or may not be closest physically.

The common sources of time employed on computer networks are:

  • The local system clock
  • Internet-based public time server
  • Radio clock.

local system clock

  • Can be used as a provider of time.
  • This requires the maintenance of correct time on the server either manually or automatically via cron.
  • Keep in mind that this server has no way of synchronizing itself with a more reliable and precise external time source.
  • Using the local system clock as a time server is the least recommended option.

Public time server

  • Several public time servers are available over the Internet for general use (visit www.ntp.org for a list).
  • These servers are typically operated by government agencies, research and scientific organizations, large software vendors, and universities around the world.
  • One of the systems on the local network is identified and configured to receive time from one or more public time servers.
  • Preferred over the use of the local system clock.

The official ntp.org site also provides a common pool called pool.ntp.org for vendors and organizations to register their own NTP servers voluntarily for public use. Examples:

  • rhel.pool.ntp.org and ubuntu.pool.ntp.org for distribution-specific pools,
  • ca.pool.ntp.org and oceania.pool.ntp.org for country and continent/region-specific pools.

Under these sub-pools, the owners maintain multiple time servers with enumerated hostnames such as 0.rhel.pool.ntp.org, 1.rhel.pool.ntp.org, 2.rhel.pool.ntp.org, and so on.

Radio clock

  • Regarded as the perfect provider of time
  • Receives time updates straight from an atomic clock.
  • Global Positioning System (GPS), WWVB, and DCF77 are some popular radio clock methods.
  • A direct use of signals from these sources requires connectivity of some hardware to the computer identified to act as an organizational or site-wide time server.

NTP Roles

  • A system can be configured to operate as a primary server, secondary server, peer, or client.

Primary server

  • Gets time from a time source and provides time to secondary servers or directly to clients.

secondary server

  • Receives time from a primary server and can be configured to furnish time to a set of clients to offload the primary or for redundancy.
  • The presence of a secondary server on the network is optional but highly recommended.

peer

  • Reciprocates time with an NTP server.
  • All peers work at the same stratum level, and all of them are considered equally reliable.

client

  • Receives time from a primary or a secondary server and adjusts its clock accordingly.

Stratum Levels

  • Time sources are categorized hierarchically into several levels that are referred to as stratum levels based on their distance from the reference clocks (atomic, radio, and GPS).

  • The reference clocks operate at stratum level 0 and are the most accurate provider of time with little to no delay.

  • Besides stratum 0, there are fifteen additional levels that range from 1 to 15.

  • Of these, servers operating at stratum 1 are considered perfect, as they get time updates directly from a stratum 0 device.

  • A stratum 0 device cannot be used on the network directly. It is attached to a computer, which is then configured to operate at stratum 1.

  • Servers functioning at stratum 1 are called time servers and they can be set up to deliver time to stratum 2 servers.

  • Similarly, a stratum 3 server can be configured to synchronize its time with a stratum 2 server and deliver time to the next lower-level servers, and so on.

  • Servers sharing the same stratum can be configured as peers to exchange time updates with one another.

There are numerous public NTP servers available for free that synchronize time. They normally operate at higher stratum levels such as 2 and 3.

Chrony Configuration File

/etc/chrony.conf

  • key configuration file for the Chrony service
  • Referenced by the Chrony daemon at startup to determine the sources to synchronize the clock, the log file location, and other details.
  • Can be modified by hand to set or alter directives as required.
  • Common directives used in this file along with real or mock values:

driftfile

  • /var/lib/chrony/drift
  • Indicates the location and name of the drift file used to record the rate at which the system clock gains or loses time. This data is used by Chrony to maintain local system clock accuracy.

logdir

  • /var/log/chrony
  • Sets the directory location to store the log files in

pool

  • 0.rhel.pool.ntp.org iburst
  • Defines the hostname that represents a pool of time servers. Chrony binds itself with one of the servers to get updates. In case of a failure of that server, it automatically switches the binding to another server within the pool.
  • The iburst option directs the Chrony service to send the first four update requests to the time server every 2 seconds. This allows the daemon to quickly bring the local clock closer to the time server at startup.

server

  • server20s8.example.com iburst
  • Defines the hostname or IP address of a single time server.

server

  • 127.127.1.0
  • The IP 127.127.1.0 is a special address that represents the local system clock.

peer

  • prodntp1.abc.net
  • Identifies the hostname or IP address of a time server running at the same stratum level. A peer both provides time to and receives time from the other server.
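
Taken together, these directives form a minimal /etc/chrony.conf sketch (the values are the mock examples above; a real file usually lists only the source types you actually use):

 driftfile /var/lib/chrony/drift
 logdir /var/log/chrony
 pool 0.rhel.pool.ntp.org iburst
 server server20s8.example.com iburst
 peer prodntp1.abc.net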

man chrony.conf for details.

Chrony Daemon and Command

  • Chrony service runs as a daemon program called chronyd that handles time synchronization in the background.
  • Uses /etc/chrony.conf file at startup and sets its behavior accordingly.
  • If the local clock requires a time adjustment, Chrony takes multiple small steps toward minimizing the gap rather than doing it abruptly in a single step.

The Chrony service has a command line program called chronyc.

chronyc command

  • Monitor the performance of the service and control its runtime behavior. Subcommands:

sources

  • List current sources of time

tracking

  • view performance statistics

Lab: Configure NTP Client (server10)

  • Install the Chrony software package and activate the service without making any changes to the default configuration.
  • Validate the binding and operation.

1. Install the Chrony package using the dnf command:

[root@server10 ~]# sudo dnf -y install chrony

2. Ensure that preconfigured public time server entries are present in the /etc/chrony.conf file:

[root@server1 ~]# grep -E 'pool|server' /etc/chrony.conf | grep -v ^#
pool 2.rhel.pool.ntp.org iburst

There is a single pool entry set in the file by default. This pool name is backed by multiple NTP servers behind the scenes.

3. Start the Chrony service and set it to autostart at reboots: sudo systemctl --now enable chronyd

4. Examine the operational status of Chrony: sudo systemctl status chronyd --no-pager -l

5. Inspect the binding status using the sources subcommand with chronyc:

[root@server1 ~]# chronyc sources
MS Name/IP address         Stratum Poll Reach LastRx Last sample               
===============================================================================
^+ ntp7-2.mattnordhoffdns.n>     2   8   377   324  -3641us[-3641us] +/-   53ms
^* 2600:1700:4e60:b983::123      1   8   377   430   +581us[  +84us] +/-   36ms
^- 2600:1700:5a0f:ee00::314>     2   8   377    58  -1226us[-1226us] +/-   50ms
^- 2603:c020:6:b900:ed2f:b4>     2   9   377   320   +142us[ +142us] +/-   73ms

^ means the source is a server; * implies the current association with the source.

Poll

  • polling rate (6 means 64 seconds)

Reach

  • reachability register (377 indicates a valid response was received)

Last sample

  • how long ago the last sample was received, and the offset between the local clock and the source at the last measurement

6. Display the clock performance using the tracking subcommand with chronyc:

[root@server1 ~]# chronyc tracking
Reference ID    : 2EA39303 (2600:1700:4e60:b983::123)
Stratum         : 2
Ref time (UTC)  : Sun Jun 16 12:05:45 2024
System time     : 286930.187500000 seconds slow of NTP time
Last offset     : -0.000297195 seconds
RMS offset      : 2486.306152344 seconds
Frequency       : 3.435 ppm slow
Residual freq   : -0.034 ppm
Skew            : 0.998 ppm
Root delay      : 0.064471066 seconds
Root dispersion : 0.003769779 seconds
Update interval : 517.9 seconds
Leap status     : Normal

EXAM TIP: You will not have access to the outside network during the exam. You will need to point your system to an NTP server available on the exam network. Simply comment the default server/pool directive(s) and add a single directive “server <hostname>” to the file. Replace <hostname> with the NTP server name or its IP address as provided.
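
In practice that amounts to something like the following (a sketch; <hostname> stays whatever the exam environment provides):

[root@server10 ~]# vim /etc/chrony.conf      # comment out the default pool/server lines; add: server <hostname> iburst
[root@server10 ~]# systemctl restart chronyd
[root@server10 ~]# chronyc sources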

timedatectl command

  • Modify the date, time, and time zone.
  • Outputs the local time, Universal time, RTC time (real-time clock, a battery-backed hardware clock located on the system board), time zone, and the status of the NTP service by default:
[root@server10 ~]# timedatectl
               Local time: Mon 2024-07-22 10:55:11 MST
           Universal time: Mon 2024-07-22 17:55:11 UTC
                 RTC time: Mon 2024-07-22 17:55:10
                Time zone: America/Phoenix (MST, -0700)
System clock synchronized: yes
              NTP service: active
          RTC in local TZ: no
  • Requires that the NTP/Chrony service is deactivated in order to make time adjustments.

Turn off NTP and verify:

[root@server10 ~]# timedatectl set-ntp false
[root@server10 ~]# timedatectl | grep NTP
              NTP service: inactive

Modify the current date and confirm:

[root@server10 ~]# timedatectl set-time 2024-07-22

[root@server10 ~]# timedatectl
               Local time: Mon 2024-07-22 00:00:30 MST
           Universal time: Mon 2024-07-22 07:00:30 UTC
                 RTC time: Mon 2024-07-22 07:00:30
                Time zone: America/Phoenix (MST, -0700)
System clock synchronized: no
              NTP service: inactive
          RTC in local TZ: no

Change both date and time in one go:

[root@server10 ~]# timedatectl set-time "2024-07-22 11:00"

[root@server10 ~]# timedatectl
               Local time: Mon 2024-07-22 11:00:06 MST
           Universal time: Mon 2024-07-22 18:00:06 UTC
                 RTC time: Mon 2024-07-22 18:00:06
                Time zone: America/Phoenix (MST, -0700)
System clock synchronized: no
              NTP service: inactive
          RTC in local TZ: no

Reactivate NTP:

[root@server10 ~]# timedatectl set-ntp true

[root@server10 ~]# timedatectl | grep NTP
              NTP service: active

date command

  • view or modify the system date and time.

View current date and time:

[root@server10 ~]# date
Mon Jul 22 11:03:00 AM MST 2024

Change the date and time:

[root@server10 ~]# date --set "2024-07-22 11:05"
Mon Jul 22 11:05:00 AM MST 2024

Return the system to the current date and time:

[root@server10 ~]# timedatectl set-ntp false
[root@server10 ~]# timedatectl set-ntp true

DNS and Time Sync DIY Labs

Lab: Configure Chrony

  • Install Chrony and mark the service for autostart on reboots.

 systemctl enable --now chronyd

  • Edit the Chrony configuration file and comment all line entries that begin with “pool” or “server”.

[root@server10 ~]# vim /etc/chrony.conf
  • Go to the end of the file, and add a new line “server 127.127.1.0”.

  • Start the Chrony service and run chronyc sources to confirm the binding.

[root@server10 ~]# systemctl restart chronyd

[root@server10 ~]# chronyc sources
MS Name/IP address         Stratum Poll Reach LastRx Last sample               
===============================================================================
^? 127.127.1.0                   0   6     0     -     +0ns[   +0ns] +/-    0ns

Lab: Modify System Date and Time

  • Execute the date and timedatectl commands to check the current system date and time.
[root@server10 ~]# date
Mon Jul 22 11:37:54 AM MST 2024

[root@server10 ~]# timedatectl
               Local time: Mon 2024-07-22 11:37:59 MST
           Universal time: Mon 2024-07-22 18:37:59 UTC
                 RTC time: Mon 2024-07-22 18:37:59
                Time zone: America/Phoenix (MST, -0700)
System clock synchronized: no
              NTP service: active
          RTC in local TZ: no
  • Identify the distinctions between the two outputs.

  • Use timedatectl and change the system date to a future date.

[root@server10 ~]# timedatectl set-time 2024-07-23
Failed to set time: Automatic time synchronization is enabled

[root@server10 ~]# timedatectl set-ntp false

[root@server10 ~]# timedatectl set-time "2024-07-23"
  • Issue the date command and change the system time to one hour ahead of the current time.
[root@server10 ~]# date -s "2024-07-22 12:41"
Mon Jul 22 12:41:00 PM MST 2024
  • Observe the new date and time with both commands.

[root@server10 ~]# date
Mon Jul 22 12:41:39 PM MST 2024
[root@server10 ~]# timedatectl
               Local time: Mon 2024-07-22 12:41:41 MST
           Universal time: Mon 2024-07-22 19:41:41 UTC
                 RTC time: Tue 2024-07-23 07:01:41
                Time zone: America/Phoenix (MST, -0700)
System clock synchronized: no
              NTP service: inactive
          RTC in local TZ: no
  • Reset the date and time to the current actual time by disabling and re-enabling the NTP service using the timedatectl command.
[root@server10 ~]# timedatectl set-ntp true

Toggle PoE on a Juniper Switch

Disable PoE on port ge-0/0/0 and commit:

set poe interface ge-0/0/0 disable

commit

Re-enable it by rolling back the change and committing again:

rollback 1

commit

What to Learn After CCNA

It’s easy to get overwhelmed with options after completing your CCNA. What do you learn next? If you are trying to get a job as a Network Engineer, you will want to check this out.

I went through dozens of job listings that mentioned CCNA, tallied up the main devices/vendors, certifications, and technologies mentioned, and left out anything that wasn’t mentioned more than twice.

Core CCNA technologies such as LAN, WAN, OSPF, Spanning Tree, VLANs, etc. have been left out. The point here is to target the most sought after technologies and skills by employers. I also left out soft skills and any job that wasn’t a networking specific role.

Devices/ Vendors

Palo Alto is huge! I’m not surprised by this. Depending on the company, a network engineer may be responsible for firewall configuration and troubleshooting. It also looks like Network Engineers with a wide variety of skills are sought after.

Device/Vendor Times Mentioned
Palo Alto 9
Cisco ASA 6
Juniper 6
Office 365 5
Meraki 4
Vmware 4
Linux 4
Ansible 4
AWS 3
Wireshark 3

Technologies

Firewall comes in first again. Followed closely by VPN skills. Every interview I had for a Network Engineer position asked if I knew how to configure and troubleshoot VPNs.

Technology Times Mentioned
Firewall 19
VPN 16
Wireless 12
BGP 12
Security 12
MPLS 10
Load balancers 8
Ipsec 7
ISE 6
DNS 5
SDWAN 5
Cloud 4
TACACS+ 4
ACL 4
SIEM 4
IDS/IPS 4
RADIUS 3
ITIL 3
Ipam 3
VOIP 3
EIGRP 3
Python 3

Certifications

CCNP blew every other cert out of the water. Companies will be very interested if you are working towards this cert. Security+ comes highly recommended as well.

Certification Times Mentioned
CCNP 18
Security+ 6
JNCIA 4
JNCIP 4
Network+ 4
CCIE 4
PCNSA 3

So what do you do after CCNA?

It depends…

Are you trying to get a new job ASAP? Are there opportunities in your current role where you can leverage your new skills? Do you have some study time before you are ready to take the next step?

CCNP Enterprise is a good bet if you really want to stand out in Network Engineering interviews.

Don’t want to be a Network engineer?

Continue to build a good base of IT skills. This will open you up to a larger variety of jobs and unlock skill paths that require a good foundation.

Core skills include:

  • Linux/ Operating systems
  • Networking
  • General Cybersecurity
  • Programming/ Scripting

A good Linux certification like the RHCSA would be great for learning more about Linux, scripting, and operating systems. Security+ would be good if you want a solid foundation in cybersecurity. And Python skills will give you a gold star in any IT interview.

Don’t get paralyzed by choices.

Pick something that interests you and go for it. That is the only way to get it right. Doing what you enjoy is better than not doing anything at all because you can’t decide the best path.

Hopefully we can revisit this post after learning Python to get a much bigger sample size.

Subsections of Packages

Advanced Package Management

Package Groups

package group

  • Group of packages that serve a common purpose.
  • Can query, install, and delete as a single unit rather than dealing with packages individually.
  • Two types of package groups: environment groups and package groups.

environment groups available in RHEL 9:

  • server, server with GUI, minimal install, workstation, virtualization host, and custom operating system.
  • Listed on the software selection window during RHEL 9 installation.

Package groups include:

  • container management, smart card support, security tools, system tools, network servers, etc.

The sections below cover managing individual packages, package groups, and modules.

Individual Package Management

List, install, query, and remove packages.

Listing Available and Installed Packages

  • dnf lists available packages as well as installed packages.

Lab: list all packages available for installation from all enabled repos:

 sudo dnf repoquery

Lab: list packages that are available only from a specific repo:

 sudo dnf repoquery --repo "BaseOS"

For example, to find whether the BaseOS repo includes the zsh package.

 sudo dnf repoquery --repo BaseOS | grep zsh

Lab: list all installed packages on the system:

 sudo dnf list installed

Three columns:

  • package name
  • package version
  • repo it was installed from (@anaconda means the package was installed at the time of RHEL installation)

List all installed packages and all packages available for installation from all enabled repositories:

 sudo dnf list
  • @ sign identifies the package as installed.

List updates available for installed packages from all enabled repositories:

 sudo dnf list updates

List whether a package (bc, for instance) is installed or available for installation from any enabled repository:

 sudo dnf list bc

List all installed packages whose names begin with the string “gnome” followed by any number of characters:

 sudo dnf list installed ^gnome*

List recently added packages:

 sudo dnf list recent

Refer to the repoquery and list subsections of the dnf command manual pages for more options and examples.

Installing and Updating Packages

Installing a package:

  • creates the necessary directory structure
  • installs the required files
  • runs any post-installation steps.
  • If already installed, dnf command updates it to the latest available version.

Attempt to install a package called ypbind; dnf will update it instead if it detects the presence of an older version:

 sudo dnf install ypbind

Install or update a package called dcraw located locally at /mnt/AppStream/Packages/

 sudo dnf localinstall /mnt/AppStream/Packages/dcraw*

Update an installed package (autofs, for example) to the latest available version. Dnf will fail if the specified package is not already installed:

 sudo dnf update autofs

Update all installed packages to the latest available versions:

 sudo dnf -y update

Refer to the install and update subsections of the dnf command manual pages for more options and examples.

Exhibiting Package Information

Show:

  • release
  • size
  • whether it is installed or available for installation
  • repo name it was installed or is available from
  • short and long descriptions
  • license
  • so on

dnf info subcommand

View information about a package called autofs:

 dnf info autofs
  • Determines whether the specified package is installed or not.

Refer to the info subsection of the dnf command manual pages.

Removing Packages

Removing a package:

  • uninstalls it and removes all associated files and directory structure.
  • erases any dependencies as part of the deletion process.

Remove a package called ypbind:

 sudo dnf remove ypbind

Output

  • Resolved dependencies
  • List of the packages that it would remove.
  • Disk space that their removal would free up.
  • After confirmation, it erased the identified packages and verified their removal.
  • List of the removed packages

Refer to the remove subsection of the dnf command manual pages for more options and examples available for removing packages.

Lab: Manipulate Individual Packages

Perform management operations on a package called cifs-utils. Determine if this package is already installed and if it is available for installation. Display its information before installing it. Install the package and exhibit its information. Erase the package along with its dependencies and confirm the removal.

  1. Check whether the cifs-utils package is already installed:
 dnf list installed | grep cifs-utils
  1. Determine if the cifs-utils package is available for installation:
 dnf repoquery cifs-utils
  1. Display detailed information about the package:
 dnf info cifs-utils
  1. Install the package:
 dnf install -y cifs-utils
  1. Display the package information again:
 dnf info cifs-utils
  1. Remove the package:
 dnf remove -y cifs-utils
  1. Confirm the removal:
 dnf list installed | grep cif

Determining Provider and Searching Package Metadata

  • You can determine what package a specific file belongs to or which package comprises a certain string.

To search for packages that contain a specific file such as /etc/passwd, use the provides or the whatprovides subcommand with dnf:

 dnf provides /etc/passwd
  • Indicates the file is part of a package called setup, which was installed during RHEL installation.

  • The second instance shows that the setup package is part of the BaseOS repository.

  • Can also use a wildcard character for filename expansion.

List all packages that contain filenames beginning with “system-config” followed by any number of characters:

 dnf whatprovides /usr/bin/system-config*

To search for all the packages that match the specified string in their name or summary:

 dnf search system-config

Package Group Management

  • group subcommand
  • list, install, query, and remove groups of packages.

Listing Available and Installed Package Groups

group list subcommand:

  • list the package groups available for installation from either or both repos
  • list the package groups that are already installed on the system.

List all available and installed package groups from all repositories:

 dnf group list

output:

  • two categories of package groups:
    • Environment group
    • Package groups

Environment group:

  • Larger collection of RHEL packages that provides all necessary software to build the operating system foundation for a desired purpose.

Package group

  • Small bunch of RHEL packages that serve a common purpose.
  • Saves time on the deployment of individual and dependent packages.
  • Output shows installed and available package groups.

Display the number of installed and available package groups:

 sudo dnf group summary

List all installed and available package groups including those that are hidden:

 sudo dnf group list hidden

Try group list with --installed and --available options to narrow down the output list.

 sudo dnf group list --installed

List all packages that a specific package group such as Base contains:

 sudo dnf group info Base

-v option with the group info subcommand for more information.

Review group list and group info subsections of the dnf man pages.

Installing and Updating Package Groups

  • Creates the necessary directory structure for all the packages included in the group and all dependent packages.
  • Installs the required files.
  • Runs any post-installation steps.
  • Attempts to update all the packages included in the group to the latest available versions.

Install a package group called Emacs; dnf will update it if it detects an older version:

 sudo dnf -y groupinstall emacs

Update the smart card support package group to the latest version:

 dnf groupupdate "Smart Card Support"

Refer to the group install and group update subsections of the dnf command manual pages for more details.

Removing Package Groups

  • Uninstalls all the included packages and deletes all associated files and directory structure.
  • Erases any dependencies

Erase the smart card support package group that was installed:

 sudo dnf -y groupremove 'smart card support'

Refer to the remove subsection of the dnf command manual pages for more details.

Lab: Manipulate Package Groups

Perform management operations on a package group called system tools. Determine if this group is already installed and if it is available for installation. List the packages it contains and install it. Remove the group along with its dependencies and confirm the removal.

  1. Check whether the system tools package group is already installed:
 dnf group list installed
  1. Determine if the system tools group is available for installation:
 dnf group list available

The group name is exhibited at the bottom of the list under the available groups.

  1. Display the list of packages this group contains:
 dnf group info 'system tools'
  • All of the packages will be installed as part of the group installation.
  1. Install the group:
 sudo dnf group install 'system tools'
  1. Remove the group:
 sudo dnf group remove 'system tools' -y
  1. Confirm the removal:
 dnf group list installed

Application Streams and Modules

Application Streams

  • Introduced in RHEL 8.
  • Employs a modular approach to organize multiple versions of a software application alongside its dependencies to be available for installation from a single repository.

module

  • Logical set of application packages that includes everything required to install it, including the executables, libraries, documentation, tools, and utilities as well as dependent components.
  • Modularity gives the flexibility to choose the version of software based on need.
  • In older RHEL releases, each version of a package would have to come from a separate repository. (This has changed in RHEL 8.)
  • Now modules of a single application with different versions can be stored and made available for installation from a common repository.
  • The package management tool has also been enhanced to manipulate modules.
  • RHEL 9 is shipped with two core repositories called BaseOS and Application Stream (AppStream).

BaseOS repository

  • Includes the core set of RHEL 9 components
  • kernel, modules, bootloader, and other foundational software packages.
  • Lays the foundation to install and run software applications and programs.
  • Available in the traditional rpm format.

AppStream repository

  • Comes standard with core applications,
  • Plus several add-on applications
  • Rpm and modular format
  • Include web server software, development languages, database software, etc.

Benefits of Segregation

Why separate BaseOS components from other applications?

(1) Separates application components from the core operating system elements.
(2) Allows publishers to deliver and administrators to apply application updates more frequently.

In previous RHEL versions, an OS update would update all installed components including the kernel, service, and application components to the latest versions by default.

This could result in an unstable system or a misbehaving application due to an unwanted upgrade of one or more packages.

By detaching the base OS components from the applications, either of the two can be updated independent of the other.

This provides enhanced flexibility in tailoring the system components and application workloads without impacting the underlying stability of the system.

Module Streams

  • Collection of packages organized by version
  • Each module can have multiple streams
  • Each stream receives updates independent of the other streams
  • Stream can be enabled or disabled.

enabled stream

  • Allows the packages it contains to be queried or installed
  • Only one stream of a specific module can be enabled at a time
  • Each module has a default stream, which provides the latest or the recommended version.
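
Streams can also be enabled or disabled explicitly, without installing anything; a quick sketch using the postgresql module that appears in the labs below:

 sudo dnf -y module enable postgresql:15
 sudo dnf -y module disable postgresql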

Module Profiles

  • List of recommended packages organized for purpose-built, convenient deployments to support a variety of use cases such as:
  • Minimal, development, common, client, server, etc.
  • A profile may also include packages from the BaseOS repository or the dependencies of the stream
  • Each module stream can have zero, one, or more profiles associated with it with only one of them marked as the default.

Module Management

Modules are special package groups usually representing an application, a language runtime, or a set of tools. They are available in one or multiple streams, which usually represent a major version of a piece of software and give you an option to choose what versions of packages you want to consume. https://docs.fedoraproject.org/en-US/modularity/using-modules/

Modules are a way to deliver different versions of software (such as programming languages, databases, or web servers) independently of the base operating system’s release cycle.

Each module can contain multiple streams, representing different versions or configurations of the software. For example, a module for Python might have streams for Python 2 and Python 3.

module dnf subcommand

  • list, enable, install, query, remove, and disable modules.

Listing Available and Installed Modules

List all modules along with their stream, profile, and summary information available from all configured repos:

 dnf module list

Limit the output to a list of modules available from a specific repo such as AppStream by adding --repo AppStream:

 dnf module list --repo AppStream

Output:

  • default (d)
  • enabled (e)
  • disabled (x)
  • installed (i)

List all the streams for a specific module such as ruby and display their status:

 dnf module list ruby

Modify the above and list only the specified stream 3.3 for the module ruby

 dnf module list ruby:3.3

List all enabled module streams:

 dnf module list --enabled

Similarly, you can use the --installed and --disabled options with dnf module list to output only the installed or the disabled streams.

Refer to the module list subsection of the dnf command manual pages.

Installing and Updating Modules

Installing a module

  • Creates directory tree for all packages included in the module and all dependent packages.
  • Installs required files for the selected profile.
  • Runs any post-installation steps.
  • If the module being installed, or a part of it, is already present, the command attempts to update all the packages included in the profile to the latest available versions.

Install the perl module using its default stream and default profile:

 sudo dnf -y module install perl

Update a module called squid to the latest version:

 sudo dnf module update squid -y

Install the profile “common” with stream “rhel9” for the container-tools module: (module:stream/profile)

 sudo dnf module install container-tools:rhel9/common

Displaying Module Information

  • Shows
    • Name, stream, version, list of profiles, default profile, repo name module was installed or is available from
    • Summary, description, and artifacts.
  • Can be viewed by supplying module info with dnf.

List all profiles available for the module ruby:

 dnf module info --profile ruby

Limit the output to a particular stream such as 3.1:

 dnf module info --profile ruby:3.1

Refer to the module info subsection of the dnf command manual pages for more details.

Removing Modules

Removing a module will:

  • Uninstall all the included packages and
  • Delete all associated files and directory structure.
  • Erases any dependencies as part of the deletion process.

Remove the ruby module with “3.1” stream:

 sudo dnf module remove ruby:3.1

Refer to the module remove subsection of the dnf command manual pages:

Lab: Manipulate Modules

  • Perform management operations on a module called postgresql.
  • Determine if this module is already installed and if it is available for installation.
  • Show its information and install the default profile for stream “15”.
  • Remove the module profile along with any dependencies
  • confirm the removal.
  1. Check whether the postgresql module is already installed(i):
 dnf module list postgresql
  1. Display detailed information about the default stream of the module:
 dnf module info postgresql:15
  1. Install the module with default profile for stream “15”:
 sudo dnf -y module install postgresql:15
  1. Display the module information again:
 dnf module info postgresql:15
  1. Erase the module profile for the stream:
 dnf module remove -y postgresql:15
  1. Confirm the removal (back to (d)):
 dnf module info postgresql:15

Switching Module Streams

  • Typically performed to upgrade or downgrade the version of an installed module.

process:

  • uninstall the existing version provided by a stream alongside any dependencies that it has,

  • switch to the other stream

  • install the desired version.

  • Installing a module from a stream automatically enables the stream if it was previously disabled

  • you can manually enable or disable it with the dnf command.

  • Only one stream of a given module enabled at a time.

  • Attempting to enable another one for the same module automatically disables the current enabled stream.

  • dnf module list and dnf module info expose the enable/disable status of the module stream.
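
In command form, the generic switch sequence looks like this (a sketch using the ruby module; the lab below walks through the same steps):

 sudo dnf -y module remove ruby
 sudo dnf -y module reset ruby
 sudo dnf -y module install ruby:3.1 --allowerasing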

Lab: Install a Module from an Alternative Stream

  • Downgrade a module to a lower version.
  • Remove the stream ruby 3.3 and
  • Confirm its removal.
  • Manually enable the stream ruby 3.1 and confirm its new status.
  • Install the new version of the module and display its information.
  1. Check the current state of all ruby streams:
 dnf module list ruby
  1. Remove ruby 3.3:
 sudo dnf module remove ruby -y
  1. Confirm the removal:
 dnf module list ruby
  1. Reset the module so that neither stream is enabled or disabled. This will remove the enabled (e) indication from ruby 3.3
 sudo dnf module reset ruby
  1. Install the non-default profile “minimal” for ruby stream 3.1. This will auto-enable the stream.

--allowerasing

  • Instructs the command to remove installed packages for dependency resolution.
 sudo dnf module install ruby:3.1 --allowerasing
  1. Check the status of the module:
 dnf module list ruby

The dnf Command

  • Introduced in RHEL 8
  • Can use interchangeably with yum in RHEL
    • yum is a soft link to the dnf utility.
  • Requires the system to have access to either:
    • a local or remote software repository
    • a local installable package file.

Red Hat Subscription Management (RHSM) service

  • Available in the Red Hat Customer Portal

  • Offers access to official Red Hat software repositories.

  • Other web-based repositories that host packages are available

  • You can also set up a local, custom repository on your system and add packages of your choice to it.

Primary benefit of using dnf over rpm:

  • Resolve dependencies automatically

    • By identifying and installing any additional required packages
  • With multiple repositories set up, dnf extracts the software from wherever it finds it.

  • Perform abundant software administration tasks.

  • Invokes the rpm utility in the background

  • Can perform a number of operations on individual packages, package groups, and modules:

    • listing
    • querying
    • installing
    • removing
    • enabling and disabling specific module streams.

Software handling tasks that dnf can perform on packages:

  • Clean and repolist are specific to repositories.
  • Refer to the manual pages of dnf for additional subcommands, operators, options, examples, and other details.
Subcommand Description
check-update Checks if updates are available for installed packages
clean Removes cached data
history Display previous dnf activities as recorded in /var/lib/dnf/history/
info Show details for a package
install Install or update a package
list List installed and available packages
provides Search for packages that contain the specified file or feature
reinstall Reinstall the exact version of an installed package
remove Remove a package and its dependencies
repolist List enabled repositories
repoquery Runs queries on available packages
search Searches package metadata for the specified string
upgrade Updates each installed package to the latest version

dnf subcommands that are intended for operations on package groups and modules:

Subcommand Description
group install Install or updates a package group
group info Return details for a package group
group list List available package groups
group remove Remove a package group
module disable Disable a module along with all the streams it contains
module enable Enable a module along with all the streams it contains
module install Install a module profile including its packages
module info Show details for a module
module list Lists all available module streams along with their profiles and status
module remove Removes a module profile including its packages
module reset Resets a module so that it is neither in enable nor in disable state
module update Updates packages in a module profile

For the labs, you’ll need to create a definition file and configure access to the two repositories available on the RHEL 9 ISO image.

Lab: Configure Access to Pre-Built Repositories

Set up access to the two dnf repositories that are available on RHEL 9 image. (You should have already configured an automatic mounting of RHEL 9 image on /mnt.) Create a definition file for the repositories and confirm.

  1. Verify that the image is currently mounted:
 df -h | grep mnt
  1. Create a definition file called local.repo in /etc/yum.repos.d/ using the vim editor and define the following data for both repositories in it:
 [BaseOS] 
 name=BaseOS 
 baseurl=file:///mnt/BaseOS 
 gpgcheck=0 

 [AppStream] 
 name=AppStream 
 baseurl=file:///mnt/AppStream 
 gpgcheck=0
  1. Confirm access to the repositories:
 sudo dnf repolist 
  • Ignore lines 1-4 in the output that are related to subscription and system registration.
  • Lines 5 and 6 show the rate at which the command read the repo data.
  • Line 7 displays the timestamp of the last metadata check.
  • The last two lines show the repo IDs, repo names, and a count of packages they hold.
  • AppStream repo consists of 4,672 packages
  • BaseOS repo contains 1,658 packages.
  • Both repos are enabled by default and are ready for use.

dnf yum Repository

dnf repository (yum repository or a repo)

  • Digital library for storing software packages

  • Repository is accessed for package retrieval, query, update, and installation

  • The two repositories

    • BaseOS and AppStream
      • come preconfigured with the RHEL 9 ISO image.
  • Number of other repositories available on the Internet that are maintained by software publishers such as Red Hat and CentOS.

  • Can build private custom repositories for internal IT use for stocking and delivering software (see the sketch after this list).

    • Good practice for an organization with a large Linux server base, as it manages dependencies automatically and aids in maintaining software consistency across the board.
  • Can also be used to store in-house developed packages.

  • It is important to obtain software packages from authentic and reliable sources such as Red Hat to prevent potential damage to your system and to circumvent possible software corruption.

  • There is a process to create repositories and to access preconfigured repositories.

  • There are two pre-set repositories available on the RHEL 9 image. You will configure access to them via a definition file to support the exercises and lab environment.
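
A private custom repo of the kind mentioned above is just a directory of rpm files plus generated metadata; a minimal sketch, assuming the createrepo_c package and an arbitrary /var/repo/custom path:

 sudo dnf -y install createrepo_c
 sudo createrepo_c /var/repo/custom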

Repository Definition File

  • Repo definition files are located in /etc/yum.repos.d/
  • Can create local.repo file in this directory to specify local repos
  • See dnf.conf man page

Sample repo definition file and key directives:

 [BaseOS_RHEL_9]
 name= RHEL 9 base operating system components
 baseurl=file:///mnt/BaseOS
 enabled=1
 gpgcheck=0

EXAM TIP:

  • Knowing how to configure a dnf/yum repository using a URL plays an important role in completing some of the RHCSA exam tasks successfully.
  • Use two forward slash characters (//) with the baseurl directive for an FTP, HTTP, or HTTPS source.

Five lines from a sample repo file:

  • Line 1 defines an exclusive ID within the square brackets.
  • Line 2 is a brief description of the repo with the “name” directive.
  • Line 3 is the location of the repodata directory with the “baseurl” directive.
  • Line 4 shows whether this repository is active.
  • Line 5 shows if packages are to be GPG-checked for authenticity.

  • Each repository definition file must have:

    • Unique ID
    • Description
    • Baseurl directive defined
    • Other directives are set as required.
  • The baseurl directive for a local directory path is defined as file:///local_path

    • The first two forward slash characters represent the URL convention, and the third forward slash is for the absolute path to the destination directory.
    • FTP:
      • ftp://hostname/network_path
    • HTTP(S):
      • http(s)://hostname/network_path
    • The network path must include a resolvable hostname or an IP address.
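
For instance, a repo definition pointing at an HTTP source would look like this (hostname and path are illustrative):

 [Custom_HTTP]
 name=Custom packages over HTTP
 baseurl=http://repo.example.com/rhel9/custom
 gpgcheck=0
 enabled=1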

Software Management with dnf

  • Tools are available to work with individual packages as well as package groups and modules.
  • rpm command is limited to managing one package at a time.
  • dnf has an associated configuration file that can define settings to control its behavior.

dnf Configuration File

  • Key configuration file: /etc/dnf/dnf.conf
  • “main” section - Sets directives that have a global effect on dnf operations.
  • Can define separate sections for each custom repository that you plan to set up on the system.
  • Preferred location to store configuration for each custom repository in their own definition files is in /etc/yum.repos.d
    • default location created for this purpose.

Default content of this configuration file:

 cat /etc/dnf/dnf.conf
 [main]
 gpgcheck=1
 installonly_limit=3
 clean_requirements_on_remove=True
 best=True
 skip_if_unavailable=False

The above and a few other directives that you may define in the file:

best

  • Whether to install (or upgrade to) the latest available version.

clean_requirements_on_remove

  • Whether to remove dependencies that are no longer in use during a package removal.

debuglevel

  • Sets the debug level from 1 (minimum) to 10 (maximum). Default is 2. A value of 0 disables this feature.

gpgcheck

  • Whether to check the GPG signature for package authenticity. Default is 1 (enabled).

installonly_limit

  • Count of packages that can be installed concurrently. Default is 3.

keepcache

  • Whether to store the package and header cache following a successful installation. Default is 0 (disabled).

logdir

  • Sets the directory location to store the log files. Default is /var/log.

obsoletes

  • Checks for and removes any obsolete dependent packages during installs and updates. Default is 1 (enabled).

For other directives: man 5 dnf.conf
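For example, to retain downloaded packages and raise the debug verbosity, you could append two of the directives above to the [main] section of /etc/dnf/dnf.conf (a sketch; the values are illustrative):

 keepcache=1     # retain the package and header cache after installs
 debuglevel=5    # more verbose debug output (default is 2)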

Advanced Package Management DIY Labs

  1. Configure Access to RHEL 9 Repositories (Make sure the RHEL 9 ISO image is attached to the VM and mounted.) Create a definition file under /etc/yum.repos.d/, and define two blocks (one for BaseOS and another for AppStream).
  vim /etc/yum.repos.d/local.repo
 [BaseOS]
 name=BaseOS 
 baseurl=file:///mnt/BaseOS 
 gpgcheck=0 

 [AppStream]
 name=AppStream 
 baseurl=file:///mnt/AppStream 
 gpgcheck=0
  2. Verify the configuration with dnf repolist. You should see numbers in the thousands under the Status column for both repositories.
 dnf repolist -v

Lab: Install and Manage Individual Packages

  1. List all installed and available packages separately.
 dnf list --available && dnf list --installed
  2. Show which package contains the /etc/group file.
 dnf provides /etc/group
  3. Install the package httpd.
 dnf -y install httpd
  4. Review the transaction history for confirmation (stored under /var/lib/dnf/history).
 dnf history
  5. Perform the following on the httpd package:
     a. Show information:
 dnf info httpd
     b. List dependencies:
 dnf repoquery --requires httpd
     c. Remove it:
 dnf remove httpd

Lab: Install and Manage Package Groups

  1. List all installed and available package groups separately.
 dnf group list available && dnf group list installed
  2. Install package groups Security Tools and Scientific Support.
 dnf group install 'Security Tools' 'Scientific Support'
  3. Review the transaction history for confirmation.
 dnf history
  4. Show the packages included in the Scientific Support package group, and delete this group.
 dnf group info 'Scientific Support' && dnf group remove 'Scientific Support'

Lab: Install and Manage Modules

  1. List all modules. Identify which modules, streams, and profiles are installed, default, disabled, and enabled from the output.
 dnf module list
  2. Install the default stream of the development (devel) profile for module php, and verify.
 dnf module install php/devel && dnf module list php
  3. Remove the module.
 dnf module remove php

Lab: Switch Module Streams and Install Software

  1. List the postgresql module. This will display the streams and profiles, and their status.
 dnf module list postgresql
  2. Reset both streams.
 dnf module reset postgresql
  3. Enable the stream for the older version, and install its client profile.
 dnf module install postgresql:15/client

Basic Package Management

RPM (Red Hat Package Manager)

  • Specially formatted file(s) packaged together with the .rpm extension.
  • Packages included or available for RHEL are in rpm format.
  • Metadata info gets updated whenever a package is updated.

rpm command

  • Install, upgrade, remove, query, freshen, and extract packages.
  • Validate package authenticity and integrity.

Packages

  • Two types of packages: binary (or installable) and source.

Binary packages

  • Installation ready
  • Bundled for distribution.
  • Have .rpm extension.
  • Contain:
    • install scripts (pre and post)
    • Executables
    • Configuration files
    • Library files
    • Dependency information
    • Where to install files
    • Documentation
      • How to install/uninstall
      • Man pages for config files/commands
      • Other install and usage info
    • Metadata
      • Stored in central location
      • Includes:
        • Package version
        • Install location
        • Checksum values
        • List of included files and their attributes
  • Package intelligence
    • Used by package administration toolset for successful completion of the package installation process.
    • May include info on:
      • prerequisites
      • User account setup
      • Needed directories/ soft links
    • Includes reverse process for uninstall

Package Naming

Five parts to a package name:

  1. Name
  2. Version
  3. Release (revision or build)
  4. Linux version (e.g., el8)
  5. Processor architecture
    • noarch - platform independent
    • src - source code packages

  • Always has the .rpm extension
  • The .rpm extension is removed after install.

Example: openssl-1.1.1-8.el8.x86_64.rpm
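Breaking down the sample name above into its five parts plus the extension:

 openssl-1.1.1-8.el8.x86_64.rpm
 # openssl -> package name
 # 1.1.1   -> version
 # 8       -> release (revision or build)
 # el8     -> Linux version (Enterprise Linux 8)
 # x86_64  -> processor architecture
 # .rpm    -> extension (dropped once installed)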

Package Dependency

  • Dependency info is in the metadata
    • Read by package handling utilities

Package Database

  • Metadata for installed packages and package files is stored in /var/lib/rpm/
    • Package database
    • Referenced by package manipulation utilities to obtain:
      • package name and version data
      • Info about ownerships, permissions, timestamps, and file sizes that are part of the package.
      • Contain info on dependencies.
      • Aids management commands in:
        • listing and querying packages
        • Verifying dependencies and file attributes.
        • Installing new packages.
        • Upgrading and uninstalling packages.
      • Removes and replaces metadata when a package is replaced.
      • Can maintain multiple versions of a single package.

Package Management Tools

  • rpm (redhat package manager)
    • Does not automatically resolve dependencies.
  • yum (Yellowdog Updater, Modified)
    • Find, get, and install dependencies automatically.
    • softlink to dnf now.
  • dnf (dandified yum)

Package management with rpm

rpm package management tasks: query, install, upgrade, freshen, overwrite, remove, extract, validate, and verify.

  • Works with installed and installable packages.

rpm command

Query options

Query and display packages
-q (--query)

List all installed packages
-qa (--query --all)

List config files in a package
-qc (--query --configfiles)

List documentation files in a package
-qd (--query --docfiles)

Exhibit what package a file comes from
-qf (--query --file)

Show installed package info (Version, Size, Installation status, Date, Signature, Description, etc.)
-qi (--query --info)

Show installable package info (Version, Size, Installation status, Date, Signature, Description, etc.)
-qip (--query --info --package)

List all files in a package.
-ql (--query --list)

List files and packages a package depends on.
-qR (--query --requires)

List packages that provide the specified package or file.
-q --whatprovides

List packages that require the specified package or file.
-q --whatrequires

Package installation options

Remove a package
-e (--erase)

Upgrades an installed package, or installs it if not already installed.
-U (--upgrade)

Display detailed information
-v (--verbose or -vv)

Verify integrity of a package or package files
-V (--verify)

Querying packages

Query packages in the package database or at a specified location.

Installing a package

  • Creates directory structure needed
  • Installs files
  • Runs needed post installation steps
  • Installation will fail if dependencies are missing.
  • Error message will show missing dependencies.

Upgrading a package

  • Installs the package if previous version does not exist. (-U)
  • Makes a backup of affected configuration files and adds the .rpmsave extension.

Freshening a package

  • Older version must exist.
  • -F option
  • Will only work if a newer version of a package is available.

Overwriting a Package

  • Replaces existing files of a package with the same version.
  • --replacepkgs option.
  • Useful when you suspect corruption.

Removing a Package

  • Uninstalls package and associated files/ directories
  • -e Option
  • Checks to see if this package is a dependency for another program and fails if it is.

Extracting Files from an Installable Package

  • rpm2cpio command
  • -i (extract)
  • -d (create directory structure)
  • Useful for:
    • Examining package contents.
    • Replacing a corrupt or lost command.
    • Restoring a critical configuration file to its original state.

Package Integrity and Credibility

  • MD5 Checksum for verifying package integrity
  • GNU Privacy Guard Public Key (GNU Privacy Guard or GPG) for ensuring credibility of publisher.
  • PGP (Pretty Good Privacy) - commercial version of GPG.
  • --nosignature
    • Don’t verify package or header signatures when reading.
  • -K
    • Validate the integrity and credibility of a package.
  • rpmkeys command
    • Check credibility, import GPG keys, and verify packages.
  • Red Hat signs its products and updates with a GPG key.
    • Files in the installation media include the public keys for verification.
    • Copied to /etc/pki/rpm-gpg during OS installation.
      • RPM-GPG-KEY-redhat-release
        • Used for packages shipped after November 2009 and their updates.
      • RPM-GPG-KEY-redhat-beta
        • Used for beta products shipped after November 2009.
  • Import the relevant GPG key and then verify the package to check its credibility.

Viewing GPG Keys

  • View with the rpm command: rpm -q gpg-pubkey
  • -i option
    • show info about a key.

Verifying Package Attributes

  • Compare package file attributes with originals stored in package database at the time of installation.
  • -V option
    • compare owner, group, permission mode, size, modification time, digest, type, etc.
    • Returns to prompt if no changes are detected
    • -v or -vv for verbose
  • -Vf
    • run the check directly on the file
  • Three columns of output:
    • Column 1
      • 9 fields
        • S = Different file size.
        • M = Mode or permission or file type change.
        • 5 = MD5 Checksum does not match.
        • D = Device file and its major and minor number have changed.
        • L = File is a symlink and its path has been altered.
        • U = Ownership has changed.
        • G = Group membership has been modified.
        • T = Timestamp changed.
        • P = Capabilities are altered.
        • . = No modifications detected.
    • Column 2
      • File type
        • c = Configuration file
        • d = Documentation File
        • g = Ghost file
        • l = License file
        • r = Readme file
    • Column 3
      • Full path of file
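For illustration, a hypothetical line of rpm -V output (not from a real run) could read:

 S.5....T.  c /etc/httpd/conf/httpd.conf
 # S = file size differs, 5 = MD5 checksum differs, T = timestamp changed
 # dots = no change in those fields; c = configuration file; then the full path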

Basic Package Management Labs

Lab: Mount RHEL 9 ISO Persistently

  1. Go to the VirtualBox VM Manager and make sure that the RHEL 9 image is attached to RHEL9-VM1.

  2. Open the /etc/fstab file in the vim editor (or another editor of your choice) and add the following line entry at the end of the file to mount the DVD image (/dev/sr0) in read-only (ro) mode on the /mnt directory.

    /dev/sr0 /mnt iso9660 ro 0 0

Note: sr0 represents the first instance of the optical device and iso9660 is the standard format for optical file systems.

  3. Mount the file system as per the configuration defined in the /etc/fstab file using the mount command with the -a (all) option:

    sudo mount -a
  4. Verify the mount using the df command:

    df -h | grep mnt

Note: The image and the packages therein can now be accessed via the /mnt directory just like any other local directory on the system.

  5. List the two directories, /mnt/BaseOS/Packages and /mnt/AppStream/Packages, that contain all the software packages (directory names are case sensitive):

    ls -l /mnt/BaseOS/Packages | more

Lab: Query Packages (RPM)

  1. query all installed packages: rpm -qa

  2. query whether the perl package is installed: rpm -q perl

  3. list all files in a package: rpm -ql iproute

  4. list only the documentation files in a package: rpm -qd audit

  5. list only the configuration files in a package: rpm -qc cups

  6. identify which package owns the specified file: rpm -qf /etc/passwd

  7. display information about an installed package including version, release, installation status, installation date, size, signatures, description, and so on: rpm -qi setup

  8. list all file and package dependencies for a given package: rpm -qR chrony

  9. query an installable package for metadata information (version, release, architecture, description, size, signatures, etc.): rpm -qip /mnt/BaseOS/Packages/zsh-5.5.1-6.el8.x86_64.rpm

  10. determine what packages require the specified package in order to operate properly: rpm -q --whatrequires lvm2

Lab: Installing a Package (RPM)

  1. Install zsh-5.5.1-6.el8.x86_64.rpm sudo rpm -ivh /mnt/BaseOS/Packages/zsh-5.5.1-6.el8.x86_64.rpm

Lab: Upgrading a Package (RPM)

  1. Upgrade sushi with the -U option: sudo rpm -Uvh /mnt/AppStream/Packages/sushi-3.28.3-1.el8.x86_64.rpm

Lab: Freshening a Package

  1. Freshen the sushi package: sudo rpm -Fvh /mnt/AppStream/Packages/sushi-3.28.3-1.el8.x86_64.rpm

Lab: Overwriting a Package

  1. Overwrite zsh-5.5.1-6.el8.x86_64: sudo rpm -ivh --replacepkgs /mnt/BaseOS/Packages/zsh-5.5.1-6.el8.x86_64.rpm

Lab: Removing a Package

  1. Remove sushi: sudo rpm -ve sushi

Lab: Extracting Files from an Installable Package

  1. You have lost /etc/chrony.conf. Determine what package this file comes from: rpm -qf /etc/chrony.conf

  2. Extract all files from the chrony package to /tmp and create the directory structure:

[root@server30 mnt]# cd /tmp
[root@server30 tmp]# rpm2cpio /mnt/BaseOS/Packages/chrony-4.3-1.el9.x86_64.rpm | cpio -imd
1253 blocks
  3. Use find to locate the chrony.conf file: sudo find . -name chrony.conf

  4. Copy the file to /etc (the extraction created a relative etc/ tree under /tmp):

sudo cp /tmp/etc/chrony.conf /etc/

Lab: Validating Package Integrity and Credibility

  1. Check the integrity of zsh-5.5.1-6.el8.x86_64.rpm located in /mnt/BaseOS/Packages: rpm -K /mnt/BaseOS/Packages/zsh-5.5.1-6.el8.x86_64.rpm --nosignature
  2. Import the GPG key from the proper file and verify the signature for the zsh-5.5.1-6.el8.x86_64.rpm package.
sudo rpmkeys --import /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
sudo rpmkeys -K /mnt/BaseOS/Packages/zsh-5.5.1-6.el8.x86_64.rpm

Lab: Viewing GPG Keys

  1. List the imported key: rpm -q gpg-pubkey
  2. View details for the first key: rpm -qi gpg-pubkey-fd431d51-4ae0493b

Lab: Verifying Package Attributes

  1. Run a check on the at program: sudo rpm -V at

  2. Change permissions of one of the files and run the check again:

ls -l /etc/sysconfig/atd
sudo chmod -v 770 /etc/sysconfig/atd
sudo rpm -V at
  3. Run the check directly on the file: sudo rpm -Vf /etc/sysconfig/atd

  4. Reset the value and check the file again:

sudo chmod -v 644 /etc/sysconfig/atd
sudo rpm -V at

Lab: Perform Package Management Using rpm

  1. Run the ls command on the /mnt/BaseOS/Packages directory to confirm that the rmt package is available:
[root@server30 tmp]# ls -l /mnt/BaseOS/Packages/rmt*
-r--r--r--. 1 root root 49582 Nov 20  2021 /mnt/BaseOS/Packages/rmt-1.6-6.el9.x86_64.rpm
  2. Run the rpmkeys command and verify the integrity and credibility of the package:
[root@server30 tmp]# rpmkeys -K /mnt/BaseOS/Packages/rmt-1.6-6.el9.x86_64.rpm
/mnt/BaseOS/Packages/rmt-1.6-6.el9.x86_64.rpm: digests signatures OK
  3. Install the package:
[root@server30 tmp]# rpm -ivh /mnt/BaseOS/Packages/rmt-1.6-6.el9.x86_64.rpm
Verifying...                         ################################# [100%]
Preparing...                         ################################# [100%]
Updating / installing...
   1:rmt-2:1.6-6.el9                 ################################# [100%]
  4. Show basic information about the package:
[root@server30 tmp]# rpm -qi rmt
Name        : rmt
Epoch       : 2
Version     : 1.6
Release     : 6.el9
Architecture: x86_64
Install Date: Sat 13 Jul 2024 09:02:08 PM MST
Group       : Unspecified
Size        : 88810
License     : CDDL
Signature   : RSA/SHA256, Sat 20 Nov 2021 08:46:44 AM MST, Key ID 199e2f91fd431d51
Source RPM  : star-1.6-6.el9.src.rpm
Build Date  : Tue 10 Aug 2021 03:13:47 PM MST
Build Host  : x86-vm-55.build.eng.bos.redhat.com
Packager    : Red Hat, Inc. <http://bugzilla.redhat.com/bugzilla>
Vendor      : Red Hat, Inc.
URL         : http://freecode.com/projects/star
Summary     : Provides certain programs with access to remote tape devices
Description :
The rmt utility provides remote access to tape devices for programs
like dump (a filesystem backup program), restore (a program for
restoring files from a backup), and tar (an archiving program).
  5. Show all the files the package contains:
[root@server30 tmp]# rpm -ql rmt
/etc/default/rmt
/etc/rmt
/usr/lib/.build-id
/usr/lib/.build-id/c2
/usr/lib/.build-id/c2/6a51ea96fc4b4367afe7d44d16f1405c3c7ec9
/usr/sbin/rmt
/usr/share/doc/star
/usr/share/doc/star/CDDL.Schily.txt
/usr/share/doc/star/COPYING
/usr/share/man/man1/rmt.1.gz
  6. List the documentation files the package has:
[root@server30 tmp]# rpm -qd rmt
/usr/share/doc/star/CDDL.Schily.txt
/usr/share/doc/star/COPYING
/usr/share/man/man1/rmt.1.gz
  7. Verify the attributes of each file in the package. Use verbose mode.
[root@server30 tmp]# rpm -vV rmt
.........  c /etc/default/rmt
.........    /etc/rmt
.........  a /usr/lib/.build-id
.........  a /usr/lib/.build-id/c2
.........  a /usr/lib/.build-id/c2/6a51ea96fc4b4367afe7d44d16f1405c3c7ec9
.........    /usr/sbin/rmt
.........    /usr/share/doc/star
.........  d /usr/share/doc/star/CDDL.Schily.txt
.........  d /usr/share/doc/star/COPYING
.........  d /usr/share/man/man1/rmt.1.gz
  8. Remove the package:
[root@server30 tmp]# rpm -ve rmt
Preparing packages...
rmt-2:1.6-6.el9.x86_64

Lab 9-1: Install and Verify Packages

As user1 with sudo on server3,

  • make sure the RHEL 9 ISO image is attached to the VM and mounted.
  • Use the rpm command and install the zsh package by specifying its full path.
[root@server30 Packages]# rpm -ivh /mnt/BaseOS/Packages/zsh-5.8-9.el9.x86_64.rpm 
Verifying...                         ################################# [100%]
Preparing...                         ################################# [100%]
	package zsh-5.8-9.el9.x86_64 is already installed
  • Run the rpm command again and perform the following on the zsh package:
  • (1) show information
[root@server30 Packages]# rpm -qi zsh
Name        : zsh
Version     : 5.8
Release     : 9.el9
Architecture: x86_64
Install Date: Sat 13 Jul 2024 06:49:40 PM MST
Group       : Unspecified
Size        : 8018363
License     : MIT
Signature   : RSA/SHA256, Thu 24 Feb 2022 08:59:15 AM MST, Key ID 199e2f91fd431d51
Source RPM  : zsh-5.8-9.el9.src.rpm
Build Date  : Wed 23 Feb 2022 07:10:14 AM MST
Build Host  : x86-vm-56.build.eng.bos.redhat.com
Packager    : Red Hat, Inc. <http://bugzilla.redhat.com/bugzilla>
Vendor      : Red Hat, Inc.
URL         : http://zsh.sourceforge.net/
Summary     : Powerful interactive shell
Description :
The zsh shell is a command interpreter usable as an interactive login
shell and as a shell script command processor.  Zsh resembles the ksh
shell (the Korn shell), but includes many enhancements.  Zsh supports
command line editing, built-in spelling correction, programmable
command completion, shell functions (with autoloading), a history
mechanism, and more.
  • (2) validate integrity
[root@server30 Packages]# rpm -K zsh-5.8-9.el9.x86_64.rpm
zsh-5.8-9.el9.x86_64.rpm: digests signatures OK
  • (3) display attributes [root@server30 Packages]# rpm -V zsh

Lab 9-2: Query and Erase Packages

As user1 with sudo on server3,

  • make sure the RHEL 9 ISO image is attached to the VM and mounted.
  • Use the rpm command to perform the following:
  • (1) check whether the setup package is installed
[root@server30 Packages]# rpm -q setup
setup-2.13.7-10.el9.noarch
  • (2) display the list of configuration files in the setup package
[root@server30 Packages]# rpm -qc setup
/etc/aliases
/etc/bashrc
/etc/csh.cshrc
/etc/csh.login
/etc/environment
/etc/ethertypes
/etc/exports
/etc/filesystems
/etc/fstab
/etc/group
/etc/gshadow
/etc/host.conf
/etc/hosts
/etc/inputrc
/etc/motd
/etc/networks
/etc/passwd
/etc/printcap
/etc/profile
/etc/profile.d/csh.local
/etc/profile.d/sh.local
/etc/protocols
/etc/services
/etc/shadow
/etc/shells
/etc/subgid
/etc/subuid
/run/motd
/usr/lib/motd
  • (3) show information for the zlib-devel package on the ISO image
[root@server30 Packages]# rpm -qi ./zlib-devel-1.2.11-40.el9.x86_64.rpm
Name        : zlib-devel
Version     : 1.2.11
Release     : 40.el9
Architecture: x86_64
Install Date: (not installed)
Group       : Unspecified
Size        : 141092
License     : zlib and Boost
Signature   : RSA/SHA256, Tue 09 May 2023 05:31:02 AM MST, Key ID 199e2f91fd431d51
Source RPM  : zlib-1.2.11-40.el9.src.rpm
Build Date  : Tue 09 May 2023 03:51:20 AM MST
Build Host  : x86-64-03.build.eng.rdu2.redhat.com
Packager    : Red Hat, Inc. <http://bugzilla.redhat.com/bugzilla>
Vendor      : Red Hat, Inc.
URL         : https://www.zlib.net/
Summary     : Header files and libraries for Zlib development
Description :
The zlib-devel package contains the header files and libraries needed
to develop programs that use the zlib compression and decompression
library.
  • (4) reinstall the zsh package (--reinstall -vh),
[root@server30 Packages]# rpm -hv --reinstall ./zsh-5.8-9.el9.x86_64.rpm
Verifying...                         ################################# [100%]
Preparing...                         ################################# [100%]
Updating / installing...
   1:zsh-5.8-9.el9                   ################################# [ 50%]
Cleaning up / removing...
   2:zsh-5.8-9.el9                   ################################# [100%]
  • (5) remove the zsh package. [root@server30 Packages]# rpm -e zsh

Subsections of Storage

AutoFS


  • Automatically mount and unmount on clients during runtime and system reboots.
  • Triggers mount or unmount action based on mount point activity.
  • Client-side service
  • Mount an NFS share on demand
  • Entry placed in AutoFS config files.
  • Automatically mounts a share upon detecting activity in its mount point. (touch, ls, cd)
  • unmounts share if the share hasn’t been accessed for a predefined period of time.
  • Mounts managed with autofs should not be mounted manually via /etc/fstab to avoid inconsistencies.
  • Saves Kernel from having to maintain unused NFS shares. (Improved performance!)
  • NFS shares are defined in config files called maps (/etc/ or /etc/auto.master.d/)
  • Does not use /etc/fstab.
  • Does not require root to mount a share (fstab does).
  • Prevents client from hanging if share is down.
  • Share is unmounted if not accessed for 5 minutes (default)
  • Supports wildcard characters or environment variables.
  • Automount daemon
    • in the userland mounts configured shares automatically upon access.
    • invoked at system boot.
    • Reads the AutoFS master map and creates initial mount point entries. (not mounting yet)
    • Does not mount shares until user activity is detected.
    • Unmounts after set timeframe of inactivity.
  • Use the mount command on a share to verify the path of the AutoFS map, file system type, and options used during mount.

/etc/autofs.conf preset directives:

 master_map_name = auto.master
 timeout = 300
 negative_timeout = 60
 mount_nfs_default_protocol = 4
 logging = none

Additional directives:

master_map_name

  • Name of the master map. Default is /etc/auto.master.

timeout

  • Time in seconds of inactivity after which a share is unmounted.

negative_timeout

  • Timeout (in seconds) for failed mount attempts. Default is 1 minute.

mount_nfs_default_protocol

  • Sets the default NFS version used to mount shares.

logging

  • Logging level (none, verbose, debug).
  • Default is none (disabled).

These directives are normally left at their default values.
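A minimal sketch of tuning one of these: to lengthen the idle unmount window to 10 minutes, set the timeout directive in /etc/autofs.conf and restart the service (the value is illustrative):

 timeout = 600

 sudo systemctl restart autofs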

AutoFS Maps

  • Where AutoFS finds the shares to mount and their locations.
  • Also tells Autofs what options to use.

Map Types:

  • master
  • direct
  • indirect

Master Map

Define entries for indirect and direct maps.

  • /etc/auto.master is default
  • Default is defined in /etc/autofs.conf with master_map_name directive.
  • May be used to define entries for indirect and direct maps.
    • But it is recommended to store user-defined maps in /etc/auto.master.d/
      • AutoFS service parses this at startup.
  • You can append an option to auto.master but it will apply globally to all subentries in the specified map file.

Map entry format examples:

  /-                      /etc/auto.master.d/auto.direct   # Line 1

  /misc                   /etc/auto.misc                   # Line 2

Direct Map

/- /etc/auto.master.d/auto.direct <-- defines direct map and points to auto.direct for details

Mount shares on unrelated mount points

  • Always visible to users
  • Can exist with an indirect share under one parent directory
  • Accessing a directory containing many direct mount points mounts all shares.
  • Each direct map entry places a separate share entry to /etc/mtab
    • /etc/mtab maintains a list of all mounted file systems whether they are local or remote.
    • Updated whenever a local file system, removable file system, or a network share is mounted or unmounted.

Indirect Map

/misc /etc/auto.misc <-- indirect map and points to auto.misc for details

Automount removable filesystems

  • Mount point /misc precedes mount point entries in /etc/auto.misc
  • Used to automount removable file systems (CD, DVD, USB disks, etc.)
  • Custom indirect map files should be located in /etc/auto.master.d/
  • Preferred over direct mount for mounting all shares under one common parent directory.
  • Become visible only after they have been accessed.
  • Local and indirect mounted shares cannot coexist under the same parent directory.
  • One entry in /etc/mtab gets added for each indirect map.
  • Usually better to use indirect map for automounting NFS shares.

Lab: Access NFS Share Using Direct Map (server10)

  1. Install AutoFS
sudo dnf install -y autofs
  2. Create the mount point /autodir using mkdir
sudo mkdir /autodir
  3. Add an entry to /etc/auto.master to point the AutoFS service to the auto.dir file for more information:
/- /etc/auto.master.d/auto.dir
  4. Create /etc/auto.master.d/auto.dir and add the mount point, NFS server, and share info:
/autodir server20:/common
  5. Start the AutoFS service and enable it at startup:
sudo systemctl enable --now autofs
  6. Make sure the AutoFS service is running. Use the -l and --no-pager options to show full details without piping the output to a pager program (pg):
sudo systemctl status autofs -l --no-pager
  7. Run ls on the mount point, then verify that the share is automounted and accessible with mount.
ls /autodir
mount | grep autodir
  8. Wait 5 minutes and run the mount command again to see that the mount has disappeared.
mount | grep autodir

Exercise 16-4: Access NFS Share Using Indirect Map

  • configure an indirect map to automount the NFS share /common that is available from server20.
  • install the relevant software and set up AutoFS maps to support the automatic mounting.
  • Observe that the specified mount point “autoindir” is created automatically under /misc.

Note that /common is already mounted on the /local mount point via the fstab file and it is also configured via a direct map for automounting on /autodir. There should occur no conflict in configuration or functionality among the three.

1. Install the autofs software package if it is not already there: dnf install autofs

2. Confirm the entry for the indirect map /misc in the /etc/auto.master file exists:

[root@server30 common]# grep ^/misc /etc/auto.master
/misc	/etc/auto.misc

3. Edit the /etc/auto.misc file and add the mount point, NFS server, and share information to it:

autoindir server30:/common

4. Start the AutoFS service now and set it to autostart at system reboots:

[root@server40 /]# systemctl enable --now autofs

5. Verify the operational status of the AutoFS service. Use the -l and --no-pager options to show full details without piping the output to a pager program (the pg command in this case):

[root@server40 /]# systemctl status autofs -l --no-pager


6. Run the ls command on the mount point /misc/autoindir and then grep for both auto.misc and autoindir on the mount command output to verify that the share is automounted and accessible:

[root@server40 /]# ls /misc/autoindir
test.text
[root@server40 /]# mount | egrep 'auto.misc|autoindir'
/etc/auto.misc on /misc type autofs (rw,relatime,fd=7,pgrp=3321,timeout=300,minproto=5,maxproto=5,indirect,pipe_ino=31779)
server30:/common on /misc/autoindir type nfs4 (rw,relatime,vers=4.2,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.0.40,local_lock=none,addr=192.168.0.30)
  • /misc/autoindir has been auto generated.
  • You can use the umbrella mount point /misc to mount additional auto-generated mount points.

Automounting User Home Directories

AutoFS allows us to automount user home directories by exploiting two special characters in indirect maps.

asterisk (*)

  • Replaces the references to specific mount points

ampersand (&)

  • Substitutes the references to NFS servers and shared subdirectories.

  • With user home directories located under /home, on one or more NFS servers, the AutoFS service will connect with all of them simultaneously when a user attempts to log on to a client.

  • The service will mount only that specific user’s home directory rather than the entire /home.

  • The indirect map entry for this type of substitution is defined in an indirect map, such as /etc/auto.master.d/auto.home.

* -rw &:/home/&

  • With this entry in place, there is no need to update any AutoFS configuration files if additional NFS servers with /home shared are added or removed.

  • If user home directories are added or deleted, there will be no impact on the functionality of AutoFS.

  • If there is only one NFS server sharing the home directories, you can simply specify its name in lieu of the first & symbol in the above entry.
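For instance, with a single NFS server (using server20 from this exercise as the example), the entry would become:

 * -rw server20:/home/&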

Exercise 16-5: Automount User Home Directories Using Indirect Map

There are two portions for this exercise. The first portion should be done on server20 (NFS server) and the second portion on server10 (NFS client) as user1 with sudo where required.

first portion

  • create a user account called user30 with UID 3000.
  • add the /home directory to the list of NFS shares so that it becomes available for remote mount.

second portion

  • create a user account called user30 with UID 3000, base directory /nfshome, and no home directory.
  • create an umbrella mount point called /nfshome for mounting the user home directory from the NFS server.
  • install the relevant software and establish an indirect map to automount the remote home directory of user30 under /nfshome.
  • observe that the home directory is automounted under /nfshome when you sign in as user30.

On NFS server server20:

1. Create a user account called user30 with UID 3000 (-u) and assign password “password1”:

[root@server30 common]# useradd -u 3000 user30
[root@server30 common]# echo password1 | sudo passwd --stdin user30
Changing password for user user30.
passwd: all authentication tokens updated successfully.

2. Edit the /etc/exports file and add an entry for /home (do not modify or remove the previous entry): /home server40(rw)

3. Export all the shares listed in the /etc/exports file:

[root@server30 common]# sudo exportfs -avr
exporting server40.example.com:/home
exporting server40.example.com:/common

On NFS client server10:

1. Install the autofs software package if it is not already there: dnf install autofs

2. Create a user account called user30 with UID 3000 (-u), base home directory location /nfshome (-b), no home directory (-M), and password “password1”:

[root@server40 misc]# sudo useradd -u 3000 -b /nfshome -M user30
[root@server40 misc]# echo password1 | sudo passwd --stdin user30

This is to ensure that the UID for the user is consistent on the server and the client to avoid access issues.

3. Create the umbrella mount point /nfshome to automount the user’s home directory:

sudo mkdir /nfshome

4. Edit the /etc/auto.master file and add the mount point and indirect map location to it: /nfshome /etc/auto.master.d/auto.home

5. Create the /etc/auto.master.d/auto.home file and add the following information to it: * -rw server30:/home/&

For multiple user setup, you can replace “user30” with the & character, but ensure that those users exist on both the server and the client with consistent UIDs.

6. Start the AutoFS service now and set it to autostart at system reboots. This step is not required if AutoFS is already running and enabled. systemctl enable --now autofs

7. Verify the operational status of the AutoFS service. Use the -l and --no-pager options to show full details without piping the output to a pager program (the pg command): systemctl status autofs -l --no-pager

8. Log in as user30 and run the pwd, ls, and df commands for verification:

[root@server40 nfshome]# su - user30
[user30@server40 ~]$ ls
user30.txt
[user30@server40 ~]$ pwd
/nfshome/user30
[user30@server40 ~]$ df -h
Filesystem             Size  Used Avail Use% Mounted on
devtmpfs               4.0M     0  4.0M   0% /dev
tmpfs                  888M     0  888M   0% /dev/shm
tmpfs                  356M  5.1M  351M   2% /run
/dev/mapper/rhel-root   17G  2.2G   15G  13% /
/dev/sda1              960M  344M  617M  36% /boot
tmpfs                  178M     0  178M   0% /run/user/0
server30:/common        17G  2.2G   15G  13% /local
server30:/home/user30   17G  2.2G   15G  13% /nfshome/user30

EXAM TIP: You may need to configure AutoFS for mounting a remote user home directory.

NFS DIY Labs

Lab: Configure NFS Share and Automount with Direct Map

  • As user1 with sudo on server30, share directory /sharenfs (create it) in read/write mode using NFS.
[root@server30 /]# mkdir /sharenfs
[root@server30 /]# chmod 777 /sharenfs
[root@server30 /]# vim /etc/exports

# Add -> /sharenfs server40(rw)

[root@server30 /]# dnf -y install nfs-utils
[root@server30 /]# firewall-cmd --permanent --add-service nfs
[root@server30 /]# firewall-cmd --reload
success

[root@server30 /]# systemctl --now enable nfs-server


[root@server30 /]# exportfs -av
exporting server40.example.com:/sharenfs
  • On server40 as user1 with sudo, install the AutoFS software and start the service.
[root@server40 nfshome]# dnf -y install autofs
  • Configure the master and a direct map to automount the share on /mntauto (create it).
[root@server40 ~]# vim /etc/auto.master
/- /etc/auto.master.d/auto.dir

[root@server40 ~]# vim /etc/auto.master.d/auto.dir
/mntauto server30:/sharenfs

[root@server40 /]# mkdir /mntauto

[root@server40 ~]# systemctl enable --now autofs
  • Run ls on /mntauto to trigger the mount (ls /mntauto), then confirm:
[root@server40 /]# mount | grep mntauto
/etc/auto.master.d/auto.dir on /mntauto type autofs (rw,relatime,fd=10,pgrp=6211,timeout=300,minproto=5,maxproto=5,direct,pipe_ino=40247)
server30:/sharenfs on /mntauto type nfs4 (rw,relatime,vers=4.2,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.0.40,local_lock=none,addr=192.168.0.30)
  • Use df -h to confirm.
[root@server40 /]# df -h | grep mntauto
server30:/sharenfs      17G  2.2G   15G  13% /mntauto

Lab: Automount NFS Share with Indirect Map

  • As user1 with sudo on server40, configure the master and an indirect map to automount the share under /autoindir (create it).
[root@server40 /]# mkdir /autoindir

[root@server40 etc]# vim /etc/auto.master
/autoindir /etc/auto.misc

[root@server40 etc]# vim /etc/auto.misc
sharenfs server30:/common

[root@server40 etc]# systemctl restart autofs
  • Run ls on /autoindir/sharenfs to trigger the mount.
[root@server40 etc]# ls /autoindir/sharenfs
test.text
  • Use df -h to confirm.
[root@server40 etc]# df -h
Filesystem             Size  Used Avail Use% Mounted on
devtmpfs               4.0M     0  4.0M   0% /dev
tmpfs                  888M     0  888M   0% /dev/shm
tmpfs                  356M  5.1M  351M   2% /run
/dev/mapper/rhel-root   17G  2.2G   15G  13% /
/dev/sda1              960M  344M  617M  36% /boot
tmpfs                  178M     0  178M   0% /run/user/0
server30:/common        17G  2.2G   15G  13% /autoindir/sharenfs

Local File Systems and Swap

File Systems and File System Types

File systems

  • Can be optimized, resized, mounted, and unmounted independently.
  • Must be connected to the directory hierarchy in order to be accessed by users and applications.
  • Mounting may be accomplished automatically at system boot or manually as required.
  • Can be mounted or unmounted using their unique identifiers, labels, or device files.
  • Each file system is created in a discrete partition, VDO volume, or logical volume.
  • A typical production RHEL system usually has numerous file systems.
  • During OS installation, only two file systems— / and /boot —are created in the default disk layout, but you can design a custom disk layout and construct separate containers to store dissimilar information.
  • Typical additional file systems that may be created during an installation are /home, /opt, /tmp, /usr, and /var.
  • / and /boot—are required for installation and booting.

Storing disparate data in distinct file systems versus storing all data in a single file system offers the following advantages:

  • Make any file system accessible (mount) or inaccessible (unmount) to users independent of other file systems. This hides or reveals information contained in that file system.
  • Perform file system repair activities on individual file systems
  • Keep dissimilar data in separate file systems
  • Optimize or tune each file system independently
  • Grow or shrink a file system independent of other file systems

3 types of file systems:

  • disk-based, network-based, and memory-based.

Disk-based

  • Typically created on physical drives using SATA, USB, Fibre Channel, and other technologies.
  • store information persistently

Network-based

  • Essentially disk-based file systems shared over the network for remote access.
  • store information persistently

Memory-based

  • Virtual
  • Created at system startup and destroyed when the system goes down.
  • data saved in virtual file systems does not survive across system reboots.

Ext3

  • Disk based
  • The third generation of the extended filesystem.
  • Metadata journaling for faster recovery
  • Superior reliability
  • Creation of up to 32,000 subdirectories
  • supports larger file systems and bigger files than its predecessor

Ext4

  • Disk based
  • Successor to Ext3.
    • Supports all features of Ext3 in addition to:
      • Larger file system size
      • Bigger file size
      • Unlimited number of subdirectories
      • Metadata and quota journaling
      • Extended user attributes

XFS

  • Disk based
  • Highly scalable and high-performing 64-bit file system.
  • Supports:
    • Metadata journaling for faster crash recovery
    • Online defragmentation, expansion, quota journaling, and extended user attributes
  • default file system type in RHEL 9.

VFAT

  • Disk based
  • Used for post-Windows 95 file system formats on hard disks, USB drives, and floppy disks.

ISO9660

  • Disk based
  • Used for optical file systems such as CD and DVD.

NFS (Network File System)

  • Network based
  • Shared directory or file system for remote access by other Linux systems.

AutoFS (Auto File System)

  • Network based
  • NFS file system set to mount and unmount automatically on remote client systems.

Extended File Systems

  • First generation is obsolete and is no longer supported
  • Second, third, and fourth generations are currently available and supported.
  • Fourth generation is the latest in the series and is superior in features and enhancements to its predecessors.
  • Structure is built on a partition or logical volume at the time of file system creation.
  • Structure is divided into two sets:
    • first set holds the file system’s metadata and it is very tiny.
      • Superblock
        • keeps vital file system structural information:
          • type
          • size
          • status of the file system
          • number of data blocks it contains
          • automatically replicated and maintained at various known locations throughout the file system.
          • primary superblock
            • superblock at the beginning of the file system
          • backup superblocks.
            • Used to supplant the corrupted or lost primary superblock to bring the file system back to its normal state.
            • Copy of the primary
      • Inode table
        • maintains a list of index node (inode) numbers.
        • Each file is assigned an inode number at the time of its creation, and the inode number
          • holds the file’s attributes such as:
            • type
            • permissions
            • ownership
            • owning group
            • size
            • last access/modification time
            • holds and keeps track of the pointers to the actual data blocks where the file contents are located.
    • second set stores the actual data, and it occupies almost the entire partition or logical volume (VDO and LVM) space.
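To inspect this metadata on an Extended file system, the dumpe2fs utility can print the superblock details (the device name here is an assumption):

 sudo dumpe2fs -h /dev/sdb1    # -h limits the output to superblock information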

journaling

  • Supported by Ext3 and Ext4

  • Recover swiftly after a system crash.

  • keep track of recent changes in their metadata in a journal (or log).

  • Each metadata update is written in its entirety to the journal after completion.

  • The system peruses the journal of each extended file system following the reboot after a crash to determine if there are any errors

  • Lets the system recover the file system rapidly using the latest metadata information stored in its journal.

  • Ext3 supports file systems up to 16TiB and files up to 2TiB.

  • Ext4 supports very large file systems up to 1EiB (ExbiByte) and files up to 16TiB (TebiByte).

    • Uses a series of contiguous physical blocks on the hard disk called extents, resulting in improved read and write performance with reduced fragmentation.
    • Supports extended user attributes, metadata and quota journaling, etc.

XFS File System

  • High-performing 64-bit extent-based journaling file system type.
  • Allows the creation of file systems and files up to 8EiB (ExbiByte).
  • Does not run file system checks at system boot
  • Relies on you to use the xfs_repair utility to manually fix any issues.
  • Sets the extended user attributes and certain mount options by default on new file systems.
  • Enables defragmentation on mounted and active file systems to keep as much data in contiguous blocks as possible for faster access.
  • Inability to shrink.
  • Uses journaling for metadata operations, guaranteeing the consistency of the file system against abnormal or forced unmounting.
  • Journal information is read and any pending metadata transactions are replayed when the XFS file system is remounted.
  • Speedy input/output performance.
  • Can be snapshot in a mounted, active state.
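Because XFS skips file system checks at boot, repairs are run manually with xfs_repair against an unmounted device; a sketch (device and mount point assumed):

 sudo umount /xfsfs1
 sudo xfs_repair /dev/sdc
 sudo mount /xfsfs1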

VFAT File System

  • Extension to the legacy FAT file system (FAT16)
  • Supports 255 characters in filenames including spaces and periods
  • Does not differentiate between lowercase and uppercase letters.
  • Primarily used on removable media, such as floppy and USB flash drives, for exchanging data between Linux and Windows.

ISO9660 File System

  • For removable optical disc media such as CD/DVD drives

File System Management

File System Administration Commands

  • Some are limited to their operations on the Extended, XFS, or VFAT file system type.
  • Others are general and applicable to all file system types.

Extended File System Management Commands

e2label

  • Modifies the label of a file system

tune2fs

  • Tunes or displays file system attributes

XFS Management Commands

xfs_admin

  • Tunes file system attributes

xfs_growfs

  • Extends the size of a file system

xfs_info

  • Exhibits information about a file system

General File System Commands

blkid

  • Displays block device attributes including their UUIDs and labels

df

  • Reports file system utilization

du

  • Calculates disk usage of directories and file systems

fsadm

  • Resizes a file system. This command is automatically invoked when the lvresize command is run with the -r switch.

lsblk

  • Lists block devices and file systems and their attributes including their UUIDs and labels

mkfs

  • Creates a file system. Use the -t option and specify ext3, ext4, vfat, or xfs file system type.

mount

  • Mount a file system for user access.
  • Display currently mounted file systems.

umount

  • Unmount a file system.

Mounting and Unmounting File Systems

  • A file system must be connected to the directory structure at a desired attachment point (the mount point).
  • A mount point in essence is any empty directory that is created and used for this purpose.

Use the mount command to view information about xfs mounted file systems:

[root@server2 ~]# mount -t xfs
/dev/mapper/rhel-root on / type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota)
/dev/sda1 on /boot type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota)

Mount command

  • -t option
    • type.
  • Mount a file system to a mount point.
  • Performed with the root user privileges.
  • Requires the absolute pathnames of the file system block device and the mount point name.
  • Accepts the UUID or label of the file system in lieu of the block device name.
  • Mount all or a specific type of file system.
  • Upon successful mount, the kernel places an entry for the file system in the /proc/self/mounts file.
  • A mount point should be empty when an attempt is made to mount a file system on it, otherwise the existing content of the mount point will be hidden.
  • The mount point must not be in use or the mount attempt will fail.

auto (noauto)

  • Mounts (does not mount) the file system when the -a option is specified

defaults

  • Mounts a file system with all the default values (async, auto, rw, etc.)

_netdev

  • Used for a file system that requires network connectivity in place before it can be mounted. NFS is an example.

remount

  • Remounts an already mounted file system to enable or disable an option

ro (rw)

  • Mounts a file system read-only (or read/write)
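A quick sketch of the remount option in action (the mount point is assumed):

 sudo mount -o remount,ro /ext4fs1    # flip an already-mounted file system to read-only
 sudo mount -o remount,rw /ext4fs1    # and back to read/write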

umount Command

  • Detach a file system from the directory hierarchy and make it inaccessible to users and applications.
  • Expects the absolute pathname to the block device containing the file system or its mount point name in order to detach it.
  • Unmount all or a specific type of file system.
  • Kernel removes the corresponding file system entry from the /proc/self/mounts file after it has been successfully disconnected.
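Either identifier works; for example (device and mount point assumed):

 sudo umount /ext4fs1     # detach by mount point
 sudo umount /dev/sdb1    # or by block device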

Determining the UUID of a File System

  • Extended and XFS file systems have a 128-bit (32 hexadecimal characters) UUID (Universally Unique IDentifier) assigned to them at the time of creation.

  • UUIDs assigned to vfat file systems are 32-bit (8 hexadecimal characters) in length.

  • Assigning a UUID makes the file system unique among many other file systems that potentially exist on the system.

  • Persistent across system reboots.

  • Used by default in RHEL 9 in the /etc/fstab file for any file system that is created by the system in a standard partition.

  • RHEL attempts to mount all file systems listed in the /etc/fstab file at reboots.

  • Each file system has an associated device file and UUID, but may or may not have a corresponding label.

  • The system checks for the presence of each file system’s device file, UUID, or label, and then attempts to mount it.

Determine the UUID of /boot

[root@server2 ~]# lsblk | grep boot
├─sda1          8:1    0    1G  0 part /boot
[root@server2 ~]# sudo xfs_admin -u /dev/sda1
UUID = 630568e1-608f-4603-9b97-e27f82c7d4b4

[root@server2 ~]# sudo blkid /dev/sda1
/dev/sda1: UUID="630568e1-608f-4603-9b97-e27f82c7d4b4" TYPE="xfs" PARTUUID="7dcb43e4-01"

[root@server2 ~]# sudo lsblk -f /dev/sda1
NAME FSTYPE FSVER LABEL UUID                                 FSAVAIL FSUSE% MOUNTPOINTS
sda1 xfs                630568e1-608f-4603-9b97-e27f82c7d4b4  616.1M    36% /boot

For extended file systems, you can use the tune2fs, blkid, or lsblk commands to determine the UUID.

A UUID is also assigned to a file system that is created in a VDO or LVM volume; however, it need not be used in the fstab file, as the device files associated with the logical volumes are always unique and persistent.

Labeling a File System

  • A unique label may be used instead of a UUID to keep the file system association with its device file exclusive and persistent across system reboots.
  • A label is limited to a maximum of 12 characters on the XFS file system
  • 16 characters on the Extended file system.
  • By default, no labels are assigned to a file system at the time of its creation.

The /boot file system is located in the /dev/sda1 partition and its type is XFS. You can use the xfs_admin or the lsblk command as follows to determine its label:

[root@server2 ~]# sudo xfs_admin -l /dev/sda1
label = ""

[root@server2 ~]# sudo lsblk -f /dev/sda1
NAME FSTYPE FSVER LABEL UUID                                 FSAVAIL FSUSE% MOUNTPOINTS
sda1 xfs                630568e1-608f-4603-9b97-e27f82c7d4b4  616.1M    36% /boot
  • Not needed on a file system if you intend to use its UUID or if it is created in a logical volume
  • You can still apply one using the xfs_admin command with the -L option.
  • Labeling an XFS file system requires that the target file system be unmounted.

unmount /boot, set the label “bootfs” on its device file, and remount it:

[root@server2 ~]# sudo umount /boot
[root@server2 ~]# sudo xfs_admin -L bootfs /dev/sda1
writing all SBs
new label = "bootfs"

Confirm the new label by executing sudo xfs_admin -l /dev/sda1 or sudo lsblk -f /dev/sda1.

For extended file systems, you can use the e2label command to apply a label and the tune2fs, blkid, and lsblk commands to view and verify.
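A sketch of the Extended-file-system equivalent (device and label names assumed):

 sudo e2label /dev/sdb1 ext4fs1    # apply the label
 sudo lsblk -f /dev/sdb1           # verify the LABEL column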

Now you can replace the UUID="630568e1-608f-4603-9b97-e27f82c7d4b4" entry for /boot in the fstab file with LABEL=bootfs, and unmount and remount /boot as demonstrated above for confirmation.

[root@server2 ~]# mount /boot
mount: (hint) your fstab has been modified, but systemd still uses
       the old version; use 'systemctl daemon-reload' to reload.

A label may also be applied to a file system created in a logical volume; however, it is not recommended for use in the fstab file, as the device files for logical volumes are always unique and remain persistent across system reboots.

Automatically Mounting a File System at Reboots

/etc/fstab

  • File systems defined in the /etc/fstab file are mounted automatically at reboots.
  • Must contain proper and complete information for each listed file system.
  • An incomplete or inaccurate entry might leave the system in an undesirable or unbootable state.
  • Only need to specify one of the four attributes
    • Block device name
    • UUID
    • label
    • mount point
  • The mount command obtains the rest of the information from this file.
  • Only need to specify one of these attributes with the umount command to detach it from the directory hierarchy.
  • Contains entries for file systems that are created at the time of installation.
[root@server2 ~]# cat /etc/fstab

#
# /etc/fstab
# Created by anaconda on Sun Feb 25 12:11:47 2024
#
# Accessible filesystems, by reference, are maintained under '/dev/disk/'.
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info.
#
# After editing this file, run 'systemctl daemon-reload' to update systemd
# units generated from this file.
#
/dev/mapper/rhel-root   /                       xfs     defaults        0 0
LABEL=bootfs /boot                   xfs     defaults        0 0
/dev/mapper/rhel-swap   none                    swap    defaults        0 0

EXAM TIP: Any missing or invalid entry in this file may render the system unbootable. You will have to boot the system in emergency mode to fix this file. Ensure that you understand each field in the file for both file system and swap entries.

The format of this file is such that each row is broken out into six columns to identify the required attributes for each file system to be successfully mounted. Here is what the columns contain:

Column 1:

  • physical or virtual device path where the file system is resident, or its associated UUID or label.
  • can be entries for network file systems here as well.

Column 2:

  • Identifies the mount point for the file system.
  • swap partitions, use either “none” or “swap”.

Column 3:

  • Type of file system such as Ext3, Ext4, XFS, VFAT, or ISO9660.
  • For swap, the type “swap” is used.
  • may use “auto” instead to leave it up to the mount command to determine the type of the file system.

Column 4:

  • Identifies one or more comma-separated options to be used when mounting the file system.
  • Consult the manual pages of the mount command or the fstab file for additional options and details.

Column 5:

  • Used by the dump utility to ascertain the file systems that need to be dumped.
  • Value of 0 (or the absence of this column) disables this check.
  • This field is applicable only on Extended file systems;
  • XFS does not use it.

Column 6:

  • Sequence number in which to run the e2fsck (file system check and repair utility for Extended file system types) utility on the file system at system boot.

  • By default, 0 is used for memory-based, remote, and removable file systems, 1 for /, and 2 for /boot and other physical file systems. 0 can also be used for /, /boot, and other physical file systems you don’t want to be checked or repaired.

  • Applicable only on Extended file systems;

  • XFS does not use it.

  • 0 in columns 5 and 6 for XFS, virtual, remote, and removable file system types has no meaning. You do not need to add them for these file system types.
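Putting the six fields together, an illustrative entry (using the Ext4 file system created in the lab below) reads:

 # device/UUID                              mount-point  type  options   dump  fsck
 UUID=0bdd22d0-db53-40bb-8cc7-36efc9184196  /ext4fs1     ext4  defaults  0     0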

Lab: Create and Mount Ext4, VFAT, and XFS File Systems in Partitions (server2)

  • Create 2 x 100MB partitions on the /dev/sdb disk,
  • initialize them separately with the Ext4 and VFAT file system types,
  • define them for persistence using their UUIDs,
  • create mount points called /ext4fs1 and /vfatfs1,
  • attach them to the directory structure
  • verify their availability and usage
  • you will use the disk /dev/sdc and repeat the above procedure to establish an XFS file system in it and mount it on /xfsfs1.

1. Apply the label “msdos” to the sdb disk using the parted command:

[root@server20 ~]# sudo parted /dev/sdb mklabel msdos
Warning: The existing disk label on /dev/sdb will be destroyed and all data on this disk will be
lost. Do you want to continue?
Yes/No? y                                                                 
Information: You may need to update /etc/fstab.

2. Create 2 x 100MB primary partitions on sdb with the parted command:

[root@server20 ~]# sudo parted /dev/sdb mkpart primary 1 101m
Information: You may need to update /etc/fstab.

[root@server20 ~]# sudo parted /dev/sdb mkpart primary 102 201m
Information: You may need to update /etc/fstab.

3. Initialize the first partition (sdb1) with Ext4 file system type using the mkfs command:

[root@server20 ~]# sudo mkfs -t ext4 /dev/sdb1
mke2fs 1.46.5 (30-Dec-2021)
/dev/sdb1 contains a LVM2_member file system
Proceed anyway? (y,N) y
Creating filesystem with 97280 1k blocks and 24288 inodes
Filesystem UUID: 73db0582-7183-42aa-951d-2f48b7712597
Superblock backups stored on blocks: 
	8193, 24577, 40961, 57345, 73729

Allocating group tables: done                            
Writing inode tables: done                            
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done 

4. Initialize the second partition (sdb2) with VFAT file system type using the mkfs command:

[root@server20 ~]# sudo mkfs -t vfat /dev/sdb2
mkfs.fat 4.2 (2021-01-31)

5. Initialize the whole disk (sdc) with the XFS file system type using the mkfs.xfs command. Add the -f flag to force the removal of any old partitioning or labeling information from the disk.

[root@server20 ~]# sudo mkfs.xfs /dev/sdc -f 
Filesystem should be larger than 300MB.
Log size should be at least 64MB.
Support for filesystems like this one is deprecated and they will not be supported in future releases.
meta-data=/dev/sdc               isize=512    agcount=4, agsize=16000 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=1    bigtime=1 inobtcount=1 nrext64=0
data     =                       bsize=4096   blocks=64000, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=1368, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

6. Determine the UUIDs for all three file systems using the lsblk command:

[root@server2 ~]# lsblk -f /dev/sdb /dev/sdc
NAME   FSTYPE FSVER LABEL UUID                                 FSAVAIL FSUSE% MOUNTPOINTS
sdb                                                                           
├─sdb1 ext4   1.0         0bdd22d0-db53-40bb-8cc7-36efc9184196                
└─sdb2 vfat   FAT16       FB3A-6572                                           
sdc    xfs                91884326-9686-4569-96fa-9adb02c1f6f4
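
The blkid command, which later labs use, reports the same UUIDs and file system types:

sudo blkid /dev/sdb1 /dev/sdb2 /dev/sdc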

7. Open the /etc/fstab file, go to the end of the file, and append entries for the file systems for persistence using their UUIDs:

UUID=0bdd22d0-db53-40bb-8cc7-36efc9184196 /ext4fs1 ext4 defaults 0 0                
UUID=FB3A-6572 /vfatfs1 vfat defaults 0 0                                          
UUID=91884326-9686-4569-96fa-9adb02c1f6f4 /xfsfs1 xfs defaults 0 0

8. Create mount points /ext4fs1, /vfatfs1, and /xfsfs1 for the three file systems using the mkdir command:

[root@server2 ~]# sudo mkdir /ext4fs1 /vfatfs1 /xfsfs1

9. Mount the new file systems using the mount command. This command will fail if there is any invalid or missing information in the file.

[root@server2 ~]# sudo mount -a
mount: (hint) your fstab has been modified, but systemd still uses
       the old version; use 'systemctl daemon-reload' to reload.

10. View the mount and availability status as well as the types of all three file systems using the df command:

[root@server2 ~]# df -hT
Filesystem            Type      Size  Used Avail Use% Mounted on
devtmpfs              devtmpfs  4.0M     0  4.0M   0% /dev
tmpfs                 tmpfs     888M     0  888M   0% /dev/shm
tmpfs                 tmpfs     356M  5.1M  351M   2% /run
/dev/mapper/rhel-root xfs        17G  2.0G   15G  12% /
/dev/sda1             xfs       960M  344M  617M  36% /boot
tmpfs                 tmpfs     178M     0  178M   0% /run/user/0
/dev/sdb1             ext4       84M   14K   77M   1% /ext4fs1
/dev/sdb2             vfat       95M     0   95M   0% /vfatfs1
/dev/sdc              xfs       245M   15M  231M   6% /xfsfs1

Lab: Create and Mount Ext4 and XFS File Systems in LVM Logical Volumes (server2)

  • Create a volume group called vgfs comprised of a 172MB physical volume created in a partition on the /dev/sdd disk.
  • The PE size for the volume group should be set at 16MB.
  • Create two logical volumes called ext4vol and xfsvol of sizes 80MB each and initialize them with the Ext4 and XFS file system types.
  • Ensure that both file systems are persistently defined using their logical volume device filenames.
  • Create mount points called /ext4fs2 and /xfsfs2,
  • Mount the file systems.
  • Verify their availability and usage.

1. Create a 172MB partition on the sdd disk using the parted command:

[root@server2 ~]# sudo parted /dev/sdd mkpart pri 1 172m
Information: You may need to update /etc/fstab.

2. Initialize the sdd1 partition for use in LVM using the pvcreate command:

[root@server2 ~]# sudo pvcreate /dev/sdd1
  Device /dev/sdb2 has updated name (devices file /dev/sdd2)
  Device /dev/sdb1 has no PVID (devices file brKVLFEG3AoBzhWoso0Sa1gLYHgNZ4vL)
  Physical volume "/dev/sdd1" successfully created.

3. Create the volume group vgfs with a PE size of 16MB using the physical volume sdd1:

[root@server2 ~]# sudo vgcreate -s 16 vgfs /dev/sdd1
  Volume group "vgfs" successfully created

The PE size is not easy to alter after a volume group creation, so ensure it is defined as required at creation.

4. Create two logical volumes ext4vol and xfsvol of size 80MB each in vgfs using the lvcreate command:

[root@server2 ~]# sudo lvcreate -n ext4vol -L 80 vgfs
  Logical volume "ext4vol" created.
  
[root@server2 ~]# sudo lvcreate  -n xfsvol -L 80 vgfs
  Logical volume "xfsvol" created.

5. Format the ext4vol logical volume with the Ext4 file system type using the mkfs.ext4 command:

[root@server2 ~]# sudo mkfs.ext4 /dev/vgfs/ext4vol
mke2fs 1.46.5 (30-Dec-2021)
Creating filesystem with 81920 1k blocks and 20480 inodes
Filesystem UUID: 4ed1fef7-2164-485b-8035-7f627cd59419
Superblock backups stored on blocks: 
	8193, 24577, 40961, 57345, 73729

Allocating group tables: done                            
Writing inode tables: done                            
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done

You can also use sudo mkfs -t ext4 /dev/vgfs/ext4vol.

6. Format the xfsvol logical volume with the XFS file system type using the mkfs.xfs command:

[root@server2 ~]# sudo mkfs.xfs /dev/vgfs/xfsvol
Filesystem should be larger than 300MB.
Log size should be at least 64MB.
Support for filesystems like this one is deprecated and they will not be supported in future releases.
meta-data=/dev/vgfs/xfsvol       isize=512    agcount=4, agsize=5120 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=1    bigtime=1 inobtcount=1 nrext64=0
data     =                       bsize=4096   blocks=20480, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=1368, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

You may also use sudo mkfs -t xfs /dev/vgfs/xfsvol instead.

7. Open the /etc/fstab file, go to the end of the file, and append entries for the file systems for persistence using their device files:

/dev/vgfs/ext4vol /ext4fs2 ext4 defaults 0 0
/dev/vgfs/xfsvol /xfsfs2 xfs defaults 0 0

8. Create mount points /ext4fs2 and /xfsfs2 using the mkdir command:

[root@server2 ~]# sudo mkdir /ext4fs2 /xfsfs2

9. Mount the new file systems using the mount command. This command will fail if there is any invalid or missing information in the file.

[root@server2 ~]# sudo mount -a
mount: (hint) your fstab has been modified, but systemd still uses
       the old version; use 'systemctl daemon-reload' to reload.

10. View the mount and availability status as well as the types of the new LVM file systems using the lsblk and df commands:

[root@server2 ~]# lsblk /dev/sdd
NAME             MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
sdd                8:48   0  250M  0 disk 
└─sdd1             8:49   0  163M  0 part 
  ├─vgfs-ext4vol 253:2    0   80M  0 lvm  /ext4fs2
  └─vgfs-xfsvol  253:3    0   80M  0 lvm  /xfsfs2
[root@server2 ~]# df -hT | grep fs2
/dev/mapper/vgfs-ext4vol ext4       70M   14K   64M   1% /ext4fs2
/dev/mapper/vgfs-xfsvol  xfs        75M  4.8M   70M   7% /xfsfs2

Lab: Resize Ext4 and XFS File Systems in LVM Logical Volumes (server 2)

  • Grow the size of the vgfs volume group that was created in the last lab by adding the whole sde disk to it.
  • Extend the ext4vol logical volume along with the file system it contains by 40MB using two separate commands.
  • Extend the xfsvol logical volume along with the file system it contains by 40MB using a single command.
  • Verify the new extensions.

1. Initialize the sde disk and add it to the vgfs volume group:

The sde disk had a GPT partition table with no partitions; the following was run to reset it:

[root@server2 ~]# dd if=/dev/zero of=/dev/sde bs=1M count=2 conv=fsync
2+0 records in
2+0 records out
2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0102036 s, 206 MB/s
[root@server2 ~]# sudo partprobe /dev/sde
[root@server2 ~]# sudo pvcreate /dev/sde
  Physical volume "/dev/sde" successfully created.
[root@server2 ~]# sudo vgextend vgfs /dev/sde
  Volume group "vgfs" successfully extended

2. Confirm the new size of vgfs using the vgs and vgdisplay commands:

[root@server2 ~]# sudo vgs
  VG   #PV #LV #SN Attr   VSize   VFree  
  rhel   1   2   0 wz--n- <19.00g      0 
  vgfs   2   2   0 wz--n- 400.00m 240.00m
[root@server2 ~]# vgdisplay vgfs
  Devices file sys_wwid t10.ATA_VBOX_HARDDISK_VBa5e3cbf7-10921e08 PVID qeP9dCevNnTy422I8p18NxDKQ2WyDodU last seen on /dev/sdf1 not found.
  --- Volume group ---
  VG Name               vgfs
  System ID             
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  4
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               2
  Max PV                0
  Cur PV                2
  Act PV                2
  VG Size               400.00 MiB
  PE Size               16.00 MiB
  Total PE              25
  Alloc PE / Size       10 / 160.00 MiB
  Free  PE / Size       15 / 240.00 MiB
  VG UUID               amDADJ-I4dH-jQUF-RFcE-58iL-jItl-5ti6LS

There are now two physical volumes in the volume group and the total size increased to 400MiB.

3. Grow the logical volume ext4vol and the file system it holds by 40MB using the lvextend and fsadm command pair. Make sure to use an uppercase L to specify the size. The default unit is MiB. The plus sign (+) signifies an addition to the current size.

[root@server2 ~]# sudo lvextend -L +40 /dev/vgfs/ext4vol
  Rounding size to boundary between physical extents: 48.00 MiB.
  Size of logical volume vgfs/ext4vol changed from 80.00 MiB (5 extents) to 128.00 MiB (8 extents).
  Logical volume vgfs/ext4vol successfully resized.
  
[root@server2 ~]# sudo fsadm resize /dev/vgfs/ext4vol
resize2fs 1.46.5 (30-Dec-2021)
Filesystem at /dev/mapper/vgfs-ext4vol is mounted on /ext4fs2; on-line resizing required
old_desc_blocks = 1, new_desc_blocks = 1
The filesystem on /dev/mapper/vgfs-ext4vol is now 131072 (1k) blocks long.

The resize subcommand instructs the fsadm command to grow the file system to the full length of the specified logical volume.
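
For Ext4, fsadm delegates to resize2fs under the hood, so the following sketch is equivalent:

sudo resize2fs /dev/vgfs/ext4vol     # with no size given, grows the file system to fill the LV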

4. Grow the logical volume xfsvol and the file system (-r) it holds by (+) 40MB using the lvresize command:

[root@server2 ~]# sudo lvresize -r -L +40 /dev/vgfs/xfsvol
  Rounding size to boundary between physical extents: 48.00 MiB.
  Size of logical volume vgfs/xfsvol changed from 80.00 MiB (5 extents) to 128.00 MiB (8 extents).
  File system xfs found on vgfs/xfsvol mounted at /xfsfs2.
  Extending file system xfs to 128.00 MiB (134217728 bytes) on vgfs/xfsvol...
xfs_growfs /dev/vgfs/xfsvol
meta-data=/dev/mapper/vgfs-xfsvol isize=512    agcount=4, agsize=5120 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=1    bigtime=1 inobtcount=1 nrext64=0
data     =                       bsize=4096   blocks=20480, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=1368, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 20480 to 32768
xfs_growfs done
  Extended file system xfs on vgfs/xfsvol.
  Logical volume vgfs/xfsvol successfully resized.

5. Verify the new extensions to both logical volumes using the lvs command. You may also issue the lvdisplay or vgdisplay command instead.

[root@server2 ~]# sudo lvs | grep vol
  Devices file sys_wwid t10.ATA_VBOX_HARDDISK_VBa5e3cbf7-10921e08 PVID qeP9dCevNnTy422I8p18NxDKQ2WyDodU last seen on /dev/sdf1 not found.
  ext4vol vgfs -wi-ao---- 128.00m                                                    
  xfsvol  vgfs -wi-ao---- 128.00m   

6. Check the new sizes and the current mount status for both file systems using the df and lsblk commands:

[root@server2 ~]# df -hT | grep -E 'ext4vol|xfsvol'
/dev/mapper/vgfs-xfsvol  xfs       123M  5.4M  118M   5% /xfsfs2
/dev/mapper/vgfs-ext4vol ext4      115M   14K  107M   1% /ext4fs2
[root@server2 ~]# lsblk /dev/sdd /dev/sde
NAME             MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
sdd                8:48   0  250M  0 disk 
└─sdd1             8:49   0  163M  0 part 
  ├─vgfs-ext4vol 253:2    0  128M  0 lvm  /ext4fs2
  └─vgfs-xfsvol  253:3    0  128M  0 lvm  /xfsfs2
sde                8:64   0  250M  0 disk 
├─vgfs-ext4vol   253:2    0  128M  0 lvm  /ext4fs2
└─vgfs-xfsvol    253:3    0  128M  0 lvm  /xfsfs2

Lab: Create and Mount XFS File System in LVM VDO Volume

  • Create an LVM VDO volume called lvvdo of virtual size 20GB on the 5GB sdf disk in a volume group called vgvdo1.
  • Initialize the volume with the XFS file system type.
  • Define it for persistence using its device files.
  • Create a mount point called /xfsvdo1, attach it to the directory structure.
  • verify its availability and usage.

1. Initialize the sdf disk using the pvcreate command:

[root@server2 ~]# sudo pvcreate /dev/sdf
  WARNING: adding device /dev/sdf with idname t10.ATA_VBOX_HARDDISK_VBa5e3cbf7-10921e08 which is already used for missing device.
  Physical volume "/dev/sdf" successfully created.

2. Create vgvdo1 volume group using the vgcreate command:

[root@server2 ~]# sudo vgcreate vgvdo1 /dev/sdf
  WARNING: adding device /dev/sdf with idname t10.ATA_VBOX_HARDDISK_VBa5e3cbf7-10921e08 which is already used for missing device.
  Volume group "vgvdo1" successfully created

3. Display basic information about the volume group:

[root@server2 ~]# sudo vgdisplay vgvdo1
  Devices file sys_wwid t10.ATA_VBOX_HARDDISK_VBa5e3cbf7-10921e08 PVID qeP9dCevNnTy422I8p18NxDKQ2WyDodU last seen on /dev/sdf1 not found.
  --- Volume group ---
  VG Name               vgvdo1
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               <5.00 GiB
  PE Size               4.00 MiB
  Total PE              1279
  Alloc PE / Size       0 / 0   
  Free  PE / Size       1279 / <5.00 GiB
  VG UUID               b9u8Ng-m3BF-Jz2b-sBu8-gEG1-bBGQ-sBgrt0

4. Create a VDO volume called lvvdo using the lvcreate command. Use the -l option to specify the number of logical extents (1279) to be allocated and the -V option for the amount of virtual space (20GB).

[root@server2 ~]# sudo lvcreate -n lvvdo -l 1279 -V 20G --type vdo vgvdo1
WARNING: vdo signature detected on /dev/vgvdo1/vpool0 at offset 0. Wipe it? [y/n]: y
  Wiping vdo signature on /dev/vgvdo1/vpool0.
    The VDO volume can address 2 GB in 1 data slab.
    It can grow to address at most 16 TB of physical storage in 8192 slabs.
    If a larger maximum size might be needed, use bigger slabs.
  Logical volume "lvvdo" created.

5. Display detailed information about the volume group including the logical volume and the physical volume:

[root@server2 ~]# sudo vgdisplay -v vgvdo1
  Devices file sys_wwid t10.ATA_VBOX_HARDDISK_VBa5e3cbf7-10921e08 PVID qeP9dCevNnTy422I8p18NxDKQ2WyDodU last seen on /dev/sdf1 not found.
  --- Volume group ---
  VG Name               vgvdo1
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  3
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               <5.00 GiB
  PE Size               4.00 MiB
  Total PE              1279
  Alloc PE / Size       1279 / <5.00 GiB
  Free  PE / Size       0 / 0   
  VG UUID               b9u8Ng-m3BF-Jz2b-sBu8-gEG1-bBGQ-sBgrt0
   
  --- Logical volume ---
  LV Path                /dev/vgvdo1/vpool0
  LV Name                vpool0
  VG Name                vgvdo1
  LV UUID                nTPKtv-3yTW-J7Cy-HVP1-Aujs-cXZ6-gdS2fI
  LV Write Access        read/write
  LV Creation host, time server2, 2024-07-01 12:57:56 -0700
  LV VDO Pool data       vpool0_vdata
  LV VDO Pool usage      60.00%
  LV VDO Pool saving     100.00%
  LV VDO Operating mode  normal
  LV VDO Index state     online
  LV VDO Compression st  online
  LV VDO Used size       <3.00 GiB
  LV Status              NOT available
  LV Size                <5.00 GiB
  Current LE             1279
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
   
  --- Logical volume ---
  LV Path                /dev/vgvdo1/lvvdo
  LV Name                lvvdo
  VG Name                vgvdo1
  LV UUID                Z09BdK-ETJk-Gi53-m8Cg-mnTd-RYug-Z9nV0L
  LV Write Access        read/write
  LV Creation host, time server2, 2024-07-01 12:58:02 -0700
  LV VDO Pool name       vpool0
  LV Status              available
  # open                 0
  LV Size                20.00 GiB
  Current LE             5120
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:6
   
  --- Physical volumes ---
  PV Name               /dev/sdf     
  PV UUID               WKc956-Xp66-L8v9-VA6S-KWM5-5e3X-kx1v0V
  PV Status             allocatable
  Total PE / Free PE    1279 / 0

6. Display the new VDO volume creation using the lsblk command:

[root@server2 ~]# sudo lsblk /dev/sdf
NAME                    MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
sdf                       8:80   0   5G  0 disk 
└─vgvdo1-vpool0_vdata   253:4    0   5G  0 lvm  
  └─vgvdo1-vpool0-vpool 253:5    0  20G  0 lvm  
    └─vgvdo1-lvvdo      253:6    0  20G  0 lvm  

The output shows the virtual volume size (20GB) and the underlying disk size (5GB).

7. Initialize the VDO volume with the XFS file system type using the mkfs.xfs command. The VDO volume device file is /dev/mapper/vgvdo1-lvvdo as indicated in the above output. Add the -f flag to force the removal of any old partitioning or labeling information from the disk.

[root@server2 mapper]# sudo mkfs.xfs /dev/mapper/vgvdo1-lvvdo
meta-data=/dev/mapper/vgvdo1-lvvdo isize=512    agcount=4, agsize=1310720 blks
         =                       sectsz=4096  attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=1    bigtime=1 inobtcount=1 nrext64=0
data     =                       bsize=4096   blocks=5242880, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=16384, version=2
         =                       sectsz=4096  sunit=1 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
Discarding blocks...Done.

(The lab text referenced vgvdo1-lvvdo1, but that device file did not exist here.)

8. Open the /etc/fstab file, go to the end of the file, and append the following entry for the file system for persistent mounts using its device file:

/dev/mapper/vgvdo1-lvvdo /xfsvdo1 xfs defaults 0 0 

9. Create the mount point /xfsvdo1 using the mkdir command:

[root@server2 mapper]# sudo mkdir /xfsvdo1

10. Mount the new file system using the mount command. This command will fail if there is any invalid or missing information in the file.

[root@server2 mapper]# sudo mount -a
mount: (hint) your fstab has been modified, but systemd still uses
       the old version; use 'systemctl daemon-reload' to reload.

The mount command with the -a flag is a validation test for the fstab file. It should always be executed after updating this file and before rebooting the server to avoid landing the system in an unbootable state.
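
If available, the findmnt utility (part of util-linux) can also lint the file without attempting any mounts:

findmnt --verify     # parses /etc/fstab and reports errors and warnings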

11. View the mount and availability status as well as the type of the VDO file system using the lsblk and df commands:

[root@server2 mapper]# lsblk /dev/sdf
NAME                    MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
sdf                       8:80   0   5G  0 disk 
└─vgvdo1-vpool0_vdata   253:4    0   5G  0 lvm  
  └─vgvdo1-vpool0-vpool 253:5    0  20G  0 lvm  
    └─vgvdo1-lvvdo      253:6    0  20G  0 lvm  /xfsvdo1

[root@server2 mapper]# df -hT /xfsvdo1
Filesystem               Type  Size  Used Avail Use% Mounted on
/dev/mapper/vgvdo1-lvvdo xfs    20G  175M   20G   1% /xfsvdo1

Monitoring File System Usage

df (disk free) command

  • reports usage details for mounted file systems.
  • reports the numbers in KBs unless the -m or -h option is specified to view the sizes in MBs or human-readable format.

Let’s run this command with the -h option on server2:

[root@server2 ~]# df -h
Filesystem             Size  Used Avail Use% Mounted on
devtmpfs               4.0M     0  4.0M   0% /dev
tmpfs                  888M     0  888M   0% /dev/shm
tmpfs                  356M  5.1M  351M   2% /run
/dev/mapper/rhel-root   17G  2.0G   15G  12% /
tmpfs                  178M     0  178M   0% /run/user/0
/dev/sda1              960M  344M  617M  36% /boot

Column 1:

  • file system device file or type

Columns 2, 3, 4, 5, 6

  • total, used, and available space, the usage percentage, and the mount point

Useful flags

-T

  • Add the file system type to the output (example: df -hT)

-x

  • Exclude the specified file system type from the output (example: df -hx tmpfs)

-t

  • Limit the output to a specific file system type (example: df -t xfs)

-i

  • show inode information (example: df -hi)

Calculating Disk Usage

du command

  • reports the amount of space a file or directory occupies.
  • -m or -h option to view the output in MBs or human-readable format.
  • View a usage summary with the -s switch and a grand total with -c.

Run this command on the /usr/bin directory to view the usage summary:

[root@server2 ~]# du -sh /usr/bin
151M	/usr/bin

Add a “total” row to the output, with the numbers displayed in KBs:

[root@server2 ~]# du -sc /usr/bin
154444	/usr/bin
154444	total
[root@server2 ~]# du -sch /usr/bin
151M	/usr/bin
151M	total

Try this command with different options on the /usr/sbin/lvm file and observe the results.
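
For example (sizes will vary by system):

du -h /usr/sbin/lvm      # human-readable size
du -k /usr/sbin/lvm      # size in KBs (the default)
du -sch /usr/sbin/lvm    # summary plus a total line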

Swap and its Management

  • Move pages of idle data between physical memory and swap.

  • Swap areas act as extensions to the physical memory.

  • May be activated or deactivated independently of swap spaces located in other partitions and volumes.

  • The system splits the physical memory into small logical chunks called pages and maps their physical locations to virtual locations on the swap to facilitate access by system processors.

  • This physical-to-virtual mapping of pages is stored in a data structure called the page table, which is maintained by the kernel.

  • When a program or process is spawned, it requires space in the physical memory to run and be processed.

  • Although many programs can run concurrently, the physical memory cannot hold all of them at once.

  • The kernel monitors the memory usage.

  • As long as the free memory remains above a high threshold, nothing happens.

  • When the free memory falls below that threshold, the system starts moving selected idle pages of data from physical memory to the swap space to make room to accommodate other programs.

  • This piece in the process is referred to as page out.

  • Since the system CPU performs process execution in a round-robin fashion, when the system needs this paged-out data for execution, the CPU looks for it in the physical memory and a page fault occurs, resulting in moving the pages back to the physical memory from the swap.

  • This return of data to the physical memory is referred to as page in.

  • The entire process of paging data out and in is known as demand paging.

  • RHEL systems with less physical memory but high memory requirements can become over busy with paging out and in.

  • When this happens, they do not have enough cycles to carry out other useful tasks, resulting in degraded system performance.

  • The excessive amount of paging that affects the system performance is called thrashing.

  • When thrashing begins, or when the free physical memory falls below a low threshold, the system deactivates idle processes and prevents new processes from being launched.

  • The idle processes are only reactivated, and new processes are only allowed to be started when the system discovers that the available physical memory has climbed above the threshold level and thrashing has ceased.
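
Paging activity can be observed with the vmstat command (from the procps-ng package); the si and so columns report memory swapped in and out per second:

vmstat 2 5     # sample every 2 seconds, 5 times; sustained non-zero si/so indicates active paging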

Determining Current Swap Usage

  • Size of a swap area should not be less than the amount of physical memory.
  • Depending on workload requirements, it may be twice the size or larger.
  • It is also not uncommon to see systems with less swap than the actual amount of physical memory.
  • This is especially common on systems with very large physical memory.

free command

  • View memory and swap space utilization.
  • View how much physical memory is installed (total), used (used), available (free), used by shared library routines (shared), holding data before it is written to disk (buffers), and used to store frequently accessed data (cached) on the system.
  • -h
    • list the values in human-readable format,
  • -k
    • for KB,
  • -m
    • for MB,
  • -g
    • for GB,
  • -t
    • display a line with the “total” at the bottom of the output.
[root@server2 mapper]# free -ht
               total        used        free      shared  buff/cache   available
Mem:           1.7Gi       783Mi       714Mi       5.0Mi       440Mi       991Mi
Swap:          2.0Gi          0B       2.0Gi
Total:         3.7Gi       783Mi       2.7Gi

Try free -hts 3 and free -htc 2 to refresh the output every three seconds (-s) and to display the output twice (-c).

  • The free command reads memory and swap information from the /proc/meminfo file to produce its report. The values in this file are shown in KBs by default, so they differ slightly from the human-readable figures shown above. Here are the relevant fields from this file:
[root@server2 mapper]# cat /proc/meminfo | grep -E 'Mem|Swap'
MemTotal:        1818080 kB
MemFree:          731724 kB
MemAvailable:    1015336 kB
SwapCached:            0 kB
SwapTotal:       2097148 kB
SwapFree:        2097148 kB

Prioritizing Swap Spaces

  • You may find multiple swap areas configured and activated to meet the workload demand.
  • The default behavior of RHEL is to use the first activated swap area and move on to the next when the first one is exhausted.
  • The system allows us to prioritize one area over the other by adding the option “pri” to the swap entries in the fstab file.
  • This flag supports a value between -2 and 32767 with -2 being the default.
  • A higher value of “pri” sets a higher priority for the corresponding swap region.
  • For swap areas with an identical priority, the system alternates between them.
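
For illustration, hypothetical fstab entries (UUIDs are placeholders) that prefer one swap area and alternate between two others at an identical priority:

/dev/vgfs/swapvol   swap  swap  pri=10  0 0
UUID=<swap-uuid-1>  swap  swap  pri=5   0 0
UUID=<swap-uuid-2>  swap  swap  pri=5   0 0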

Swap Administration Commands

  • In order to create and manage swap spaces on the system, the mkswap, swapon, and swapoff commands are available.
  • Use mkswap to initialize a partition for use as a swap space.
  • Once the swap area is ready, you can activate or deactivate it from the command line with the help of the other two commands,
  • Can also set it up for automatic activation by placing an entry in the fstab file.
  • The fstab file accepts the swap area’s device file, UUID, or label.
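
A minimal sketch of the lifecycle on a hypothetical partition /dev/sdX1:

sudo mkswap /dev/sdX1     # initialize swap structures
sudo swapon /dev/sdX1     # activate the swap area
sudo swapoff /dev/sdX1    # deactivate it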

Lab: Create and Activate Swap in Partition and Logical Volume (server 2)

  • Create one swap area in a new 40MB partition called sdb3 using the mkswap command.
  • Create another swap area in a 144MB logical volume called swapvol in vgfs.
  • Add their entries to the /etc/fstab file for persistence.
  • Use the UUID and priority 1 for the partition swap and the device file and priority 2 for the logical volume swap.
  • Activate them and use appropriate tools to validate the activation.

EXAM TIP: Use the lsblk command to determine available disk space.

1. Use parted print on the sdb disk and the vgs command on the vgfs volume group to determine available space for a new 40MB partition and a 144MB logical volume:

[root@server2 mapper]# sudo parted /dev/sdb print
Model: ATA VBOX HARDDISK (scsi)
Disk /dev/sdb: 262MB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags: 

Number  Start   End    Size    Type     File system  Flags
 1      1049kB  101MB  99.6MB  primary  ext4
 2      102MB   201MB  99.6MB  primary  fat16

[root@server2 mapper]# sudo vgs vgfs
  Devices file sys_wwid t10.ATA_VBOX_HARDDISK_VBa5e3cbf7-10921e08 PVID qeP9dCevNnTy422I8p18NxDKQ2WyDodU last seen on /dev/sdf1 not found.
  VG   #PV #LV #SN Attr   VSize   VFree  
  vgfs   2   2   0 wz--n- 400.00m 144.00m

The outputs show 49MB (250MB minus 201MB) free space on the sdb disk and 144MB free space in the volume group.

2. Create a partition called sdb3 of size 40MB using the parted command:

[root@server2 mapper]# sudo parted /dev/sdb mkpart primary 202 242
Information: You may need to update /etc/fstab.

3. Create logical volume swapvol of size 144MB in vgfs using the lvcreate command:

[root@server2 mapper]# sudo lvcreate -L 144 -n swapvol vgfs               
  Logical volume "swapvol" created.

4. Construct swap structures in sdb3 and swapvol using the mkswap command:

[root@server2 mapper]# sudo mkswap /dev/sdb3
Setting up swapspace version 1, size = 38 MiB (39841792 bytes)
no label, UUID=a796e0df-b1c3-4c30-bdde-dd522bba4fff

[root@server2 mapper]# sudo mkswap /dev/vgfs/swapvol
Setting up swapspace version 1, size = 144 MiB (150990848 bytes)
no label, UUID=88196e73-feaf-4137-8743-f9340296aeec

5. Edit the fstab file and add entries for both swap areas for auto-activation on reboots. Obtain the UUID for the partition swap with lsblk -f /dev/sdb3, and use the device file for the logical volume swap. Specify their priorities.

UUID=a796e0df-b1c3-4c30-bdde-dd522bba4fff swap swap pri=1 0 0
/dev/vgfs/swapvol swap swap pri=2 0 0   

EXAM TIP: You will not be given any credit for this work if you forget to add entries to the fstab file.

6. Determine the current amount of swap space on the system using the swapon command:

[root@server2]# sudo swapon
NAME      TYPE      SIZE USED PRIO
/dev/dm-1 partition   2G   0B   -2

There is one 2GB swap area on the system and it is configured at the default priority of -2.

7. Activate the new swap regions using the swapon command:

[root@server2]# sudo swapon -a

8. Confirm the activation using the swapon command or by viewing the /proc/swaps file:

[root@server2 mapper]# sudo swapon
NAME      TYPE      SIZE USED PRIO
/dev/dm-1 partition   2G   0B   -2
/dev/sdb3 partition  38M   0B    1
/dev/dm-7 partition 144M   0B    2
[root@server2 mapper]# cat /proc/swaps
Filename				Type		Size		Used		Priority
/dev/dm-1                               partition	2097148		0		-2
/dev/sdb3                               partition	38908		0		1
/dev/dm-7                               partition	147452		0		2
#dm is device mapper
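
The dm-N names can be mapped back to their logical volumes through the device-mapper symlinks, for example:

ls -l /dev/mapper/     # entries appear as symlinks, e.g. vgfs-swapvol -> ../dm-7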

9. Issue the free command to view the reflection of swap numbers on the Swap and Total lines:

[root@server2 mapper]# free -ht
               total        used        free      shared  buff/cache   available
Mem:           1.7Gi       793Mi       706Mi       5.0Mi       438Mi       981Mi
Swap:          2.2Gi          0B       2.2Gi
Total:         3.9Gi       793Mi       2.9Gi

Local Filesystems and Swap DIY Labs

Lab: Create VFAT, Ext4, and XFS File Systems in Partitions and Mount Persistently

  • Create three 70MB primary partitions on one of the available 250MB disks (lsblk) by invoking the parted utility directly at the command prompt.
[root@server2 mapper]# parted /dev/sdc mklabel msdos
Information: You may need to update /etc/fstab.

[root@server2 mapper]# parted /dev/sdc mkpart primary 1 70m
Information: You may need to update /etc/fstab.

[root@server2 mapper]# parted /dev/sdb print
Model: ATA VBOX HARDDISK (scsi)
Disk /dev/sdb: 262MB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags: 

Number  Start   End     Size    Type     File system  Flags
 1      1049kB  70.3MB  69.2MB  primary
(parted) mkpart primary 71MB 140MB
Warning: The resulting partition is not properly aligned for best performance: 138671s % 2048s != 0s
Ignore/Cancel?                                                            
Ignore/Cancel? ignore                                                     
(parted) mkpart primary 140MB 210MB
Warning: The resulting partition is not properly aligned for best performance: 273438s % 2048s != 0s
Ignore/Cancel? ignore                                                     
(parted) print                                                            
Model: ATA VBOX HARDDISK (scsi)
Disk /dev/sdb: 262MB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags: 

Number  Start   End     Size    Type     File system  Flags
 1      1049kB  70.3MB  69.2MB  primary
 2      71.0MB  140MB   69.0MB  primary
 3      140MB   210MB   70.0MB  primary
  • Apply label “msdos” if the disk is new.
  • Initialize partition 1 with VFAT, partition 2 with Ext4, and partition 3 with XFS file system types.
[root@server2 mapper]# sudo mkfs -t vfat /dev/sdc1
mkfs.fat 4.2 (2021-01-31)

[root@server2 mapper]# sudo mkfs -t ext4 /dev/sdc2
mke2fs 1.46.5 (30-Dec-2021)
Creating filesystem with 67380 1k blocks and 16848 inodes
Filesystem UUID: 43b590ff-3330-4b88-aef9-c3a97d8cf51e
Superblock backups stored on blocks: 
	8193, 24577, 40961, 57345

Allocating group tables: done                            
Writing inode tables: done                            
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done

[root@server2 mapper]# sudo mkfs -t xfs /dev/sdc3
Filesystem should be larger than 300MB.
Log size should be at least 64MB.
Support for filesystems like this one is deprecated and they will not be supported in future releases.
meta-data=/dev/sdb3              isize=512    agcount=4, agsize=4273 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=1    bigtime=1 inobtcount=1 nrext64=0
data     =                       bsize=4096   blocks=17089, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=1368, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
  • Create mount points /vfatfs5, /ext4fs5, and /xfsfs5, and mount all three manually.
[root@server2 mapper]# mkdir /vfatfs5 /ext4fs5 /xfsfs5

[root@server2 mapper]# mount /dev/sdc1 /vfatfs5
mount: (hint) your fstab has been modified, but systemd still uses
       the old version; use 'systemctl daemon-reload' to reload.

[root@server2 mapper]# mount /dev/sdc2 /ext4fs5
mount: (hint) your fstab has been modified, but systemd still uses
       the old version; use 'systemctl daemon-reload' to reload.

[root@server2 mapper]# mount /dev/sdc3 /xfsfs5
mount: (hint) your fstab has been modified, but systemd still uses
       the old version; use 'systemctl daemon-reload' to reload.

[root@server2 mapper]# mount
/dev/sdb1 on /vfatfs5 type vfat (rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=ascii,shortname=mixed,errors=remount-ro)
/dev/sdb2 on /ext4fs5 type ext4 (rw,relatime,seclabel)
/dev/sdb3 on /xfsfs5 type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota)
  • Determine the UUIDs for the three file systems, and add them to the fstab file.
[root@server2 mapper]# blkid /dev/sdc1 /dev/sdc2 /dev/sdc3 >> /etc/fstab

[root@server2 mapper]# vim /etc/fstab
  • Unmount all three file systems manually, and execute mount -a to mount them all.
[root@server2 mapper]# umount /dev/sdb1 /dev/sdb2 /dev/sdb3
  • Run df -h for verification.

Lab: Create XFS File System in LVM VDO Volume and Mount Persistently

  • Ensure that the VDO software is installed.
sudo dnf install kmod-kvdo

  • Create a volume vdo5 with a logical size 20GB on a 5GB disk (lsblk) using the lvcreate command.

[root@server2 ~]# sudo lvcreate -n vdo5 -l 1279 -V 20G --type vdo vgvdo1
WARNING: vdo signature detected on /dev/vgvdo1/vpool0 at offset 0. Wipe it? [y/n]: y
  Wiping vdo signature on /dev/vgvdo1/vpool0.
    The VDO volume can address 2 GB in 1 data slab.
    It can grow to address at most 16 TB of physical storage in 8192 slabs.
    If a larger maximum size might be needed, use bigger slabs.
  Logical volume "vdo5" created.
  • Initialize the volume with XFS file system type.
[root@server2 mapper]# sudo mkfs.xfs /dev/mapper/vgvdo1-vdo5
meta-data=/dev/mapper/vgvdo1-vdo5 isize=512    agcount=4, agsize=1310720 blks
         =                       sectsz=4096  attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=1    bigtime=1 inobtcount=1 nrext64=0
data     =                       bsize=4096   blocks=5242880, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=16384, version=2
         =                       sectsz=4096  sunit=1 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
Discarding blocks...Done.
  • Create mount point /vdofs5, and mount it manually.
[root@server2 mapper]# mkdir /vdofs5
[root@server2 mapper]# mount /dev/mapper/vgvdo1-vdo5 /vdofs5
  • Add the file system entry to /etc/fstab using its device file.
[root@server2 mapper]# blkid /dev/mapper/vgvdo1-vdo5 >> /etc/fstab
[root@server2 mapper]# vim /etc/fstab
  • Unmount the file system manually and execute mount -a to mount it back.
[root@server2 mapper]# umount /dev/mapper/vgvdo1-vdo5
  • Run df -h to confirm.

Lab: Create Ext4 and XFS File Systems in LVM Volumes and Mount Persistently

  • Initialize an available 250MB disk for use in LVM (lsblk).
[root@server2 mapper]# parted /dev/sdc mklabel msdos
Warning: The existing disk label on /dev/sdc will be destroyed and all data on
this disk will be lost. Do you want to continue?
Yes/No? y                                                                 
Information: You may need to update /etc/fstab.

[root@server2 mapper]# parted /dev/sdc mkpart primary 1 100%
Information: You may need to update /etc/fstab.
  • Create volume group vg with PE size 8MB and add the physical volume.
[root@server2 ~]# sudo pvcreate /dev/sdc1
  Devices file /dev/sdc is excluded: device is partitioned.
  WARNING: adding device /dev/sdc1 with idname t10.ATA_VBOX_HARDDISK_VB6894bac4-590d5546 which is already used for /dev/sdc.
  Physical volume "/dev/sdc1" successfully created.
  
[root@server2 ~]# vgcreate -s 8 vg /dev/sdc1
  Devices file /dev/sdc is excluded: device is partitioned.
  WARNING: adding device /dev/sdc1 with idname t10.ATA_VBOX_HARDDISK_VB6894bac4-590d5546 which is already used for /dev/sdc.
  Volume group "vg" successfully created
  • Create two logical volumes lv200 and lv300 of sizes 120MB and 100MB.
[root@server2 ~]# lvcreate -n lv200 -L 120 vg
  Devices file /dev/sdc is excluded: device is partitioned.
  Logical volume "lv200" created.
  
[root@server2 ~]# lvcreate -n lv300 -L 100 vg
  Rounding up size to full physical extent 104.00 MiB
  Logical volume "lv300" created.
  • Use the vgs, pvs, lvs, and vgdisplay commands for verification.
  • Initialize the volumes with Ext4 and XFS file system types.
[root@server2 ~]# mkfs.ext4 /dev/vg/lv200
mke2fs 1.46.5 (30-Dec-2021)
Creating filesystem with 122880 1k blocks and 30720 inodes
Filesystem UUID: 52eac2ee-b5bd-4025-9e40-356b38d21996
Superblock backups stored on blocks: 
	8193, 24577, 40961, 57345, 73729

Allocating group tables: done                            
Writing inode tables: done                            
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done 

[root@server2 ~]# mkfs.xfs /dev/vg/lv300
Filesystem should be larger than 300MB.
Log size should be at least 64MB.
Support for filesystems like this one is deprecated and they will not be supported in future releases.
meta-data=/dev/vg/lv300          isize=512    agcount=4, agsize=6656 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=1    bigtime=1 inobtcount=1 nrext64=0
data     =                       bsize=4096   blocks=26624, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=1368, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
  • Create mount points /lvmfs5 and /lvmfs6, and mount them manually.
[root@server2 ~]# mkdir /lvmfs5 /lvmfs6
[root@server2 ~]# mount /dev/vg/lv200 /lvmfs5
mount: (hint) your fstab has been modified, but systemd still uses
       the old version; use 'systemctl daemon-reload' to reload.
[root@server2 ~]# mount /dev/vg/lv300 /lvmfs6
mount: (hint) your fstab has been modified, but systemd still uses
       the old version; use 'systemctl daemon-reload' to reload.
  • Add the file system information to the fstab file using their device files.
[root@server2 ~]# blkid /dev/vg/lv200 >> /etc/fstab
[root@server2 ~]# blkid /dev/vg/lv300 >> /etc/fstab
[root@server2 ~]# vim /etc/fstab
  • Unmount the file systems manually, and execute mount -a to mount them back. Run df -h to confirm.
[root@server2 ~]# umount /dev/vg/lv200 /dev/vg/lv300
[root@server2 ~]# mount -a

Lab 14-4: Extend Ext4 and XFS File Systems in LVM Volumes

  • Initialize an available 250MB disk for use in LVM (lsblk).
[root@server2 ~]# pvcreate /dev/sdb
  Devices file /dev/sdc is excluded: device is partitioned.
WARNING: dos signature detected on /dev/sdb at offset 510. Wipe it? [y/n]: y
  Wiping dos signature on /dev/sdb.
  WARNING: adding device /dev/sdb with idname t10.ATA_VBOX_HARDDISK_VB428913dd-446a194f which is already used for missing device.
  Physical volume "/dev/sdb" successfully created.
  • Add the new physical volume to volume group vg.
[root@server2 ~]# vgextend vg /dev/sdb
  Devices file /dev/sdc is excluded: device is partitioned.
  WARNING: adding device /dev/sdb with idname t10.ATA_VBOX_HARDDISK_VB428913dd-446a194f which is already used for missing device.
  Volume group "vg" successfully extended
  • Expand logical volumes lv200 and lv300 along with the underlying file systems to 200MB and 250MB.
[root@server2 ~]# lvextend -L 200m /dev/vg/lv200
  Size of logical volume vg/lv200 changed from 120.00 MiB (15 extents) to 200.00 MiB (25 extents).
  Logical volume vg/lv200 successfully resized.
[root@server2 ~]# lvextend -L 250m /dev/vg/lv200
  Rounding size to boundary between physical extents: 256.00 MiB.
  Size of logical volume vg/lv200 changed from 200.00 MiB (25 extents) to 256.00 MiB (32 extents).
  Logical volume vg/lv200 successfully resized.
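
Note that lvextend by itself grows only the logical volume; to grow the file system it holds in the same step, as this lab intends, add the -r flag (a sketch, assuming lv300 is the intended second target):

sudo lvextend -r -L 200m /dev/vg/lv200    # resize the LV and the Ext4 file system it holds
sudo lvextend -r -L 250m /dev/vg/lv300    # resize the LV and the XFS file system it holds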
  • Use the vgs, pvs, lvs, vgdisplay, and df commands for verification.

Lab 14-5: Create Swap in Partition and LVM Volume and Activate Persistently

  • Create two 100MB partitions on an available 250MB disk (lsblk) by invoking the parted utility directly at the command prompt.
  • Apply label “msdos” if the disk is new.
[root@localhost ~]# parted /dev/sdd mklabel msdos
Information: You may need to update /etc/fstab.

[root@localhost ~]# parted /dev/sdd mkpart primary 1 100MB
Information: You may need to update /etc/fstab.

[root@localhost ~]# parted /dev/sdd mkpart primary 101 201
Information: You may need to update /etc/fstab.
  • Initialize one of the partitions with swap structures.
[root@localhost ~]# sudo mkswap /dev/sdd1
Setting up swapspace version 1, size = 94 MiB (98562048 bytes)
no label, UUID=40eea6c2-b80c-4b25-ad76-611071db52d5
  • Apply label swappart to the swap partition, and add it to the fstab file.
[root@localhost ~]# swaplabel -L swappart /dev/sdd1
[root@localhost ~]# blkid /dev/sdd1 >> /etc/fstab
[root@localhost ~]# vim /etc/fstab
UUID="40eea6c2-b80c-4b25-ad76-611071db52d5" swap swap pri=1 0 0
  • Execute swapon -a to activate it.

  • Run swapon -s to confirm activation.

  • Initialize the other partition for use in LVM.

[root@localhost ~]# pvcreate /dev/sdd2
  Physical volume "/dev/sdd2" successfully created.
  • Expand volume group vg (Lab 14-3) by adding this physical volume to it.
[root@localhost ~]# vgextend vg /dev/sdd2
  Volume group "vg200" successfully extended
  • Create logical volume swapvol of size 180MB.
[root@localhost ~]# lvcreate -L 180 -n swapvol vg
  Logical volume "swapvol" created.
  • Use the vgs, pvs, lvs, and vgdisplay commands for verification.
  • Initialize the logical volume for swap.
[root@localhost vg200]# mkswap /dev/vg/swapvol
Setting up swapspace version 1, size = 180 MiB (188739584 bytes)
no label, UUID=a4b939d0-4b53-4e73-bee5-4c402aff6f9b
  • Add an entry to the fstab file for the new swap area using its device file.
[root@localhost vg200]# vim /etc/fstab
/dev/vg/swapvol swap swap pri=2 0 0
  • Execute swapon -a to activate it.
  • Run swapon -s to confirm activation.

Network File System (NFS)

NFS Basics and Configuration

Same tools for mounting and unmounting a filesystem.

  • Mounted and accessed the same way as local filesystems.
  • Network protocol that allows file sharing over the network.
  • Multi-platform
  • Multiple clients can access a single share at the same time.
  • Reduced overhead and storage cost.
  • Give users access to uniform data.
  • Consolidate scattered user home directories.
  • May cause client to hang if share is not accessible.
  • Share stays mounted until manually unmounted or the client shuts down.
  • Does not support wildcard characters or environment variables.

NFS Supported versions

  • RHEL 9 supports versions 3, 4.0, 4.1, and 4.2 (default)
  • NFSv3 supports:
    • TCP and UDP.
    • Asynchronous writes.
    • 64-bit file sizes.
    • Access to files larger than 2GB.
  • NFSv4.x supports:
    • All features of NFSv3.
    • Can transit firewalls and work over the internet.
    • Enhanced security and support for encrypted transfers and ACLs.
    • Better scalability.
    • Better cross-platform support.
    • Better system crash handling.
    • Uses usernames and group names rather than UIDs and GIDs.
    • Uses TCP by default.
    • Can use UDP for backwards compatibility.
    • Version 4.2 supports TCP only.
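
A client can request a specific protocol version at mount time with the vers option, for example:

sudo mount -o vers=3 server20:/common /local     # force NFSv3 instead of the 4.2 default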

Network File System service

  • Export shares to mount on remote clients
  • Exporting
    • When the NFS server makes shares available.
  • Mounting
    • When a client mounts an exported share locally.
    • Mount point should be empty before trying to mount a share on it.
  • System can be both client and server.
  • Entire directory tree of the share is shared.
  • Cannot re-share a subdirectory of a share.
  • A mounted share cannot be exported from the client.
  • A single exported share is mounted on a directory mount point.
  • Make sure to update the fstab file on the client.

NFS Server and Client Configuration

How to export a share

  • Add an entry for the share to /etc/exports, then export it using the exportfs command
  • Add firewall rule to allow access

Mount a share from the client side

  • Use mount and add the filesystem to the fstab file.

Lab: Export Share on NFS Server

  1. Install nfs-utils:
sudo dnf -y install nfs-utils
  2. Create /common:
sudo mkdir /common
  3. Add full permissions:
sudo chmod 777 /common
  4. Add the NFS service persistently to the firewalld configuration to allow NFS traffic, and load the new rule:
sudo firewall-cmd --permanent --add-service nfs
sudo firewall-cmd --reload
  5. Start the NFS service and enable it to autostart at system reboots:
sudo systemctl --now enable nfs-server
  6. Verify the operational status of the NFS services:
sudo systemctl status nfs-server
  7. Open /etc/exports and add an entry for /common to export it to server10 with read/write access:
/common server10(rw)
  8. Export the entry defined in /etc/exports. The -a option exports all entries in the file; -v is verbose:
sudo exportfs -av
  9. Unexport the share (-u):
sudo exportfs -u server10:/common
  10. Re-export the share:
sudo exportfs -av

Lab: Mount Share on NFS Client

  1. Install nfs-utils:
sudo dnf -y install nfs-utils
  2. Create the /local mount point:
sudo mkdir /local
  3. Mount the share manually:
sudo mount server20:/common /local
  4. Confirm using mount (the output shows the NFS version):
mount | grep local
  5. Confirm using df:
df -h | grep local
  6. Add the share to /etc/fstab for persistence:
server20:/common /local nfs _netdev 0 0

Note:

The _netdev option makes the system wait for networking to come up before trying to mount the share.

  7. Unmount the share manually using umount, then remount it to validate the accuracy of the entry in /etc/fstab:
sudo umount /local
sudo mount -a
  8. Verify:
df -h
  9. Create a file in /local and verify:
touch /local/nfsfile
ls -l /local
  10. Confirm the sync on the NFS server (server20):
ls -l /common/
  11. Update the fstab file.

Partitioning, MBR, and GPT

Partition Information (MBR and GPT)

  • Partition information is stored on the disk in a small region.
  • Read by the operating system at boot time.
  • Master Boot Record (MBR) on the BIOS-based systems
  • GUID Partition Table (GPT) on the UEFI-based systems.
  • At system boot, the BIOS/UEFI:
    • scans all storage devices,
    • detects the presence of MBR/GPT areas,
    • identifies the boot disks,
    • loads the bootloader program in memory from the default boot disk,
    • executes the boot code to read the partition table and identify the /boot partition,
    • loads the kernel in memory, and passes control over to it.
  • MBR and GPT store disk partition information and the boot code.

Master Boot Record (MBR)

  • Resides on the first sector of the boot disk.

  • was the preferred choice for saving partition table information on x86-based computers.

  • with the arrival of bigger hard drives, a new firmware specification (UEFI) was introduced.

  • still widely used, but its use is diminishing in favor of UEFI.

  • allows the creation of three types of partition on a single disk.

  • primary, extended, and logical

  • only primary and logical can be used for data storage

  • extended is a mere enclosure for holding the logical partitions and it is not meant for data storage.

  • supports the creation of up to four primary partitions numbered 1 through 4 at a time.

  • In case additional partitions are required, one of the primary partitions must be deleted and replaced with an extended partition to be able to add logical partitions (up to 11) within that extended partition.

  • Numbering for logical partitions begins at 5.

  • supports a maximum of 14 usable partitions (3 primary and 11 logical) on a single disk.

  • Cannot address storage space beyond 2TB due to its 32-bit nature and its 512-byte disk sector size.

  • non-redundant; the record it contains is not replicated, resulting in an unbootable system in the event of corruption.

  • If your disk is smaller than 2TB and you don’t intend to build more than 14 usable partitions, you can use MBR without issues.

GUID Partition Table (GPT)

  • ability to construct up to 128 partitions (no concept of extended or logical partitions)
  • utilize disks larger than 2TB
  • use 4KB sector size
  • store a copy of the partition information before the end of the disk for redundancy
  • allows a BIOS-based system to boot from a GPT disk using the bootloader program stored in a protective MBR at the first disk sector
  • UEFI firmware also supports the secure boot feature, which only allows signed binaries to boot

MBR Storage Management with parted

parted (partition editor)

  • can be used to partition disks
  • run interactively or directly from the command prompt.
  • understands and supports both MBR and GPT schemes
  • can be used to create up to 128 partitions on a single GPT disk
  • viewing, labeling, adding, naming, and deleting partitions.

print
Displays the partition table that includes disk geometry and partition number, start and end, size, type, file system type, and relevant flags.

mklabel
Applies a label to the disk. Common labels are gpt and msdos.

mkpart
Makes a new partition

name
Assigns a name to a partition

rm
Removes the specified partition

  • use the print subcommand to ensure you created what you wanted.
  • /proc/partitions file is also updated to reflect the results of partition management operations.

Lab: Create an MBR Partition (server2)

  • Assign partition type “msdos” to /dev/sdb to use it as an MBR disk,
  • create and confirm a 100MB primary partition on the disk.

1. Execute parted on /dev/sdb to view the current partition information:

[root@server2 ~]# sudo parted /dev/sdb print
Error: /dev/sdb: unrecognised disk label
Model: ATA VBOX HARDDISK (scsi)                                           
Disk /dev/sdb: 262MB
Sector size (logical/physical): 512B/512B
Partition Table: unknown
Disk Flags: 

There is an error on line 1 of the output, indicating an unrecognized label. The disk must be labeled before it can be partitioned.

2. Assign disk label “msdos” to the disk with mklabel. This operation is performed only once on a disk.

[root@server2 ~]# sudo parted /dev/sdb mklabel msdos
Information: You may need to update /etc/fstab.
[root@server2 ~]# sudo parted /dev/sdb print                              
Model: ATA VBOX HARDDISK (scsi)
Disk /dev/sdb: 262MB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags: 

Number  Start  End  Size  Type  File system  Flags

To use the GPT partition table type, run “sudo parted /dev/sdb mklabel gpt” instead.

3. Create a 100MB primary partition starting at 1MB (beginning of the disk) using mkpart:

[root@server2 ~]# sudo parted /dev/sdb mkpart primary 1 101m
Information: You may need to update /etc/fstab.

4. Verify the new partition with print:

[root@server2 ~]# sudo parted /dev/sdb print                              
Model: ATA VBOX HARDDISK (scsi)
Disk /dev/sdb: 262MB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags: 

Number  Start   End    Size    Type     File system  Flags
 1      1049kB  101MB  99.6MB  primary

Partition numbering begins at 1 by default.

5. Confirm the new partition with the lsblk command:

[root@server2 ~]# lsblk /dev/sdb
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
sdb      8:16   0  250M  0 disk 
└─sdb1   8:17   0   95M  0 part 

The device file for the first partition on the sdb disk is sdb1 as identified on the bottom line. The partition size is 95MB.

Different tools vary slightly in how they report partition sizes; ignore minor differences.

6. Check the /proc/partitions file also:

[root@server2 ~]# cat /proc/partitions | grep sdb
   8       16     256000 sdb
   8       17      97280 sdb1

Exercise 13-3: Delete an MBR Partition (server2)

Delete the sdb1 partition that was created in the previous exercise and confirm the deletion.

1. Execute parted on /dev/sdb with the rm subcommand to remove partition number 1:

[root@server2 ~]# sudo parted /dev/sdb rm 1
Information: You may need to update /etc/fstab.

2. Confirm the partition deletion with print:

[root@server2 ~]# sudo parted /dev/sdb print                              
Model: ATA VBOX HARDDISK (scsi)
Disk /dev/sdb: 262MB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags: 

Number  Start  End  Size  Type  File system  Flags

3. Check the /proc/partitions file:

[root@server2 ~]# cat /proc/partitions | grep sdb
   8       16     256000 sdb

You can also run the lsblk command for further verification.

EXAM TIP: Knowing either parted or gdisk for the exam is enough.

GPT Storage Management with gdisk

gdisk (GPT disk) Command

  • partitions disks using the GPT format.

  • text-based, menu-driven program

  • show, add, verify, modify, and delete partitions

  • can create up to 128 partitions on a single disk on systems with UEFI firmware.

  • The main interface of gdisk is invoked by specifying a disk device name, such as /dev/sdc, with the command. Type help or ? (question mark) at the prompt to view available subcommands.

[root@server2 ~]# sudo gdisk /dev/sdc
GPT fdisk (gdisk) version 1.0.7

Partition table scan:
  MBR: not present
  BSD: not present
  APM: not present
  GPT: not present

Creating new GPT entries in memory.

Command (? for help): ?
b	back up GPT data to a file
c	change a partition's name
d	delete a partition
i	show detailed information on a partition
l	list known partition types
n	add a new partition
o	create a new empty GUID partition table (GPT)
p	print the partition table
q	quit without saving changes
r	recovery and transformation options (experts only)
s	sort partitions
t	change a partition's type code
v	verify disk
w	write table to disk and exit
x	extra functionality (experts only)
?	print this menu

Command (? for help): 

Exercise 13-4: Create a GPT Partition (server2)

  • Assign the “gpt” partition table type to /dev/sdc to use it as a GPT disk.
  • create and confirm a 200MB partition on the disk.

1. Execute gdisk on /dev/sdc to view the current partition information:

[root@server2 ~]# sudo gdisk /dev/sdc
GPT fdisk (gdisk) version 1.0.7

Partition table scan:
  MBR: not present
  BSD: not present
  APM: not present
  GPT: not present

Creating new GPT entries in memory.

Command (? for help):

The disk currently does not have any partition table on it.

2. Assign “gpt” as the partition table type to the disk using the o subcommand. Enter “y” for confirmation to proceed. This operation is performed only once on a disk.

Command (? for help): o
This option deletes all partitions and creates a new protective MBR.
Proceed? (Y/N): y

3. Run the p subcommand to view disk information and confirm the GUID partition table creation:

Command (? for help): p
Disk /dev/sdc: 512000 sectors, 250.0 MiB
Model: VBOX HARDDISK   
Sector size (logical/physical): 512/512 bytes
Disk identifier (GUID): 9446222A-28AC-4F96-816F-518510F95019
Partition table holds up to 128 entries
Main partition table begins at sector 2 and ends at sector 33
First usable sector is 34, last usable sector is 511966
Partitions will be aligned on 2048-sector boundaries
Total free space is 511933 sectors (250.0 MiB)

Number  Start (sector)    End (sector)  Size       Code  Name

The output returns the assigned GUID and states that the partition table can hold up to 128 partition entries.

4. Create the first partition of size 200MB starting at the default sector with default type “Linux filesystem” using the n subcommand:

Command (? for help): n
Partition number (1-128, default 1): 
First sector (34-511966, default = 2048) or {+-}size{KMGTP}: 
Last sector (2048-511966, default = 511966) or {+-}size{KMGTP}: +200M
Current type is 8300 (Linux filesystem)
Hex code or GUID (L to show codes, Enter = 8300): 
Changed type of partition to 'Linux filesystem'

5. Verify the new partition with p:

Command (? for help): p
Disk /dev/sdc: 512000 sectors, 250.0 MiB
Model: VBOX HARDDISK   
Sector size (logical/physical): 512/512 bytes
Disk identifier (GUID): 9446222A-28AC-4F96-816F-518510F95019
Partition table holds up to 128 entries
Main partition table begins at sector 2 and ends at sector 33
First usable sector is 34, last usable sector is 511966
Partitions will be aligned on 2048-sector boundaries
Total free space is 102333 sectors (50.0 MiB)

Number  Start (sector)    End (sector)  Size       Code  Name
   1            2048          411647   200.0 MiB   8300  Linux filesystem

6. Run w to write the partition information to the partition table and exit out of the interface. Enter “y” to confirm when prompted.

Command (? for help): w

Final checks complete. About to write GPT data. THIS WILL OVERWRITE EXISTING
PARTITIONS!!

Do you want to proceed? (Y/N): y
OK; writing new GUID partition table (GPT) to /dev/sdc.
The operation has completed successfully.

You may need to run the partprobe command after exiting the gdisk utility to inform the kernel of partition table changes.
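For example, to re-read the partition table of the disk used in this exercise:

sudo partprobe /dev/sdc

Running partprobe without an argument scans all disks.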

7. Verify the new partition by issuing either of the following at the command prompt:

[root@server2 ~]# grep sdc /proc/partitions
   8       32     256000 sdc
   8       33     204800 sdc1
   
[root@server2 ~]# lsblk /dev/sdc
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
sdc      8:32   0  250M  0 disk 
└─sdc1   8:33   0  200M  0 part 

Exercise 13-5: Delete a GPT Partition(server2)

  • Delete the sdc1 partition that was created in Exercise 13-4 and confirm the removal.

1. Execute gdisk on /dev/sdc and run d1 at the utility’s prompt to delete partition number 1:

[root@server2 ~]# gdisk /dev/sdc
GPT fdisk (gdisk) version 1.0.7

Partition table scan:
  MBR: protective
  BSD: not present
  APM: not present
  GPT: present

Found valid GPT with protective MBR; using GPT.

Command (? for help): d1
Using 1

2. Confirm the partition deletion with p:

Command (? for help): p
Disk /dev/sdc: 512000 sectors, 250.0 MiB
Model: VBOX HARDDISK   
Sector size (logical/physical): 512/512 bytes
Disk identifier (GUID): 9446222A-28AC-4F96-816F-518510F95019
Partition table holds up to 128 entries
Main partition table begins at sector 2 and ends at sector 33
First usable sector is 34, last usable sector is 511966
Partitions will be aligned on 2048-sector boundaries
Total free space is 511933 sectors (250.0 MiB)

Number  Start (sector)    End (sector)  Size       Code  Name

3. Write the updated partition information to the disk with w and quit gdisk:

Command (? for help): w

Final checks complete. About to write GPT data. THIS WILL OVERWRITE EXISTING
PARTITIONS!!

Do you want to proceed? (Y/N): y
OK; writing new GUID partition table (GPT) to /dev/sdc.
The operation has completed successfully.

4. Verify the partition deletion by issuing either of the following at the command prompt:

[root@server2 ~]# grep sdc /proc/partitions
   8       32     256000 sdc
[root@server2 ~]# lsblk /dev/sdc
NAME MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
sdc    8:32   0  250M  0 disk 

Disk Partitions

  • Be careful when adding a new partition: overlapping an existing partition can corrupt data, and leaving unused space between adjacent partitions wastes storage (see the free-space check below).
  • The disk allocated at installation time is recognized as sda (s for SATA, SAS, or SCSI device; a for the first disk), with the first partition identified as sda1 and the second as sda2.
  • Any subsequent disks added to the system will be known as sdb, sdc, sdd, and so on, and will use 1, 2, 3, etc. for partition numbering.
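To check for free space before carving a new partition, parted can list unallocated regions (a quick sketch against the sdb disk):

sudo parted /dev/sdb unit MiB print free

Regions reported as “Free Space” give safe Start and End boundaries for mkpart.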

Use lsblk to list disk and partition information.

[root@server1 ~]# lsblk
NAME          MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
sda             8:0    0   10G  0 disk 
├─sda1          8:1    0    1G  0 part /boot
└─sda2          8:2    0    9G  0 part 
  ├─rhel-root 253:0    0    8G  0 lvm  /
  └─rhel-swap 253:1    0    1G  0 lvm  [SWAP]
sr0            11:0    1  9.8G  0 rom  /mnt

sr0 represents the ISO image mounted as an optical medium. Run fdisk -l to view detailed partition information for all disks:

[root@server1 ~]# sudo fdisk -l
Disk /dev/sda: 10 GiB, 10737418240 bytes, 20971520 sectors
Disk model: VBOX HARDDISK   
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xfc8b3804

Device     Boot   Start      End  Sectors Size Id Type
/dev/sda1  *       2048  2099199  2097152   1G 83 Linux
/dev/sda2       2099200 20971519 18872320   9G 8e Linux LVM


Disk /dev/mapper/rhel-root: 8 GiB, 8585740288 bytes, 16769024 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/rhel-swap: 1 GiB, 1073741824 bytes, 2097152 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

The identifiers 83 and 8e are hexadecimal values for the Linux and Linux LVM partition types.

Storage Management Tools

The main tools are parted, gdisk, and LVM. Partitions created with a combination of these tools can coexist on the same disk.

parted understands both MBR and GPT formats.

gdisk

  • Supports the GPT format only
  • May be used as a replacement for parted (to check which format a disk already uses, see the sketch below)
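To check which partition table format a disk already uses before picking a tool (a quick sketch):

sudo parted /dev/sdb print | grep 'Partition Table'
lsblk -no PTTYPE /dev/sdb      # prints dos, gpt, or nothing for an unlabeled disk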

LVM

  • feature-rich logical volume management solution that gives flexibility in storage management.

Remove a filesystem from a partition

To delete filesystem, RAID, and disk-label signatures from a device, use wipefs -a /dev/sdb1. It may also be possible to use wipefs -a /dev/sdb? to wipe all partitions on the disk at once (I need to verify this).

Make sure the filesystem is unmounted first.
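Before erasing anything, you can inspect what wipefs would remove, and keep backups of the erased signatures (a minimal sketch):

wipefs /dev/sdb1                    # with no options, lists detected signatures without erasing
sudo wipefs -a --backup /dev/sdb1   # erases signatures, saving a backup of each under $HOME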

[root@server2 mapper]# lsblk
NAME                    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
sda                       8:0    0   20G  0 disk 
├─sda1                    8:1    0    1G  0 part 
└─sda2                    8:2    0   19G  0 part 
  ├─rhel-root           253:0    0   17G  0 lvm  /
  └─rhel-swap           253:1    0    2G  0 lvm  [SWAP]
sdb                       8:16   0  250M  0 disk 
├─sdb1                    8:17   0   95M  0 part 
├─sdb2                    8:18   0   95M  0 part 
└─sdb3                    8:19   0   38M  0 part [SWAP]
sdc                       8:32   0  250M  0 disk 
sdd                       8:48   0  250M  0 disk 
└─sdd1                    8:49   0  163M  0 part 
  ├─vgfs-ext4vol        253:2    0  128M  0 lvm  
  └─vgfs-xfsvol         253:3    0  128M  0 lvm  
sde                       8:64   0  250M  0 disk 
├─vgfs-ext4vol          253:2    0  128M  0 lvm  
├─vgfs-xfsvol           253:3    0  128M  0 lvm  
└─vgfs-swapvol          253:7    0  144M  0 lvm  [SWAP]
sdf                       8:80   0    5G  0 disk 
└─vgvdo1-vpool0_vdata   253:4    0    5G  0 lvm  
  └─vgvdo1-vpool0-vpool 253:5    0   20G  0 lvm  
    └─vgvdo1-lvvdo      253:6    0   20G  0 lvm  
sr0                      11:0    1  9.8G  0 rom  
[root@server2 mapper]# wipefs -a /dev/sdb1
/dev/sdb1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef

[root@server2 mapper]# wipefs -a /dev/sdb2
/dev/sdb2: 8 bytes were erased at offset 0x00000036 (vfat): 46 41 54 31 36 20 20 20
/dev/sdb2: 1 byte was erased at offset 0x00000000 (vfat): eb
/dev/sdb2: 2 bytes were erased at offset 0x000001fe (vfat): 55 aa

[root@server2 mapper]# wipefs -a /dev/sdb3
wipefs: error: /dev/sdb3: probing initialization failed: Device or resource busy

[root@server2 mapper]# wipefs -a /dev/sdb
wipefs: error: /dev/sdb: probing initialization failed: Device or resource busy

[root@server2 mapper]# swapoff /dev/sdb3

[root@server2 mapper]# wipefs -a /dev/sdb3
/dev/sdb3: 10 bytes were erased at offset 0x00000ff6 (swap): 53 57 41 50 53 50 41 43 45 32

[root@server2 mapper]# wipefs -a /dev/sdb
/dev/sdb: 2 bytes were erased at offset 0x000001fe (dos): 55 aa
/dev/sdb: calling ioctl to re-read partition table: Success

[root@server2 mapper]# lsblk
NAME                    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
sda                       8:0    0   20G  0 disk 
├─sda1                    8:1    0    1G  0 part 
└─sda2                    8:2    0   19G  0 part 
  ├─rhel-root           253:0    0   17G  0 lvm  /
  └─rhel-swap           253:1    0    2G  0 lvm  [SWAP]
sdb                       8:16   0  250M  0 disk 
sdc                       8:32   0  250M  0 disk 
sdd                       8:48   0  250M  0 disk 
└─sdd1                    8:49   0  163M  0 part 
  ├─vgfs-ext4vol        253:2    0  128M  0 lvm  
  └─vgfs-xfsvol         253:3    0  128M  0 lvm  
sde                       8:64   0  250M  0 disk 
├─vgfs-ext4vol          253:2    0  128M  0 lvm  
├─vgfs-xfsvol           253:3    0  128M  0 lvm  
└─vgfs-swapvol          253:7    0  144M  0 lvm  [SWAP]
sdf                       8:80   0    5G  0 disk 
└─vgvdo1-vpool0_vdata   253:4    0    5G  0 lvm  
  └─vgvdo1-vpool0-vpool 253:5    0   20G  0 lvm  
    └─vgvdo1-lvvdo      253:6    0   20G  0 lvm  
sr0                      11:0    1  9.8G  0 rom  

I could not use wipefs on a disk that is in use by a logical volume. Remove the LVs first (lvremove the lvvdo volume and the vgfs volumes):

[root@server2 mapper]# lsblk
NAME           MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
sda              8:0    0   20G  0 disk 
├─sda1           8:1    0    1G  0 part 
└─sda2           8:2    0   19G  0 part 
  ├─rhel-root  253:0    0   17G  0 lvm  /
  └─rhel-swap  253:1    0    2G  0 lvm  [SWAP]
sdb              8:16   0  250M  0 disk 
sdc              8:32   0  250M  0 disk 
sdd              8:48   0  250M  0 disk 
└─sdd1           8:49   0  163M  0 part 
sde              8:64   0  250M  0 disk 
└─vgfs-swapvol 253:7    0  144M  0 lvm  [SWAP]
sdf              8:80   0    5G  0 disk 
sr0             11:0    1  9.8G  0 rom  

The swapvol LV must first be removed from swap use:

[root@server2 mapper]# swapoff /dev/mapper/vgfs-swapvol

Remove the LV:

[root@server2 mapper]# lvremove /dev/mapper/vgfs-swapvol
Do you really want to remove active logical volume vgfs/swapvol? [y/n]: y
  Logical volume "swapvol" successfully removed.

Wipe sdd:

[root@server2 mapper]# wipefs -a /dev/sdd
/dev/sdd: 2 bytes were erased at offset 0x000001fe (dos): 55 aa
/dev/sdd: calling ioctl to re-read partition table: Success
[root@server2 mapper]# lsblk
NAME          MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
sda             8:0    0   20G  0 disk 
├─sda1          8:1    0    1G  0 part 
└─sda2          8:2    0   19G  0 part 
  ├─rhel-root 253:0    0   17G  0 lvm  /
  └─rhel-swap 253:1    0    2G  0 lvm  [SWAP]
sdb             8:16   0  250M  0 disk 
sdc             8:32   0  250M  0 disk 
sdd             8:48   0  250M  0 disk 
sde             8:64   0  250M  0 disk 
sdf             8:80   0    5G  0 disk 
sr0            11:0    1  9.8G  0 rom  

Thin Provisioning and LVM

Thin Provisioning

  • Allows for an economical allocation and utilization of storage space by moving arbitrary data blocks to contiguous locations, which results in empty block elimination.
  • Can create a thin pool of storage space and assign volumes much larger storage space than the physical capacity of the pool.
  • Workloads begin consuming the actual allocated space for data writing.
  • When a preset custom threshold (80%, for instance) on the actual consumption of the physical storage in the pool is reached, expand the pool dynamically by adding more physical storage to it.
  • The volumes will automatically start exploiting the new space right away.
  • Helps avoid spending more money upfront than necessary (see the sketch below).
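As a minimal sketch (vgdata, tpool, and thinvol are hypothetical names; the -T forms of lvcreate are standard LVM), a thin pool and an oversubscribed thin volume could be created like this:

sudo lvcreate -L 1G -T vgdata/tpool               # create a 1GB thin pool in vgdata
sudo lvcreate -V 10G -T vgdata/tpool -n thinvol   # thin volume with a 10GB virtual size
sudo lvs -a vgdata                                # Data% shows actual pool consumption

When the pool’s Data% approaches the chosen threshold, extend it with lvextend; the thin volumes keep writing without interruption.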

Logical Volume Manager (LVM)

  • Used for managing block storage in Linux.
  • Provides an abstraction layer between the physical storage and the file system
  • Enables the file system to be resized, span across multiple disks, use arbitrary disk space, etc.
  • Accumulates spaces taken from partitions or entire disks (called Physical Volumes) to form a logical container (called Volume Group) which is then divided into logical partitions (called Logical Volumes).
  • online resizing of volume groups and logical volumes,
  • online data migration between logical volumes and between physical volumes
  • user-defined naming for volume groups and logical volumes
  • mirroring and striping across multiple disks
  • snapshotting of logical volumes.

  • Made up of three key objects called physical volume, volume group, and logical volume.
  • These objects are further virtually broken down into Physical Extents (PEs) and Logical Extents (LEs).

Physical Volume(PV)

  • created when a block storage device such as a partition or an entire disk is initialized and brought under LVM control.
  • This process constructs LVM data structures on the device, including a label on the second sector and metadata shortly thereafter.
  • The label includes the UUID, size, and pointers to the locations of data and metadata areas.
  • Given the criticality of metadata, LVM stores a copy of it at the end of the physical volume as well.
  • The rest of the device space is available for use.

You can use an LVM command called pvs (physical volume scan or summary) to scan and list available physical volumes on server2:

[root@server2 ~]# sudo pvs
  PV         VG   Fmt  Attr PSize   PFree
  /dev/sda2  rhel lvm2 a--  <19.00g    0
  • (a for allocatable under Attr)

Try running this command again with the -v flag to view more information about the physical volume.

Volume Group

  • Created when at least one physical volume is added to it.
  • The space from all physical volumes in a volume group is aggregated to form one large pool of storage, which is then used to build logical volumes.
  • Physical volumes added to a volume group may be of varying sizes.
  • LVM writes volume group metadata on each physical volume that is added to it.
  • The volume group metadata contains its name,date, and time of creation, how it was created, the extent size used, a list of physical and logical volumes, a mapping of physical and logical extents, etc.
  • Can have a custom name assigned to it at the time of its creation.
  • A copy of the volume group metadata is stored and maintained at two distinct locations on each physical volume within the volume group (see the backup note below).
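Besides the on-disk copies, LVM maintains text backups of volume group metadata under /etc/lvm/backup and older archives under /etc/lvm/archive. A manual backup can be taken with vgcfgbackup (a minimal sketch against the rhel volume group):

sudo vgcfgbackup rhel           # writes /etc/lvm/backup/rhel
sudo vgcfgrestore --list rhel   # list restorable metadata backups/archives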

Use vgs (volume group scan or summary) to scan and list available volume groups on server2:

[root@server2 ~]# sudo vgs
  VG   #PV #LV #SN Attr   VSize   VFree
  rhel   1   2   0 wz--n- <19.00g    0
  • Status of the volume group under the Attr column (w for writeable, z for resizable, and n for normal),

Try running this command again with the -v flag to view more information about the volume group.

Physical Extent

  • A physical volume is divided into several smaller logical pieces when it is added to a volume group.
  • These logical pieces are known as Physical Extents (PE).
  • An extent is the smallest allocatable unit of space in LVM.
  • At the time of volume group creation, you can either define the size of the PE or leave it to the default value of 4MB.
  • This implies that a 20GB physical volume would have approximately 5,000 PEs.
  • Any physical volumes added to this volume group thereafter will use the same PE size.

Use vgdisplay (volume group display) on server2 and grep for ‘PE Size’ to view the PE size used in the rhel volume group:

[root@server2 ~]# sudo vgdisplay rhel | grep 'PE Size'
  PE Size               4.00 MiB

Logical Volume

  • A volume group consists of a pool of storage taken from one or more physical volumes.
  • This volume group space is used to create one or more Logical Volumes (LVs).
  • A logical volume can be created or removed online, expanded or shrunk online, and can use space taken from one or multiple physical volumes inside the volume group.

The default naming convention for logical volumes is lvol0, lvol1, lvol2, and so on; you may assign custom names instead.

Use lvs (logical volume scan or summary) to scan and list available logical volumes on server2:

[root@server2 ~]# sudo lvs
  LV   VG   Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  root rhel -wi-ao---- <17.00g                                                    
  swap rhel -wi-ao----   2.00g
  • Attr column (w for writeable, i for inherited allocation policy, a for active, and o for open) and their sizes.

Try running this command again with the -v flag to view more information about the logical volumes.

Logical Extent

  • A logical volume is made up of Logical Extents (LE).
  • Logical extents point to physical extents, and they may be random or contiguous.
  • The larger a logical volume is, the more logical extents it will have.
  • Logical extents are a set of physical extents allocated to a logical volume.
  • The LE size is always the same as the PE size in a volume group.
  • The default LE size is 4MB, which corresponds to the default PE size of 4MB.

Use lvdisplay (logical volume display) on server2 to view information about the root logical volume in the rhel volume group.

[root@server30 ~]# lvdisplay /dev/rhel/root
  --- Logical volume ---
  LV Path                /dev/rhel/root
  LV Name                root
  VG Name                rhel
  LV UUID                DhHyeI-VgwM-w75t-vRcC-5irj-AuHC-neryQf
  LV Write Access        read/write
  LV Creation host, time localhost.localdomain, 2024-07-08 17:32:18 -0700
  LV Status              available
  # open                 1
  LV Size                <17.00 GiB
  Current LE             4351
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:0
  • The output does not disclose the LE size; however, you can convert the LV size to MiB (about 17,404) and divide it by the Current LE count (4,351) to get the LE size (approximately 4MB), as scripted below.
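The same arithmetic can be scripted. This is a small sketch that parses lvdisplay output, so the awk field positions are an assumption based on the output format shown above:

size_mib=$(sudo lvdisplay --units m /dev/rhel/root | awk '/LV Size/ {print $3}')
le_count=$(sudo lvdisplay /dev/rhel/root | awk '/Current LE/ {print $3}')
echo "scale=2; $size_mib / $le_count" | bc      # prints roughly 4.00 (MiB per LE)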

LVM Operations and Commands

  • Creating and removing a physical volume, volume group, and logical volume
  • Extending and reducing a volume group and logical volume
  • Renaming a volume group and logical volume
  • listing and displaying physical volume, volume group, and logical volume information.

Create and Remove Operations

pvcreate/pvremove

  • Initializes/uninitializes a disk or partition for LVM use

vgcreate/vgremove

  • Creates/removes a volume group

lvcreate/lvremove

  • Creates/removes a logical volume

Extend and Reduce Operations

vgextend/vgreduce

  • Adds/removes a physical volume to/from a volume group

lvextend/lvreduce

  • Extends/reduces the size of a logical volume

lvresize

  • Resizes a logical volume. With the -r option, this command calls the fsadm command to resize the underlying file system as well (see the example below).
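For instance, to grow a logical volume and its file system in one step (a sketch using a hypothetical LV path):

sudo lvresize -r -L +500M /dev/vgbook/lvbook1

Without -r, only the LV changes size; the file system must then be resized separately (e.g., with xfs_growfs or resize2fs).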

Rename Operations

vgrename

  • Rename a volume group

lvrename

  • Rename a logical volume

List and Display Operations

pvs/pvdisplay

  • Lists/displays physical volume information

vgs/vgdisplay

  • Lists/displays volume group information

lvs/lvdisplay

  • Lists/displays logical volume information

  • All the tools accept the -v switch to support verbosity.

Exercise 13-6: Create Physical Volume and Volume Group (server2)

  • Initialize one partition sdd1 (90MB) and one disk sde (250MB) for use in LVM.
  • Create a volume group called vgbook with a PE size of 16MB and add both physical volumes to it.
  • List and display the volume group and the physical volumes.

1. Create a partition of size 90MB on sdd using the parted command and confirm. You need to label the disk first, as it is a new disk.

[root@server2 ~]# sudo parted /dev/sdd mklabel msdos
Information: You may need to update /etc/fstab.

[root@server2 ~]# sudo parted /dev/sdd mkpart primary 1 91m               
Information: You may need to update /etc/fstab.

[root@server2 ~]# sudo parted /dev/sdd print                              
Model: ATA VBOX HARDDISK (scsi)
Disk /dev/sdd: 262MB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags: 

Number  Start   End     Size    Type     File system  Flags
 1      1049kB  91.2MB  90.2MB  primary

2. Initialize the sdd1 partition and the sde disk using the pvcreate command. Note that there is no need to apply a disk label on sde with parted as LVM does not require it.

[root@server2 ~]# sudo pvcreate /dev/sdd1 /dev/sde -v
  Wiping signatures on new PV /dev/sdd1.
  Wiping signatures on new PV /dev/sde.
  Set up physical volume for "/dev/sdd1" with 176128 available sectors.
  Zeroing start of device /dev/sdd1.
  Writing physical volume data to disk "/dev/sdd1".
  Physical volume "/dev/sdd1" successfully created.
  Set up physical volume for "/dev/sde" with 512000 available sectors.
  Zeroing start of device /dev/sde.
  Writing physical volume data to disk "/dev/sde".
  Physical volume "/dev/sde" successfully created.

3. Create vgbook volume group using the vgcreate command and add the two physical volumes to it. Use the -s option to specify the PE size in MBs.

[root@server2 ~]# sudo vgcreate -vs 16 vgbook /dev/sdd1 /dev/sde
  Wiping signatures on new PV /dev/sdd1.
  Wiping signatures on new PV /dev/sde.
  Adding physical volume '/dev/sdd1' to volume group 'vgbook'
  Adding physical volume '/dev/sde' to volume group 'vgbook'
  Creating volume group backup "/etc/lvm/backup/vgbook" (seqno 1).
  Volume group "vgbook" successfully created

4. List the volume group information:

[root@server2 ~]# sudo vgs vgbook
  VG     #PV #LV #SN Attr   VSize   VFree  
  vgbook   2   0   0 wz--n- 320.00m 320.00m

5. Display detailed information about the volume group and the physical volumes it contains:

[root@server2 ~]# sudo vgdisplay -v vgbook
  --- Volume group ---
  VG Name               vgbook
  System ID             
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                2
  Act PV                2
  VG Size               320.00 MiB
  PE Size               16.00 MiB
  Total PE              20
  Alloc PE / Size       0 / 0   
  Free  PE / Size       20 / 320.00 MiB
  VG UUID               zRu1d2-ZgDL-bnzV-I9U1-0IFo-uM4x-w4bX0Q
   
  --- Physical volumes ---
  PV Name               /dev/sdd1     
  PV UUID               8x8IgZ-3z5T-ODA8-dofQ-xk5s-QN7I-KwpQ1e
  PV Status             allocatable
  Total PE / Free PE    5 / 5
   
  PV Name               /dev/sde     
  PV UUID               xJU0Hh-W5k9-FyKO-d6Ha-1ofW-ajvh-hJSo8R
  PV Status             allocatable
  Total PE / Free PE    15 / 15

6. List the physical volume information:

[root@server2 ~]# sudo pvs
  PV         VG     Fmt  Attr PSize   PFree  
  /dev/sda2  rhel   lvm2 a--  <19.00g      0 
  /dev/sdd1  vgbook lvm2 a--   80.00m  80.00m
  /dev/sde   vgbook lvm2 a--  240.00m 240.00m

7. Display detailed information about the physical volumes:

[root@server2 ~]# sudo pvdisplay /dev/sdd1
  --- Physical volume ---
  PV Name               /dev/sdd1
  VG Name               vgbook
  PV Size               86.00 MiB / not usable 6.00 MiB
  Allocatable           yes 
  PE Size               16.00 MiB
  Total PE              5
  Free PE               5
  Allocated PE          0
  PV UUID               8x8IgZ-3z5T-ODA8-dofQ-xk5s-QN7I-KwpQ1e
  • Once a partition or disk is initialized and added to a volume group, it is treated identically to any other physical volume in the group. LVM does not prefer one over the other.

Exercise 13-7: Create Logical Volumes(server2)

  • Create two logical volumes, lvol0 and lvbook1, in the vgbook volume group.
  • Use 120MB for lvol0 and 192MB for lvbook1 from the available pool of space.
  • Display the details of the volume group and the logical volumes.

1. Create a logical volume with the default name lvol0 using the lvcreate command. Use the -L option to specify the logical volume size, 120MB. You may use the -v, -vv, or -vvv option with the command for verbosity.

[root@server2 ~]# sudo lvcreate -vL 120 vgbook
  Rounding up size to full physical extent 128.00 MiB
  Creating logical volume lvol0
  Archiving volume group "vgbook" metadata (seqno 1).
  Activating logical volume vgbook/lvol0.
  activation/volume_list configuration setting not defined: Checking only host tags for vgbook/lvol0.
  Creating vgbook-lvol0
  Loading table for vgbook-lvol0 (253:2).
  Resuming vgbook-lvol0 (253:2).
  Wiping known signatures on logical volume vgbook/lvol0.
  Initializing 4.00 KiB of logical volume vgbook/lvol0 with value 0.
  Logical volume "lvol0" created.
  Creating volume group backup "/etc/lvm/backup/vgbook" (seqno 2).
  • Size for the logical volume may be specified in units such as MBs, GBs, TBs, or as a count of LEs

  • MB is the default if no unit is specified

  • The size of a logical volume is always in multiples of the PE size. For instance, logical volumes created in vgbook with the PE size set at 16MB can be 16MB, 32MB, 48MB, 64MB, and so on.

2. Create lvbook1 of size 192MB (16x12) using the lvcreate command. Use the -l switch to specify the size in logical extents and -n for the custom name.

[root@server2 ~]# sudo lvcreate -l 12 -n lvbook1 vgbook
  Logical volume "lvbook1" created.

3. List the logical volume information:

[root@server2 ~]# sudo lvs
  LV      VG     Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  root    rhel   -wi-ao---- <17.00g                                                    
  swap    rhel   -wi-ao----   2.00g                                                    
  lvbook1 vgbook -wi-a----- 192.00m                                                    
  lvol0   vgbook -wi-a----- 128.00m 

4. Display detailed information about the volume group including the logical volumes and the physical volumes:

[root@server2 ~]# sudo vgdisplay -v vgbook
  --- Volume group ---
  VG Name               vgbook
  System ID             
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  3
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               0
  Max PV                0
  Cur PV                2
  Act PV                2
  VG Size               320.00 MiB
  PE Size               16.00 MiB
  Total PE              20
  Alloc PE / Size       20 / 320.00 MiB
  Free  PE / Size       0 / 0   
  VG UUID               zRu1d2-ZgDL-bnzV-I9U1-0IFo-uM4x-w4bX0Q
   
  --- Logical volume ---
  LV Path                /dev/vgbook/lvol0
  LV Name                lvol0
  VG Name                vgbook
  LV UUID                9M9ahf-1L3y-c0yk-3Z2O-UzjH-0Amt-QLi4p5
  LV Write Access        read/write
  LV Creation host, time server2, 2024-06-12 02:42:51 -0700
  LV Status              available
  open                 0
  LV Size                128.00 MiB
  Current LE             8
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:2
   
  --- Logical volume ---
  LV Path                /dev/vgbook/lvbook1
  LV Name                lvbook1
  VG Name                vgbook
  LV UUID                pgd8qR-YXXK-3Idv-qmpW-w8Az-WGLR-g2d8Yn
  LV Write Access        read/write
  LV Creation host, time server2, 2024-06-12 02:45:31 -0700
  LV Status              available
  # open                 0
  LV Size                192.00 MiB
  Current LE             12
  Segments               2
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:3
   
  --- Physical volumes ---
  PV Name               /dev/sdd1     
  PV UUID               8x8IgZ-3z5T-ODA8-dofQ-xk5s-QN7I-KwpQ1e
  PV Status             allocatable
  Total PE / Free PE    5 / 0
   
  PV Name               /dev/sde     
  PV UUID               xJU0Hh-W5k9-FyKO-d6Ha-1ofW-ajvh-hJSo8R
  PV Status             allocatable
  Total PE / Free PE    15 / 0

Alternatively, you can run the following to view only the logical volume details:

[root@server2 ~]# sudo lvdisplay /dev/vgbook/lvol0
  --- Logical volume ---
  LV Path                /dev/vgbook/lvol0
  LV Name                lvol0
  VG Name                vgbook
  LV UUID                9M9ahf-1L3y-c0yk-3Z2O-UzjH-0Amt-QLi4p5
  LV Write Access        read/write
  LV Creation host, time server2, 2024-06-12 02:42:51 -0700
  LV Status              available
  # open                 0
  LV Size                128.00 MiB
  Current LE             8
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:2
[root@server2 ~]# sudo lvdisplay /dev/vgbook/lvbook1
  --- Logical volume ---
  LV Path                /dev/vgbook/lvbook1
  LV Name                lvbook1
  VG Name                vgbook
  LV UUID                pgd8qR-YXXK-3Idv-qmpW-w8Az-WGLR-g2d8Yn
  LV Write Access        read/write
  LV Creation host, time server2, 2024-06-12 02:45:31 -0700
  LV Status              available
  # open                 0
  LV Size                192.00 MiB
  Current LE             12
  Segments               2
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:3

Exercise 13-8: Extend a Volume Group and a Logical Volume(server2)

  • Add another partition sdd2 of size 158MB to vgbook to increase the pool of allocatable space.
  • Initialize the new partition prior to adding it to the volume group.
  • Increase the size of lvbook1 to 336MB.
  • Display basic information for the physical volumes, volume group, and logical volume.

1. Create a partition of size 158MB on sdd using the parted command. Display the new partition to confirm the partition number and size.

[root@server20 ~]# parted /dev/sdd mkpart primary 91 250

[root@server2 ~]# sudo parted /dev/sdd print                              
Model: ATA VBOX HARDDISK (scsi)
Disk /dev/sdd: 262MB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags: 

Number  Start   End     Size    Type     File system  Flags
 1      1049kB  91.2MB  90.2MB  primary
 2      92.3MB  250MB   157MB   primary               lvm

2. Initialize sdd2 using the pvcreate command:

[root@server2 ~]# sudo pvcreate /dev/sdd2
  Physical volume "/dev/sdd2" successfully created.

3. Extend vgbook by adding the new physical volume to it:

[root@server2 ~]# sudo vgextend vgbook /dev/sdd2
  Volume group "vgbook" successfully extended

4. List the volume group:

[root@server2 ~]# sudo vgs
  VG     #PV #LV #SN Attr   VSize   VFree  
  rhel     1   2   0 wz--n- <19.00g      0 
  vgbook   3   2   0 wz--n- 464.00m 144.00m

5. Extend the size of lvbook1 to 336MB by adding 144MB using the lvextend command:

[root@server2 ~]# sudo lvextend -L +144 /dev/vgbook/lvbook1
  Size of logical volume vgbook/lvbook1 changed from 192.00 MiB (12 extents) to 336.00 MiB (21 extents).
  Logical volume vgbook/lvbook1 successfully resized.

EXAM TIP: Make sure the expansion of a logical volume does not affect the file system and the data it contains.

6. Issue vgdisplay on vgbook with the -v switch for the updated details:

[root@server2 ~]# sudo vgdisplay -v vgbook
  --- Volume group ---
  VG Name               vgbook
  System ID             
  Format                lvm2
  Metadata Areas        3
  Metadata Sequence No  5
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               0
  Max PV                0
  Cur PV                3
  Act PV                3
  VG Size               464.00 MiB
  PE Size               16.00 MiB
  Total PE              29
  Alloc PE / Size       29 / 464.00 MiB
  Free  PE / Size       0 / 0   
  VG UUID               zRu1d2-ZgDL-bnzV-I9U1-0IFo-uM4x-w4bX0Q
   
  --- Logical volume ---
  LV Path                /dev/vgbook/lvol0
  LV Name                lvol0
  VG Name                vgbook
  LV UUID                9M9ahf-1L3y-c0yk-3Z2O-UzjH-0Amt-QLi4p5
  LV Write Access        read/write
  LV Creation host, time server2, 2024-06-12 02:42:51 -0700
  LV Status              available
  open                 0
  LV Size                128.00 MiB
  Current LE             8
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:2
   
  --- Logical volume ---
  LV Path                /dev/vgbook/lvbook1
  LV Name                lvbook1
  VG Name                vgbook
  LV UUID                pgd8qR-YXXK-3Idv-qmpW-w8Az-WGLR-g2d8Yn
  LV Write Access        read/write
  LV Creation host, time server2, 2024-06-12 02:45:31 -0700
  LV Status              available
  # open                 0
  LV Size                336.00 MiB
  Current LE             21
  Segments               3
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:3
   
  --- Physical volumes ---
  PV Name               /dev/sdd1     
  PV UUID               8x8IgZ-3z5T-ODA8-dofQ-xk5s-QN7I-KwpQ1e
  PV Status             allocatable
  Total PE / Free PE    5 / 0
   
  PV Name               /dev/sde     
  PV UUID               xJU0Hh-W5k9-FyKO-d6Ha-1ofW-ajvh-hJSo8R
  PV Status             allocatable
  Total PE / Free PE    15 / 0
   
  PV Name               /dev/sdd2     
  PV UUID               1olOnk-o8FH-uJRD-2pJf-8GCy-3K0M-gcf3pF
  PV Status             allocatable
  Total PE / Free PE    9 / 0

7. View a summary of the physical volumes:

[root@server2 ~]# sudo pvs
  PV         VG     Fmt  Attr PSize   PFree
  /dev/sda2  rhel   lvm2 a--  <19.00g    0 
  /dev/sdd1  vgbook lvm2 a--   80.00m    0 
  /dev/sdd2  vgbook lvm2 a--  144.00m    0 
  /dev/sde   vgbook lvm2 a--  240.00m    0

8. View a summary of the logical volumes:

[root@server2 ~]# sudo lvs
  LV      VG     Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  root    rhel   -wi-ao---- <17.00g                                                    
  swap    rhel   -wi-ao----   2.00g                                                    
  lvbook1 vgbook -wi-a----- 336.00m                                                    
  lvol0   vgbook -wi-a----- 128.00m 

Exercise 13-9: Rename, Reduce, Extend, and Remove Logical Volumes(server2)

  • Rename lvol0 to lvbook2.
  • Decrease the size of lvbook2 to 50MB using the lvreduce command
  • Add 32MB with the lvresize command.
  • remove both logical volumes.
  • display the summary for the volume groups, logical volumes, and physical volumes.

1. Rename lvol0 to lvbook2 using the lvrename command and confirm with lvs:

[root@server2 ~]# sudo lvrename vgbook lvol0 lvbook2
  Renamed "lvol0" to "lvbook2" in volume group "vgbook"

2. Reduce the size of lvbook2 to 50MB with the lvreduce command. Specify the absolute desired size for the logical volume. Answer “Do you really want to reduce vgbook/lvbook2?” in the affirmative.

[root@server2 ~]# sudo lvreduce -L 50 /dev/vgbook/lvbook2
  Rounding size to boundary between physical extents: 64.00 MiB.
  No file system found on /dev/vgbook/lvbook2.
  Size of logical volume vgbook/lvbook2 changed from 128.00 MiB (8 extents) to 64.00 MiB (4 extents).
  Logical volume vgbook/lvbook2 successfully resized.

3. Add 32MB to lvbook2 with the lvresize command:

[root@server2 ~]# sudo lvresize -L +32 /dev/vgbook/lvbook2
  Size of logical volume vgbook/lvbook2 changed from 64.00 MiB (4 extents) to 96.00 MiB (6 extents).
  Logical volume vgbook/lvbook2 successfully resized.

4. Use the pvs, lvs, vgs, and vgdisplay commands to view the updated allocation.

[root@server2 ~]# pvs
  PV         VG     Fmt  Attr PSize   PFree 
  /dev/sda2  rhel   lvm2 a--  <19.00g     0 
  /dev/sdd1  vgbook lvm2 a--   80.00m     0 
  /dev/sdd2  vgbook lvm2 a--  144.00m     0 
  /dev/sde   vgbook lvm2 a--  240.00m 32.00m
  
[root@server2 ~]# lvs
  LV      VG     Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  root    rhel   -wi-ao---- <17.00g                                                    
  swap    rhel   -wi-ao----   2.00g                                                    
  lvbook1 vgbook -wi-a----- 336.00m                                                    
  lvbook2 vgbook -wi-a-----  96.00m  
 
[root@server2 ~]# vgs
  VG     #PV #LV #SN Attr   VSize   VFree 
  rhel     1   2   0 wz--n- <19.00g     0 
  vgbook   3   2   0 wz--n- 464.00m 32.00m
  
[root@server2 ~]# vgdisplay
  --- Volume group ---
  VG Name               vgbook
  System ID             
  Format                lvm2
  Metadata Areas        3
  Metadata Sequence No  8
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               0
  Max PV                0
  Cur PV                3
  Act PV                3
  VG Size               464.00 MiB
  PE Size               16.00 MiB
  Total PE              29
  Alloc PE / Size       27 / 432.00 MiB
  Free  PE / Size       2 / 32.00 MiB
  VG UUID               zRu1d2-ZgDL-bnzV-I9U1-0IFo-uM4x-w4bX0Q
   
  --- Volume group ---
  VG Name               rhel
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  3
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               2
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               <19.00 GiB
  PE Size               4.00 MiB
  Total PE              4863
  Alloc PE / Size       4863 / <19.00 GiB
  Free  PE / Size       0 / 0   
  VG UUID               UiK3fy-FGOc-2fnP-C1Y6-JS0l-irEe-Sq3c4h

5. Remove both lvbook1 and lvbook2 logical volumes using the lvremove command. Use the -f option to suppress the “Do you really want to remove active logical volume” message.

[root@server2 ~]# sudo lvremove /dev/vgbook/lvbook1 -f
  Logical volume "lvbook1" successfully removed.
[root@server2 ~]# sudo lvremove /dev/vgbook/lvbook2 -f
  Logical volume "lvbook2" successfully removed.
  • Removing an LV is destructive.
  • Back up any data in the target LV before deleting it.
  • You will need to unmount the file system or disable swap in the logical volume first.

6. Execute the vgdisplay command and grep for “Cur LV” to see the number of logical volumes currently in vgbook. It should show 0, as you have removed both logical volumes.

[root@server2 ~]# sudo vgdisplay vgbook | grep 'Cur LV'
  Cur LV                0

Exercise 13-10: Reduce and Remove a Volume Group(server2)

  • Reduce vgbook by removing the sdd1 and sde physical volumes from it
  • Remove the volume group.
  • Confirm the deletion of the volume group and the logical volumes at the end.

1. Remove sdd1 and sde physical volumes from vgbook by issuing the vgreduce command:

[root@server2 ~]# sudo vgreduce vgbook /dev/sdd1 /dev/sde
  Removed "/dev/sdd1" from volume group "vgbook"
  Removed "/dev/sde" from volume group "vgbook"

2. Remove the volume group using the vgremove command. This will also remove the last physical volume, sdd2, from it.

[root@server2 ~]# sudo vgremove vgbook
  Volume group "vgbook" successfully removed
  • Use the -f option with the vgremove command to force the volume group removal even if it contains any number of logical and physical volumes in it.

3. Execute the vgs and lvs commands for confirmation:

[root@server2 ~]# sudo vgs
  VG   #PV #LV #SN Attr   VSize   VFree
  rhel   1   2   0 wz--n- <19.00g    0 
[root@server2 ~]# sudo lvs
  LV   VG   Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  root rhel -wi-ao---- <17.00g                                                    
  swap rhel -wi-ao----   2.00g    

Exercise 13-11: Uninitialize Physical Volumes (Server2)

  • Uninitialize all three physical volumes—sdd1, sdd2, and sde—by deleting the LVM structural information from them.
  • Use the pvs command for confirmation.
  • Remove the partitions from the sdd disk.
  • Verify that all disks used in Exercises 13-6 to 13-10 are now in their original raw state.

1. Remove the LVM structures from sdd1, sdd2, and sde using the pvremove command:

[root@server2 ~]# sudo pvremove /dev/sdd1 /dev/sdd2 /dev/sde
  Labels on physical volume "/dev/sdd1" successfully wiped.
  Labels on physical volume "/dev/sdd2" successfully wiped.
  Labels on physical volume "/dev/sde" successfully wiped.

2. Confirm the removal using the pvs command:

[root@server2 ~]# sudo pvs
  PV         VG   Fmt  Attr PSize   PFree
  /dev/sda2  rhel lvm2 a--  <19.00g    0 

The LVM structures are now removed. Once the partitions are deleted in the next step, the disks are back to their raw state and can be repurposed.

3. Remove the partitions from sdd using the parted command:

[root@server2 ~]# sudo parted /dev/sdd rm 1 ; sudo parted /dev/sdd rm 2
Information: You may need to update /etc/fstab.

Information: You may need to update /etc/fstab.  

4. Verify that all disks used in previous exercises have returned to their original raw state using the lsblk command:

[root@server2 ~]# lsblk                                                   
NAME          MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
sda             8:0    0   20G  0 disk 
├─sda1          8:1    0    1G  0 part /boot
└─sda2          8:2    0   19G  0 part 
  ├─rhel-root 253:0    0   17G  0 lvm  /
  └─rhel-swap 253:1    0    2G  0 lvm  [SWAP]
sdb             8:16   0  250M  0 disk 
sdc             8:32   0  250M  0 disk 
sdd             8:48   0  250M  0 disk 
sde             8:64   0  250M  0 disk 
sdf             8:80   0    5G  0 disk 
sr0            11:0    1  9.8G  0 rom  

Virtual Data Optimizer (VDO)

  • Used for storage optimization
  • Device driver layer that sits between the Linux kernel and the physical storage devices.
  • Conserve disk space, improve data throughput, and save on storage cost.
  • Employs thin provisioning, de-duplication, and compression technologies to realize these goals.

How VDO Conserves Storage

Stage 1

  • Makes use of thin provisioning to identify and eliminate empty (zero-byte) data blocks. (zero-block elimination)
  • Removes randomization of data blocks by moving in-use data blocks to contiguous locations on the storage device.

Stage 2

  • If it detects that new data is an identical copy of some existing data, it makes an internal note of it but does not actually write the redundant data to the disk. (de-duplication)
  • Implemented with the inclusion of a kernel module called UDS (Universal De-duplication Service).

Stage 3

  • Calls upon another kernel module called kvdo, which compresses the residual data blocks and consolidates them on a lower number of blocks.
  • Results in a further drop in storage space utilization.
  • Runs in the background and processes inbound data through the three stages on VDO-enabled volumes.
  • Not a CPU or memory-intensive process

VDO Integration with LVM

  • LVM utilities have been enhanced to include options to support VDO volumes (see the sketch below).
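For example, lvs can report VDO-specific attributes. This is a sketch; the exact field names vary by LVM version, so discover them with lvs -o help first:

sudo lvs -o help 2>&1 | grep -i vdo            # list available VDO reporting fields
sudo lvs -o name,vdo_operating_mode vgvdo      # query a selected field (assumed field name)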

VDO Components

  • Utilizes the concepts of pool and volume.

pool

  • A logical volume that is created inside an LVM volume group using a deduplicated storage space.

volume

  • Just like a regular LVM logical volume, but it is provisioned in a pool.
  • Needs to be formatted with file system structures before it can be used.

vdo and kmod-kvdo Packages

  • Used to create, mount, and manage LVM VDO volumes
  • Installed on the system by default (though see kmod-kvdo below)

vdo

  • Installs the tools necessary to support the creation and management of VDO volumes

kmod-kvdo

  • Implements fine-grained storage virtualization, thin provisioning, and compression. May not be installed by default (Exercise 13-12 installs it explicitly; see the check below).
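To check whether both packages are present and install any that are missing (a minimal sketch):

rpm -q vdo kmod-kvdo               # query the installed versions, if any
sudo dnf install -y vdo kmod-kvdo

Exercise 13-12 below installs kmod-kvdo before creating the VDO volume.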

Exercise 13-12: Create an LVM VDO Volume

  • Initialize the 5GB disk (sdf) for use in LVM VDO.
  • Create a volume group called vgvdo and add the physical volume to it.
  • List and display the volume group and the physical volume.
  • Create a VDO volume called lvvdo with a virtual size of 20GB.

1. Initialize the sdf disk using the pvcreate command:

[root@server2 ~]# sudo pvcreate /dev/sdf
  Physical volume "/dev/sdf" successfully created.

2. Create vgvdo volume group using the vgcreate command:

[root@server2 ~]# sudo vgcreate vgvdo /dev/sdf
  Volume group "vgvdo" successfully created

3. Display basic information about the volume group:

[root@server2 ~]# sudo vgdisplay vgvdo
  Devices file sys_wwid t10.ATA_VBOX_HARDDISK_VB428913dd-446a194f PVID none last seen on /dev/sdd1 not found.
  Devices file sys_wwid t10.ATA_VBOX_HARDDISK_VB428913dd-446a194f PVID none last seen on /dev/sdd2 not found.
  --- Volume group ---
  VG Name               vgvdo
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               <5.00 GiB
  PE Size               4.00 MiB
  Total PE              1279
  Alloc PE / Size       0 / 0   
  Free  PE / Size       1279 / <5.00 GiB
  VG UUID               tED1vC-Ylec-fpeR-KM8F-8FzP-eaQ4-AsFrgc

4. Create a VDO volume called lvvdo using the lvcreate command. Use the -l option to specify the number of logical extents (1279) to be allocated and the -V option for the amount of virtual space.

[root@server2 ~]# sudo dnf install kmod-kvdo
[root@server2 ~]# sudo lvcreate --type vdo -l 1279 -n lvvdo -V 20G vgvdo
    The VDO volume can address 2 GB in 1 data slab.
    It can grow to address at most 16 TB of physical storage in 8192 slabs.
    If a larger maximum size might be needed, use bigger slabs.
  Logical volume "lvvdo" created.

5. Display detailed information about the volume group including the logical volume and the physical volume:

[root@server2 ~]# sudo vgdisplay -v vgvdo
  Devices file sys_wwid t10.ATA_VBOX_HARDDISK_VB428913dd-446a194f PVID none last seen on /dev/sdd1 not found.
  Devices file sys_wwid t10.ATA_VBOX_HARDDISK_VB428913dd-446a194f PVID none last seen on /dev/sdd2 not found.
  --- Volume group ---
  VG Name               vgvdo
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  3
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               <5.00 GiB
  PE Size               4.00 MiB
  Total PE              1279
  Alloc PE / Size       1279 / <5.00 GiB
  Free  PE / Size       0 / 0   
  VG UUID               tED1vC-Ylec-fpeR-KM8F-8FzP-eaQ4-AsFrgc
   
  --- Logical volume ---
  LV Path                /dev/vgvdo/vpool0
  LV Name                vpool0
  VG Name                vgvdo
  LV UUID                yGAsK2-MruI-QGy2-Q1IF-CDDC-XPNT-qkjJ9t
  LV Write Access        read/write
  LV Creation host, time server2, 2024-06-16 09:35:46 -0700
  LV VDO Pool data       vpool0_vdata
  LV VDO Pool usage      60.00%
  LV VDO Pool saving     100.00%
  LV VDO Operating mode  normal
  LV VDO Index state     online
  LV VDO Compression st  online
  LV VDO Used size       <3.00 GiB
  LV Status              NOT available
  LV Size                <5.00 GiB
  Current LE             1279
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
   
  --- Logical volume ---
  LV Path                /dev/vgvdo/lvvdo
  LV Name                lvvdo
  VG Name                vgvdo
  LV UUID                nnGTW5-tVFa-T3Cy-9nHj-sozF-2KpP-rVfnSq
  LV Write Access        read/write
  LV Creation host, time server2, 2024-06-16 09:35:47 -0700
  LV VDO Pool name       vpool0
  LV Status              available
  # open                 0
  LV Size                20.00 GiB
  Current LE             5120
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:4
   
  --- Physical volumes ---
  PV Name               /dev/sdf     
  PV UUID               0oAXHG-C4ub-Myou-5vZf-QxIX-KVT3-ipMZCp
  PV Status             allocatable
  Total PE / Free PE    1279 / 0

The output reflects the creation of two logical volumes: a pool called /dev/vgvdo/vpool0 and a volume called /dev/vgvdo/lvvdo.

Exercise 13-13: Remove a Volume Group and Uninitialize Physical Volume(Server2)

  • remove the vgvdo volume group along with the VDO volumes
  • uninitialize the physical volume /dev/sdf.
  • confirm the deletion.

1. Remove the volume group along with the VDO volumes using the vgremove command:

[root@server2 ~]# sudo vgremove vgvdo -f
  Logical volume "lvvdo" successfully removed.
  Volume group "vgvdo" successfully removed

Remember to proceed with caution whenever you perform erase operations.

2. Execute sudo vgs and sudo lvs commands for confirmation.

[root@server2 ~]# sudo vgs
  Devices file sys_wwid t10.ATA_VBOX_HARDDISK_VB428913dd-446a194f PVID none last seen on /dev/sdd1 not found.
  Devices file sys_wwid t10.ATA_VBOX_HARDDISK_VB428913dd-446a194f PVID none last seen on /dev/sdd2 not found.
  VG   #PV #LV #SN Attr   VSize   VFree
  rhel   1   2   0 wz--n- <19.00g    0 
  
[root@server2 ~]# sudo lvs
  Devices file sys_wwid t10.ATA_VBOX_HARDDISK_VB428913dd-446a194f PVID none last seen on /dev/sdd1 not found.
  Devices file sys_wwid t10.ATA_VBOX_HARDDISK_VB428913dd-446a194f PVID none last seen on /dev/sdd2 not found.
  LV   VG   Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  root rhel -wi-ao---- <17.00g                                                    
  swap rhel -wi-ao----   2.00g  

3. Remove the LVM structures from sdf using the pvremove command:

[root@server2 ~]# sudo pvremove /dev/sdf
  Labels on physical volume "/dev/sdf" successfully wiped.

4. Confirm the removal by running sudo pvs.

[root@server2 ~]# sudo pvs
  Devices file sys_wwid t10.ATA_VBOX_HARDDISK_VB428913dd-446a194f PVID none last seen on /dev/sdd1 not found.
  Devices file sys_wwid t10.ATA_VBOX_HARDDISK_VB428913dd-446a194f PVID none last seen on /dev/sdd2 not found.
  PV         VG   Fmt  Attr PSize   PFree
  /dev/sda2  rhel lvm2 a--  <19.00g    0 

The disk is now back to its raw state and can be repurposed.

5. Verify that the sdf disk used in the previous exercises has returned to its original raw state using the lsblk command:

[root@server2 ~]# lsblk
NAME          MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
sda             8:0    0   20G  0 disk 
├─sda1          8:1    0    1G  0 part /boot
└─sda2          8:2    0   19G  0 part 
  ├─rhel-root 253:0    0   17G  0 lvm  /
  └─rhel-swap 253:1    0    2G  0 lvm  [SWAP]
sdb             8:16   0  250M  0 disk 
sdc             8:32   0  250M  0 disk 
sdd             8:48   0  250M  0 disk 
sde             8:64   0  250M  0 disk 
sdf             8:80   0    5G  0 disk 
sr0            11:0    1  9.8G  0 rom 

This brings the exercise to an end.

Storage DIY Labs

Lab 13-1: Create and Remove Partitions with parted

Create a 100MB primary partition on one of the available 250MB disks (lsblk) by invoking the parted utility directly at the command prompt. Apply label “msdos” if the disk is new.

[root@server20 ~]# sudo parted /dev/sdb mklabel msdos
Warning: The existing disk label on /dev/sdb will be destroyed and all data on this disk will be lost. Do you want to
continue?
Yes/No? yes                                                               
Information: You may need to update /etc/fstab.

[root@server20 ~]# sudo parted /dev/sdb mkpart primary 1 101m             
Information: You may need to update /etc/fstab.

Create another 100MB partition by running parted interactively while ensuring that the second partition won’t overlap the first.

[root@server20 ~]# parted /dev/sdb
GNU Parted 3.5
Using /dev/sdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) mkpart primary 101 201m                                         

Verify the label and the partitions.

(parted) print                                                            
Model: ATA VBOX HARDDISK (scsi)
Disk /dev/sdb: 262MB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags: 

Number  Start   End    Size    Type     File system  Flags
 1      1049kB  101MB  99.6MB  primary
 2      101MB   201MB  101MB   primary

Remove both partitions at the command prompt.

[root@server20 ~]# sudo parted /dev/sdb rm 1 rm 2

Lab 13-2: Create and Remove Partitions with gdisk

Create two 80MB partitions on one of the 250MB disks (lsblk) using the gdisk utility. Make sure the partitions won’t overlap.

Command (? for help): o
This option deletes all partitions and creates a new protective MBR.
Proceed? (Y/N): y

Command (? for help): p
Disk /dev/sdb: 512000 sectors, 250.0 MiB
Model: VBOX HARDDISK   
Sector size (logical/physical): 512/512 bytes
Disk identifier (GUID): 226F7476-7F8C-4445-9025-53B6737AD1E4
Partition table holds up to 128 entries
Main partition table begins at sector 2 and ends at sector 33
First usable sector is 34, last usable sector is 511966
Partitions will be aligned on 2048-sector boundaries
Total free space is 511933 sectors (250.0 MiB)

Number  Start (sector)    End (sector)  Size       Code  Name

Command (? for help): n
Partition number (1-128, default 1): 
First sector (34-511966, default = 2048) or {+-}size{KMGTP}: 
Last sector (2048-511966, default = 511966) or {+-}size{KMGTP}: +80M
Current type is 8300 (Linux filesystem)
Hex code or GUID (L to show codes, Enter = 8300): 
Changed type of partition to 'Linux filesystem'

Command (? for help): n
Partition number (2-128, default 2): 2
First sector (34-511966, default = 165888) or {+-}size{KMGTP}: 165888
Last sector (165888-511966, default = 511966) or {+-}size{KMGTP}: +80M
Current type is 8300 (Linux filesystem)
Hex code or GUID (L to show codes, Enter = 8300): 
Changed type of partition to 'Linux filesystem'

Verify the partitions.

Command (? for help): p
Disk /dev/sdb: 512000 sectors, 250.0 MiB
Model: VBOX HARDDISK   
Sector size (logical/physical): 512/512 bytes
Disk identifier (GUID): 226F7476-7F8C-4445-9025-53B6737AD1E4
Partition table holds up to 128 entries
Main partition table begins at sector 2 and ends at sector 33
First usable sector is 34, last usable sector is 511966
Partitions will be aligned on 2048-sector boundaries
Total free space is 184253 sectors (90.0 MiB)

Number  Start (sector)    End (sector)  Size       Code  Name
   1            2048          165887   80.0 MiB    8300  Linux filesystem
   2          165888          329727   80.0 MiB    8300  Linux filesystem

Save

Command (? for help): w

Final checks complete. About to write GPT data. THIS WILL OVERWRITE EXISTING
PARTITIONS!!

Do you want to proceed? (Y/N): y
OK; writing new GUID partition table (GPT) to /dev/sdb.
The operation has completed successfully.

Delete the partitions

Command (? for help): d  
Partition number (1-2): 1

Command (? for help): d
Using 2

Command (? for help): w

Final checks complete. About to write GPT data. THIS WILL OVERWRITE EXISTING
PARTITIONS!!

Do you want to proceed? (Y/N): y
OK; writing new GUID partition table (GPT) to /dev/sdb.
The operation has completed successfully.

Lab 13-3: Create Volume Group and Logical Volumes

Initialize 1x250MB disk for use in LVM (use lsblk to identify available disks).

[root@server2 ~]# sudo parted /dev/sdd mklabel msdos
Warning: The existing disk label on /dev/sdd will be destroyed and all data
on this disk will be lost. Do you want to continue?
Yes/No? yes                                                               
Information: You may need to update /etc/fstab.

[root@server2 ~]# sudo parted /dev/sdd mkpart primary 1 250m              
Information: You may need to update /etc/fstab.

[root@server2 ~]# sudo parted /dev/sdd print                              
Model: ATA VBOX HARDDISK (scsi)
Disk /dev/sdd: 262MB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags: 

Number  Start   End    Size   Type     File system  Flags
 1      1049kB  250MB  249MB  primary
 
[root@server2 ~]# sudo pvcreate /dev/sdd1
  Physical volume "/dev/sdd1" successfully created.

(Can also just use the full disk without making it into a partition first.)
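For example, a minimal sketch of the whole-disk approach (assuming /dev/sde is an unused disk on this system):

[root@server2 ~]# sudo pvcreate /dev/sde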

Create volume group vg100 with PE size 16MB and add the physical volume.

[root@server2 ~]# sudo vgcreate -vs 16 vg100 /dev/sdd1
  Wiping signatures on new PV /dev/sdd1.
  Adding physical volume '/dev/sdd1' to volume group 'vg100'
  Creating volume group backup "/etc/lvm/backup/vg100" (seqno 1).
  Volume group "vg100" successfully created

Create two logical volumes lvol0 and swapvol of sizes 90MB and 120MB.

[root@server2 ~]# sudo lvcreate -vL 90 vg100
  Creating logical volume lvol0
  Archiving volume group "vg100" metadata (seqno 1).
  Activating logical volume vg100/lvol0.
  activation/volume_list configuration setting not defined: Checking only host tags for vg100/lvol0.
  Creating vg100-lvol0
  Loading table for vg100-lvol0 (253:2).
  Resuming vg100-lvol0 (253:2).
  Wiping known signatures on logical volume vg100/lvol0.
  Initializing 4.00 KiB of logical volume vg100/lvol0 with value 0.
  Logical volume "lvol0" created.
  Creating volume group backup "/etc/lvm/backup/vg100" (seqno 2).

[root@server2 ~]# sudo lvcreate -l 8 -n swapvol vg100
  Logical volume "swapvol" created.

Use the vgs, pvs, lvs, and vgdisplay commands for verification.

[root@server2 ~]# lvs
  Devices file sys_wwid t10.ATA_VBOX_HARDDISK_VB428913dd-446a194f PVID none last seen on /dev/sdd1 not found.
  Devices file sys_wwid t10.ATA_VBOX_HARDDISK_VB428913dd-446a194f PVID none last seen on /dev/sdd2 not found.
  LV      VG    Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  root    rhel  -wi-ao---- <17.00g                                                    
  swap    rhel  -wi-ao----   2.00g                                                    
  lvol0   vg100 -wi-a-----  90.00m                                                    
  swapvol vg100 -wi-a----- 120.00m                                                    
[root@server2 ~]# vgs
  Devices file sys_wwid t10.ATA_VBOX_HARDDISK_VB428913dd-446a194f PVID none last seen on /dev/sdd1 not found.
  Devices file sys_wwid t10.ATA_VBOX_HARDDISK_VB428913dd-446a194f PVID none last seen on /dev/sdd2 not found.
  VG    #PV #LV #SN Attr   VSize   VFree 
  rhel    1   2   0 wz--n- <19.00g     0 
  vg100   1   2   0 wz--n- 225.00m 15.00m
  
[root@server2 ~]# pvs
  Devices file sys_wwid t10.ATA_VBOX_HARDDISK_VB428913dd-446a194f PVID none last seen on /dev/sdd1 not found.
  Devices file sys_wwid t10.ATA_VBOX_HARDDISK_VB428913dd-446a194f PVID none last seen on /dev/sdd2 not found.
  PV         VG    Fmt  Attr PSize   PFree 
  /dev/sda2  rhel  lvm2 a--  <19.00g     0 
  /dev/sdd1  vg100 lvm2 a--  225.00m 15.00m
  
[root@server2 ~]# vgdisplay
  Devices file sys_wwid t10.ATA_VBOX_HARDDISK_VB428913dd-446a194f PVID none last seen on /dev/sdd1 not found.
  Devices file sys_wwid t10.ATA_VBOX_HARDDISK_VB428913dd-446a194f PVID none last seen on /dev/sdd2 not found.
  --- Volume group ---
  VG Name               vg100
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  5
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               225.00 MiB
  PE Size               15.00 MiB
  Total PE              15
  Alloc PE / Size       14 / 210.00 MiB
  Free  PE / Size       1 / 15.00 MiB
  VG UUID               fEUf8R-nxKF-Uxud-7rmm-JvSQ-PsN1-Mrs3zc
   
  --- Volume group ---
  VG Name               rhel
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  3
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               2
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               <19.00 GiB
  PE Size               4.00 MiB
  Total PE              4863
  Alloc PE / Size       4863 / <19.00 GiB
  Free  PE / Size       0 / 0   
  VG UUID               UiK3fy-FGOc-2fnP-C1Y6-JS0l-irEe-Sq3c4h

Lab 13-4: Expand Volume Group and Logical Volume

Create a partition on an available 250MB disk and initialize it for use in LVM (use lsblk to identify available disks).

[root@server2 ~]# parted /dev/sdb mklabel msdos
Warning: The existing disk label on /dev/sdb will be destroyed and all data on this disk will be lost. Do you want to continue?
Yes/No? yes                                                               
Information: You may need to update /etc/fstab.

[root@server2 ~]# parted /dev/sdb mkpart primary 1 250m                   
Information: You may need to update /etc/fstab.

Add the new physical volume to vg100.

[root@server2 ~]# sudo vgextend vg100 /dev/sdb1
  Device /dev/sdb1 has updated name (devices file /dev/sdd1)
  Physical volume "/dev/sdb1" successfully created.
  Volume group "vg100" successfully extended

Expand the lvol0 logical volume to size 300MB.

[root@server2 ~]# lvextend -L +210 /dev/vg100/lvol0
  Size of logical volume vg100/lvol0 changed from 90.00 MiB (6 extents) to 300.00 MiB (20 extents).
  Logical volume vg100/lvol0 successfully resized.

Use the vgs, pvs, lvs, and vgdisplay commands for verification.

[root@server2 ~]# lvs
  Devices file sys_wwid t10.ATA_VBOX_HARDDISK_VB428913dd-446a194f PVID none last seen on /dev/sdd2 not found.
  LV      VG    Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  root    rhel  -wi-ao---- <17.00g                                                    
  swap    rhel  -wi-ao----   2.00g                                                    
  lvol0   vg100 -wi-a----- 300.00m                                                    
  swapvol vg100 -wi-a----- 120.00m                                                    
[root@server2 ~]# vgs
  Devices file sys_wwid t10.ATA_VBOX_HARDDISK_VB428913dd-446a194f PVID none last seen on /dev/sdd2 not found.
  VG    #PV #LV #SN Attr   VSize   VFree 
  rhel    1   2   0 wz--n- <19.00g     0 
  vg100   2   2   0 wz--n- 450.00m 30.00m
  
[root@server2 ~]# pvs
  Devices file sys_wwid t10.ATA_VBOX_HARDDISK_VB428913dd-446a194f PVID none last seen on /dev/sdd2 not found.
  PV         VG    Fmt  Attr PSize   PFree 
  /dev/sda2  rhel  lvm2 a--  <19.00g     0 
  /dev/sdb1  vg100 lvm2 a--  225.00m 30.00m
  /dev/sdd1  vg100 lvm2 a--  225.00m     0 
  
[root@server2 ~]# vgdisplay
  Devices file sys_wwid t10.ATA_VBOX_HARDDISK_VB428913dd-446a194f PVID none last seen on /dev/sdd2 not found.
  --- Volume group ---
  VG Name               vg100
  System ID             
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  7
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               0
  Max PV                0
  Cur PV                2
  Act PV                2
  VG Size               450.00 MiB
  PE Size               15.00 MiB
  Total PE              30
  Alloc PE / Size       28 / 420.00 MiB
  Free  PE / Size       2 / 30.00 MiB
  VG UUID               fEUf8R-nxKF-Uxud-7rmm-JvSQ-PsN1-Mrs3zc
   
  --- Volume group ---
  VG Name               rhel
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  3
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               2
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               <19.00 GiB
  PE Size               4.00 MiB
  Total PE              4863
  Alloc PE / Size       4863 / <19.00 GiB
  Free  PE / Size       0 / 0   
  VG UUID               UiK3fy-FGOc-2fnP-C1Y6-JS0l-irEe-Sq3c4h
   

Lab 13-5: Add a VDO Logical Volume

Initialize another available disk for use in LVM and add it to vgvdo1 (the commands below use /dev/sdc).

[root@server2 ~]# pvcreate /dev/sdc
  Physical volume "/dev/sdc" successfully created.
  
[root@server2 ~]# sudo vgextend vgvdo1 /dev/sdc
  Volume group "vgvdo1" successfully extended

Create a VDO logical volume named vdovol using the entire disk capacity.

[root@server2 ~]# lvcreate --type vdo -n vdovol -l 100%FREE vgvdo1
WARNING: LVM2_member signature detected on /dev/vgvdo1/vpool0 at offset 536. Wipe it? [y/n]: y
  Wiping LVM2_member signature on /dev/vgvdo1/vpool0.
    Logical blocks defaulted to 523108 blocks.
    The VDO volume can address 2 GB in 1 data slab.
    It can grow to address at most 16 TB of physical storage in 8192 slabs.
    If a larger maximum size might be needed, use bigger slabs.
  Logical volume "vdovol" created.

Use the vgs, pvs, lvs, and vgdisplay commands for verification.

[root@server2 ~]# vgs
  Devices file sys_wwid t10.ATA_VBOX_HARDDISK_VB428913dd-446a194f PVID none last seen on /dev/sdd2 not found.
  Devices file sys_wwid t10.ATA_VBOX_HARDDISK_VB123ecea1-63467dee PVID RjcGRyHDIWY0OqAgfIHC93WT03Na1WoO last seen on /dev/sdd1 not found.
  Devices file sys_wwid t10.ATA_VBOX_HARDDISK_VBa5e3cbf7-10921e08 PVID qeP9dCevNnTy422I8p18NxDKQ2WyDodU last seen on /dev/sdf1 not found.
  Devices file sys_wwid t10.ATA_VBOX_HARDDISK_VB428913dd-446a194f PVID brKVLFEG3AoBzhWoso0Sa1gLYHgNZ4vL last seen on /dev/sdb1 not found.
  VG     #PV #LV #SN Attr   VSize   VFree  
  rhel     1   2   0 wz--n- <19.00g      0 
  vgvdo1   2   2   0 wz--n-  <5.24g 248.00m

Lab 13-6: Reduce and Remove Logical Volumes

Reduce the size of the vdovol logical volume to 80MB.

[root@server2 ~]# lvreduce -L 80 /dev/vgvdo1/vdovol
  No file system found on /dev/vgvdo1/vdovol.
  WARNING: /dev/vgvdo1/vdovol: Discarding 1.91 GiB at offset 83886080, please wait...
  Size of logical volume vgvdo1/vdovol changed from 1.99 GiB (510 extents) to 80.00 MiB (20 extents).
  Logical volume vgvdo1/vdovol successfully resized.
[root@server2 ~]# lvs
  Devices file sys_wwid t10.ATA_VBOX_HARDDISK_VB428913dd-446a194f PVID none last seen on /dev/sdd2 not found.
  Devices file sys_wwid t10.ATA_VBOX_HARDDISK_VB123ecea1-63467dee PVID RjcGRyHDIWY0OqAgfIHC93WT03Na1WoO last seen on /dev/sdd1 not found.
  Devices file sys_wwid t10.ATA_VBOX_HARDDISK_VBa5e3cbf7-10921e08 PVID qeP9dCevNnTy422I8p18NxDKQ2WyDodU last seen on /dev/sdf1 not found.
  Devices file sys_wwid t10.ATA_VBOX_HARDDISK_VB428913dd-446a194f PVID brKVLFEG3AoBzhWoso0Sa1gLYHgNZ4vL last seen on /dev/sdb1 not found.
  LV     VG     Attr       LSize   Pool   Origin Data%  Meta%  Move Log Cpy%Sync Convert
  root   rhel   -wi-ao---- <17.00g                                                      
  swap   rhel   -wi-ao----   2.00g                                                      
  vdovol vgvdo1 vwi-a-v---  80.00m vpool0        0.00                                   
  vpool0 vgvdo1 dwi-------  <5.00g               60.00                                  
[root@server2 ~]# 

Erase the logical volume vdovol.

[root@server2 ~]# lvremove /dev/vgvdo1/vdovol
Do you really want to remove active logical volume vgvdo1/vdovol? [y/n]: y
  Logical volume "vdovol" successfully removed.

Confirm the deletion with vgs, pvs, lvs, and vgdisplay commands.
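For example (output omitted here; vdovol should no longer appear in the lvs listing):

[root@server2 ~]# vgs vgvdo1
[root@server2 ~]# pvs
[root@server2 ~]# lvs vgvdo1
[root@server2 ~]# vgdisplay vgvdo1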

Lab 13-7: Remove Volume Group and Physical Volumes

Remove the volume group and uninitialize the physical volumes.

[root@server2 ~]# vgremove vgvdo1
  Volume group "vgvdo1" successfully removed
[root@server2 ~]# pvremove /dev/sdc
  Labels on physical volume "/dev/sdc" successfully wiped.
[root@server2 ~]# pvremove /dev/sdf
  Labels on physical volume "/dev/sdf" successfully wiped.

Confirm the deletion with vgs, pvs, lvs, and vgdisplay commands.

Use the lsblk command and verify that the disks used for the LVM labs no longer show LVM information.

[root@server2 ~]# lsblk
NAME          MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
sda             8:0    0   20G  0 disk 
├─sda1          8:1    0    1G  0 part /boot
└─sda2          8:2    0   19G  0 part 
  ├─rhel-root 253:0    0   17G  0 lvm  /
  └─rhel-swap 253:1    0    2G  0 lvm  [SWAP]
sdb             8:16   0  250M  0 disk 
sdc             8:32   0  250M  0 disk 
sdd             8:48   0  250M  0 disk 
sde             8:64   0  250M  0 disk 
sdf             8:80   0    5G  0 disk 
sr0            11:0    1  9.8G  0 rom  

Subsections of System

Installation

Chapter 1 RHCSA Notes - Installation

About RHEL9

  • Kernel 5.14
  • Released May 2022
  • Built alongside Fedora 34
  • Installer program = Anaconda
  • Default Bootloader = GRUB2
  • Default automatic partitioning = /boot, /, swap
  • Default desktop environment = GNOME

Installation Logs

/root/anaconda-ks.cfg Configuration entered

/var/log/anaconda/anaconda.log Contains informational, debug, and other general messages

/var/log/anaconda/journal.log Stores messages generated by many services and components during system installation

/var/log/anaconda/packaging.log Records messages generated by the dnf and rpm commands during software installation

/var/log/anaconda/program.log Captures messages generated by external programs

/var/log/anaconda/storage.log Records messages generated by storage modules

/var/log/anaconda/syslog Records messages related to the kernel

/var/log/anaconda/X.log Stores X Window System information

Note: Logs are created in /tmp then transferred over to /var/log/anaconda once the install is finished.

6 Virtual Consoles

  • Monitor the installation process.
  • View diagnostic messages.
  • Discover and fix any issues encountered.
  • Information displayed on the console screens is captured in installation log files.

Console 1 (Ctrl+Alt+F1)

  • Main screen
  • Select language
  • Then switches default console to 6

Console 2 (Ctrl+Alt+F2)

  • Shell interface for root user

Console 3 (Ctrl+Alt+F3)

  • Displays install messages
  • Stores them in /tmp/anaconda.log
  • Info on detected hardware, etc.

Console 4 (Ctrl+Alt+F4)

  • Shows storage messages
  • Stores them in /tmp/storage.log

Console 5 (Ctrl+Alt+F5)

  • Program messages
  • Stores them in /tmp/program.log

Console 6 (Ctrl+Alt+F6)

  • Default Graphical configuration and installation console screen

After installation, Console 1 brings you to the login screen, Console 2 does nothing, and Consoles 3-6 all bring you to the same login screen.

Lab Setup

VM1

server1.example.com 
192.168.0.110 
Memory: 2GB 
Storage: 1x20GB 
2 vCPUs

VM2

server2.example.com 
192.168.0.120 
Memory: 2GB 
Storage: 1x20GB 
	4x250 MB data disk 
	1x5GB data disk 
2 vCPUs

Setting up VM1

Download the disc iso on Redhat’s website: https://access.redhat.com/downloads/content/rhel

Name the VM “RHEL9-VM1” and accept the defaults.

Set the drive to 20GB.

Press “space” to halt autoboot

Select install

Select language

Configure timezone under Time & Date

Go into Installation Destination and click “done”

Network and hostname settings

  1. Change the hostname to server1.example.com
  2. Go to IPv4 settings in Network & Host Name and set to manual address: 192.168.0.110, netmask 24, gateway 192.168.0.1, then save
  3. Slide the on/off switch in the main menu to on

Set root password

Change the boot order

  1. Power off the VM
  2. Set boot sequence to hard disk first, then optical; remove floppy

Accept license terms and create user

ssh from host OS with PuTTY

Issue these commands after setup

whoami 
hostname 
pwd 
logout or ctrl+d

Using cockpit

  • Web gui for managing RHEL system
  • Comes pre-installed
    • if not then install with:
    sudo dnf install cockpit
  • must enable cockpit socket
    sudo systemctl enable --now cockpit.socket
  • https://yourip:9090

Labs

Lab:

Enable cockpit.socket:

sudo systemctl enable --now cockpit.socket

In a web browser, go to https://<your-ip>:9090

Interaction

Looking to get started using Fedora or Red Hat operating systems?

This guide will get you started with the RHEL graphical environment, the file system, and the essential commands needed to begin using Fedora, Red Hat, or other RHEL-based systems.

RedHat (RHEL9) Graphical Environment (Wayland)

Red Hat runs a graphical environment called Wayland, the foundation for running GUI apps. Wayland is a client/server display protocol, which means that the user (the client) requests a resource and the display manager (the server) serves it.

Wayland is slowly replacing an older display protocol called “X”, and offers better graphics capabilities, features, and performance. A Wayland environment consists of a display (or login) manager and a desktop environment.

The display/login manager presents the login screen for users to log in. Once you log in, you land in the pre-configured desktop environment (DE). In RHEL, the display manager is the GNOME Display Manager (GDM).

File System and Directory Hierarchy

The standard for the Linux filesystem is the Filesystem Hierarchy Standard (FHS), which describes locations, names, and permissions for a variety of file types and directories.

The directory structure starts at the root. Which is notated by a “/”. The top levels of the directory can be viewed by running the ls command on the root of the directory tree.

Size of the root file system is automatically determined by the installer program based on the available disk space when you select the default partitioning (it may be altered). Here is a listing of the contents of /:

$ ls /
afs  bin  boot  dev  etc  home  lib  lib64  lost+found  media  mnt  opt  proc  root  run  sbin  snap  srv  sys  tmp  usr  var

Some of these directories hold static data such as commands, configuration files, kernel and device files, etc. And some hold dynamic data such as log and status files.

There are three major categories of file systems. They are:

  1. disk-based
  2. network-based
  3. memory-based

Disk-based file systems reside on physical media such as a hard drive or a USB flash drive and store information persistently. The root and boot file systems are both disk-based and created automatically when you select the default partitioning.

Network-Based file systems are disk-based file systems that are shared over the network for remote access. (Also stored persistently)

Memory-based file systems are virtual. They are created automatically at system startup and destroyed when the system goes down.
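A quick way to see all three categories side by side is df -hT, which lists each mounted file system with its type. Illustrative output only; device names, sizes, and the NFS mount are hypothetical:

$ df -hT
Filesystem             Type   Size  Used Avail Use% Mounted on
/dev/mapper/rhel-root  xfs     17G  4.2G   13G  25% /
tmpfs                  tmpfs  888M     0  888M   0% /dev/shm
server1:/export/data   nfs4    20G  1.1G   19G   6% /data

Here the xfs file system is disk-based, tmpfs is memory-based, and nfs4 is network-based.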

Key Directories in /

/etc (extended text configuration)

This directory contains system configuration files for systemd, LVM, and user shell startup template files.

david@fedora:$ ls /etc
abrt                    dhcp                        gshadow-       locale.conf               openldap            request-key.d          sysctl.conf
adjtime                 DIR_COLORS                  gss            localtime                 opensc.conf         resolv.conf            sysctl.d
aliases                 DIR_COLORS.lightbgcolor     gssproxy       login.defs                opensc-x86_64.conf  rpc                    systemd
alsa                    dleyna-server-service.conf  host.conf      logrotate.conf            openvpn             rpm                    system-release
alternatives            dnf                         hostname       logrotate.d               opt                 rpmdevtools            system-release-cpe
anaconda                dnsmasq.conf                hosts          lvm                       os-release          rpmlint                tcsd.conf
anthy-unicode.conf      dnsmasq.d                   hp             machine-id                ostree              rsyncd.conf            terminfo
apk                     dracut.conf                 httpd          magic                     PackageKit          rwtab.d                thermald
appstream.conf          dracut.conf.d               idmapd.conf    mailcap                   pam.d               rygel.conf             timidity++.cfg
asound.conf             egl                         ImageMagick-7  makedumpfile.conf.sample  paperspecs          samba                  tmpfiles.d
audit                   environment                 init.d         man_db.conf               passwd              sane.d                 tpm2-tss
authselect              ethertypes                  inittab        mcelog                    passwd-             sasl2                  Trolltech.conf
avahi                   exports                     inputrc        mdevctl.d                 passwdqc.conf       security               trusted-key.key
bash_completion.d       exports.d                   ipp-usb        mercurial                 pinforc             selinux                ts.conf
bashrc                  favicon.png                 iproute2       mime.types                pkcs11              services               udev
bindresvport.blacklist  fedora-release              iscsi          mke2fs.conf               pkgconfig           sestatus.conf          udisks2
binfmt.d                filesystems                 issue          modprobe.d                pki                 sgml                   unbound
bluetooth               firefox                     issue.d        modules-load.d            plymouth            shadow                 updatedb.conf
brlapi.key              firewalld                   issue.net      mono                      pm                  shadow-                UPower
brltty                  flatpak                     java           motd                      polkit-1            shells                 uresourced.conf
brltty.conf             fonts                       jvm            motd.d                    popt.d              skel                   usb_modeswitch.conf
ceph                    fprintd.conf                jvm-common     mtab                      ppp                 sos                    vconsole.conf
chkconfig.d             fstab                       kdump          mtools.conf               printcap            speech-dispatcher      vdpau_wrapper.cfg
chromium                fuse.conf                   kdump.conf     my.cnf                    profile             ssh                    vimrc
chrony.conf             fwupd                       kernel         my.cnf.d                  profile.d           ssl                    virc
cifs-utils              gcrypt                      keys           nanorc                    protocols           sssd                   vmware-tools
containers              gdbinit                     keyutils       ndctl                     pulse               statetab.d             vpl
credstore               gdbinit.d                   krb5.conf      ndctl.conf.d              qemu                subgid                 vpnc
credstore.encrypted     gdm                         krb5.conf.d    netconfig                 qemu-ga             subgid-                vulkan
crypto-policies         geoclue                     ld.so.cache    NetworkManager            rc0.d               subuid                 wgetrc
crypttab                glvnd                       ld.so.conf     networks                  rc1.d               subuid-                whois.conf
csh.cshrc               gnupg                       ld.so.conf.d   nfs.conf                  rc2.d               subversion             wireplumber
csh.login               GREP_COLORS                 libaudit.conf  nfsmount.conf             rc3.d               sudo.conf              wpa_supplicant
cups                    groff                       libblockdev    nftables                  rc4.d               sudoers                X11
cupshelpers             group                       libibverbs.d   nilfs_cleanerd.conf       rc5.d               sudoers.d              xattr.conf
dbus-1                  group-                      libnl          npmrc                     rc6.d               swid                   xdg
dconf                   grub2.cfg                   libreport      nsswitch.conf             rc.d                swtpm-localca.conf     xml
debuginfod              grub2-efi.cfg               libssh         nvme                      reader.conf.d       swtpm-localca.options  yum.repos.d
default                 grub.d                      libuser.conf   odbc.ini                  redhat-release      swtpm_setup.conf       zfs-fuse
depmod.d                gshadow                     libvirt        odbcinst.ini              request-key.conf    sysconfig

As you can see, there is a lot of stuff here.

/root

This is the default home directory for the root user.

/mnt

/mnt is used to temporarily mount a file system.

/boot (Disk-Based)

This directory contains the Linux Kernel, as well as boot support and configuration files.

The size of /boot is determined by the installer program based on the available disk space when you select the default partitioning. It may be set to a different size during or after the installation.

/home

This is used to store user home directories and other user contents.

/opt (Optional)

This directory holds additional software that may need to be installed on the system. A subdirectory is created for each installed software package.

/usr (UNIX System Resources)

Holds most of the system files such as:

/usr/bin

Binary directory for user executable commands

/usr/sbin

System binaries required at boot and system administration commands not intended for execution by normal users. This directory is not included in the default search path for normal users.

/usr/lib and /usr/lib64

Contain shared library routines required by many commands/programs located in /usr/bin and /usr/sbin. These are used by kernel and other applications and programs for their successful installation and operation.

/usr/lib directory also stores system initialization and service management programs. /usr/lib64 contains 64-bit shared library routines.

/usr/include

Contains header files for the C programming language.

/usr/local:

This is a system administrator repository for storing commands and tools that are not generally included with the original Linux distribution.

Directory                              Contains
/usr/local/bin                         executables
/usr/local/etc                         configuration files
/usr/local/lib and /usr/local/lib64    library routines
/usr/share                             manual pages, documentation, sample templates, configuration files
/usr/src:

This directory is used to store source code.

Variable Directory (/var)

For data that frequently changes while the system is operational. Such as log, status, spool, lock, etc.

Common sub directories in /var:

/var/log

Contains most system log files. Such as boot logs, user logs, failed user logs, installation logs, cron logs, mail logs, etc.

/var/opt

Log, status, etc. for software installed in /opt.

/var/spool

Queued files such as print jobs, cron jobs, mail messages, etc.

/var/tmp

For large or longer term temporary files that need to survive system reboots. These are deleted if they are not accessed for a period of 30 days.

/tmp (Temporary)

Temporary files that survive system reboots. These are deleted after 10 days if they are not accessed. Programs may need to create temporary files in order to run.

/dev (Devices)

Contains device nodes for physical and virtual devices. The Linux kernel talks to devices through these nodes. Device nodes are automatically created and deleted by the udevd service, which dynamically manages devices.

The two types of device files are character (or raw) and block.

Character devices

  • Accessed serially.
  • Console, serial printers, mice, keyboards, terminals, etc.

Block devices

  • Accessed in a parallel fashion with data exchanged in blocks.
  • Data on block devices is accessed randomly.
  • Hard disk drives, optical drives, parallel printers, etc.
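You can tell the two types apart from the first character of an ls -l listing: b marks a block device and c a character device. Illustrative output (major/minor numbers and timestamps will vary):

$ ls -l /dev/sda /dev/tty0
brw-rw----. 1 root disk 8, 0 Jan  5 09:00 /dev/sda
crw--w----. 1 root tty  4, 0 Jan  5 09:00 /dev/tty0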

Procfs File System (/proc)

  • Config and status info on:
    • Kernel, CPU, memory, disks, partitioning, file systems, networking, running processes, etc.
  • Zero-length pseudo files point to data maintained by the kernel in the memory.
  • Interface to interact with kernel-maintained information.
  • Contents created in memory at system boot time, updated during runtime, and destroyed at system shutdown.

Runtime File System (/run)

  • Data for processes running on the system.
    • /run/media
  • Used to automatically mount external file systems (CD, DVD, flash USB.)
  • Contents deleted at shutdown.

The System File System (/sys)

  • Info about hardware devices, drivers, and some kernel features.
  • Used by the kernel to load necessary support for devices, create device nodes in /dev, and configure devices.
  • Auto-maintained.

Essential System Commands

tree command

  • List hierarchy of directories and files.
  • Column 2
    • Size.
  • Column 3
    • Full path.

Options:
tree -a ::: Include hidden files in the output.
tree -d ::: Exclude files from the output (list directories only).
tree -h ::: Display file sizes in human-friendly format.
tree -f ::: Print the full path for each file.
tree -p ::: Include file permissions in the output.

Labs

List only the directories (-d) in the root user’s home directory (/root).

tree -d /root

List files in the /etc/sysconfig directory along with their permissions, sizes in human-readable format, and full path.

tree -phf /etc/sysconfig

View tree man pages.

man tree

Prompt Symbols

  • Hash sign (#) for root user.
  • Dollar sign ($) for normal users.

Linux Commands

Two types of commands:

  1. User
    • General purpose.
    • For any user.
  2. System Management
    • Superuser.
    • Require elevated privileges.

Command Mechanics

Basic Syntax

  • command option(s) argument(s)
  • Many commands have preconfigured default options and arguments.

  • An option that starts with a single hyphen character (-la, for instance) ::: short-option format.
  • An option that starts with two hyphen characters (--all, for instance) ::: long-option format.

Listing Files and Directories

ls

  • ll :: shortcut for ls -l

Flags:
ls -l ::: View long listing format.
ls -d ::: View info on the specified directory.
ls -h ::: Human-readable format.
ls -a ::: List all files, including the hidden files.
ls -t ::: Sort output by date and time with the newest file first.
ls -R ::: List contents recursively.
ls -i ::: View inode information.

Labs:

Show the long listing of only /usr without showing its contents.

ls -ld /usr

Display all files in the current directory with their sizes in human-friendly format.

ls -lh

List all files, including the hidden files, in the current directory with detailed information.

ls -la

Sort output by date and time with the newest file first.

ls -lt

List contents of the /etc directory recursively.

ls -R /etc

List directory info and the contents of a directory recursively.

ls -lR /etc

View ls manpage.

man ls

Printing Working Directory (pwd) command

  • Returns the absolute path to a file or directory.

Absolute path (full path or a fully qualified pathname) :: Points to a file or directory in relation to the top of the directory tree. It always starts with the forward slash (/).

Relative path :: Points to a file or directory in relation to your current location.

Labs:

Go one level up into the parent directory using the relative path

cd ..

cd into /etc/sysconfig using the absolute path (/etc/sysconfig), or the relative path (etc/sysconfig)

cd /etc/sysconfig
cd /
cd etc/sysconfig

Change into the /usr/bin directory from /etc/sysconfig using relative or absolute path

cd /usr/bin

or

cd ../usr/bin

Return to your home directory

cd

or

cd ~

Use the absolute path to change into the home directory of the root user from /etc/sysconfig

cd /root

Switch between the current and previous directories

cd -

Use the cd command to return to the home directory of the current user

cd

Terminal Device Files

  • Unique pseudo (or virtual) numbered device files that represent terminal sessions opened by users.
  • Used to communicate with individual sessions.
  • Stored in the /dev/pts/ (pseudo terminal session).
  • Created when a user opens a new terminal session.
  • Removed when a session closes.

tty command

  • Identify current terminal session.
  • Displays filename and location.
  • Example: /dev/pts/0

Inspecting System’s Uptime and Processor Load

uptime command

  • Displays:
    • System’s current time.
    • System up time.
    • Number of users currently logged in.
    • Average % CPU load over the past 1, 5, and 15 minutes.
      • 0.00 and 1.00 represent no load and full load.
      • Greater than 1.00 signifies excess load (over 100%).
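Illustrative output (your values will differ):

$ uptime
 10:41:26 up 2 days,  3:14,  2 users,  load average: 0.08, 0.12, 0.10

The three load averages here (1, 5, and 15 minutes) are all well under 1.00, indicating a lightly loaded system.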

clear command

  • Clears the terminal screen and places the cursor at the top left of the screen.
  • Can also use Ctrl+l for this command.
clear

Determining Command Path

Tools for identifying the absolute path of the command that will be executed when you run it without specifying its full path.

which, whereis, and type

show the full location of the ls command:

which command

  • Show command aliases and location.
[root@server1 bin]# which ls
alias ls='ls --color=auto'
        /usr/bin/ls

whereis command

  • Locates binary, source, and manual files for specified command name.
[root@server1 bin]# whereis ls
ls: /usr/bin/ls /usr/share/man/man1/ls.1.gz /usr/share/man/man1p/ls.1p.gz

type command

  • Find whether the given command is an alias, shell built-in, file, function, or keyword.
type ls

Viewing System Information

uname command

  • Show system operating system name.
[root@server1 bin]# uname
Linux

Flags:
uname -s ::: Show kernel name.
uname -n ::: Show hostname.
uname -r ::: Show kernel release.
uname -v ::: Show kernel build date.
uname -m ::: Show machine hardware name.
uname -p ::: Show processor type.
uname -i ::: Show hardware platform.
uname -o ::: Show OS name.
uname -a ::: Show kernel name, nodename, release, version, machine, and OS.

uname
uname -a
Linux = Kernel name
server1.example.com = Hostname of the system
4.18.0-80.el8.x86_64 = Kernel release
#1 SMP Wed Mar 13 12:02:46 UTC 2019 = Date and time of the kernel built
x86_64 = Machine hardware name
x86_64 = Processor type
x86_64 = Hardware platform
GNU/Linux = Operating system name

Viewing CPU Specs

lscpu command

  • Shows CPU:
    • Architecture.
    • Operating modes.
    • Vendor.
    • Family.
    • Model.
    • Speed.
    • Cache memory.
    • Virtualization support type.
lscpu
architecture of the CPU (x86_64)
supported modes of operation (32-bit and 64-bit)
sequence number of the CPU on this system (1)
threads per core (1)
cores per socket (1)
number of sockets (1)
vendor ID (GenuineIntel)
CPU model (58) model name (Intel …)
speed (2294.784 MHz)
amount and levels of cache memory (L1d, L1i, L2, and L3)

Getting Help

Manual pages

  • Informational pages stored in /usr/share/man for each program.

See Using Man Pages for more.

man command

Flags: -k

  • Perform a keyword search on manual pages.
  • Must build the database with mandb first.

-f

  • Equivalent to whatis.

Commands to find information/help about programs.

  • apropos
  • whatis
  • info
  • pinfo

/usr/share/doc/

  • Directory with additional program documentation.
man passwd

The line at the bottom indicates the line number of the manual page.

Man page navigation

h ::: Help on navigation.
q ::: Quit the man page.
Up arrow key ::: Scroll up one line.
Enter or Down arrow key ::: Scroll down one line.
f / Spacebar / Page down ::: Move forward one page.
b / Page up ::: Move backward one page.
d / u ::: Move down / up half a page.
g / G ::: Move to the beginning / end of the man pages.
:f ::: Display line number and bytes being viewed.
/pattern ::: Search forward for the specified pattern.
?pattern ::: Search backward for the specified pattern.
n / N ::: Find the next / previous occurrence of a pattern.

Headings in the Manual

NAME
  • Name of the command or file with a short description.
SYNOPSIS
  • Syntax summary.
DESCRIPTION
  • Overview of the command or file.
OPTIONS
  • Options available for use.
EXAMPLES
  • Some examples to explain the usage.
FILES
  • A list of related files.
SEE ALSO
  • Reference to other manual pages or topics.
BUGS
  • Any reported bugs or issues.
AUTHOR
  • Contributor information.

Manual Sections

  • Manual information is split into nine sections for organization and clarity.
  • Man searches through each section until it finds a match.
    • Starts at section 1, then section 2, etc.
  • Some commands in Linux also have a configuration file with an identical name.
    • Ex: passwd command in /usr/bin and the passwd file in /etc.
  • Specify the section to find that page only.
    • Ex: man 5 passwd
  • Section number is located at the top (header) of the page.

Section 1
  • Refers to user commands.
Section 4
  • Contains special files.
Section 5
  • Describes file formats for many system configuration files.
Section 8
  • Documents system administration and privileged commands designed for the root user.

Run man man for more details.

Searching by Keyword

apropos command

  • Search all sections of the manual pages and show a list of all entries matching the specified keyword in their names or descriptions.
  • Must run the mandb command first to build an indexed database of the manual pages.
mandb

mandb command

  • Build an indexed database of the manual pages.

Lab: Find a forgotten XFS administration command.

man -k xfs
or
apropos xfs

Lab: Show a brief list of options and a description.

passwd --help
or
passwd -?

whatis command

  • Same output as man -f
  • Display one-line manual page descriptions.

info and pinfo Commands

  • Display command detailed documentation.
  • Divided into sections called nodes.
  • Header:
    • Name of the file being displayed.
    • Names of the current, next, and previous nodes.
  • Almost identical to each other.
info ls

Use the keys below to navigate efficiently.

info page Navigation

Down / Up arrows
  • Move forward / backward one line.
Spacebar / Del
  • Move forward / backward one page.
q
  • Quit the info page.
t
  • Go to the top node of the document.
s
  • Search.

Documentation in /usr/share/doc/

/usr/share/doc/

  • Stores general documentation for installed packages under subdirectories that match their names.
ls -l /usr/share/doc/gzip

Online RHEL Documentation

  • docs.redhat.com
  • Release notes and guides on planning, installation, administration, security, storage management, virtualization, etc.
  • access.redhat.com

Labs

Lab 2: Navigate Linux Directory Tree

Check your location in the directory tree.

pwd

Show file permissions in the current directory including the hidden files.

ls -la

Change directory into /etc and confirm the directory change.

cd /etc
pwd

Switch back to the directory where you were before, and run pwd again to verify.

cd -
pwd

Lab: Miscellaneous Tasks

Identify the terminal device file.

tty

Open a couple of terminal sessions. Compare the terminal numbers.

tty
/dev/pts/1

Execute the uptime command and analyze the system uptime and processor load information.

uptime

Use three commands to identify the location of the vgs command.

which vgs
whereis vgs
type vgs

Lab: Identify System and Kernel Information

  1. Analyze the basic information about the system and kernel reported.
uname -a

Examine the key items relevant to the processor.

lscpu

Lab: Man

View man page for uname.

man uname

View section 5 of the man page for shadow.

man 5 shadow

Process and Task Scheduling

Processes and Priorities

Process

  • a unit for provisioning system resources.
  • any program, application, or command that runs on the system.
  • created in memory when a program, application, or command is initiated.
  • organized in a hierarchical fashion.
  • Each process has a parent process (a.k.a. a calling process) that spawns it.
  • A single parent process may have one or many child processes
    • passes many of its attributes to them at the time of their creation.
  • Each process is assigned an exclusive identification number (Process IDentifier (PID))
    • is used by the kernel to manage and control the process through its lifecycle.
  • When a process completes its lifespan or is terminated, this event is reported back to its parent process, and all the resources provisioned to it (cpu cycles, memory, etc.) are then freed and the PID is removed from the system.
  • background system processes are called daemons
    • which sit in the memory and wait for an event to trigger a request to use their services.
  • /proc
    • Where information for each running process is recorded and maintained.
    • Referenced by ps and other commands

Process States

  • Five basic process states:
    • running
      • being executed by the system CPU.
    • sleeping
      • waiting for input from a user or another process.
    • waiting
      • has received the input it was waiting for and is now ready to run as soon as its turn comes.
    • stopped
      • currently halted and will not run even when its turn comes unless a signal is sent to change its behavior.
    • zombie
      • Dead.
      • Exists in the process table alongside other process entries
      • takes up no resources.
      • entry is retained until its parent process permits it to die
      • also called a defunct process.
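These states appear in the STAT column of ps output. A minimal sketch (illustrative output; R = running, S = sleeping, T = stopped, Z = zombie):

$ ps -eo pid,stat,comm | head -4
    PID STAT COMMAND
      1 Ss   systemd
      2 S    kthreadd
      3 I<   rcu_gp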

ps command

  • Lists processes specific to the terminal where this command is issued.
  • Shows:
    • PID
    • terminal (TTY) the process spawned in
    • cumulative time (TIME) the system CPU has given to the process
    • name of the command or program (CMD) being executed.
    • may be customized to view only desired columns
    • can use ps to list a process by its ownership or owning group.
  • Output with -ef
    • UID
      • UID of process owner
    • PID
      • Process ID
    • PPID
      • Parent Process ID
    • C
      • CPU utilization
    • STIME
      • Start time
    • TTY
      • Controlling terminal
      • ?
        • daemon process
      • console
        • system console
    • TIME
      • Aggregated execution time
    • CMD
      • command or program name
  • Flags
    • -e
      • every
    • -f
      • full format
    • -F
      • Extra full format
    • -l
      • long format
    • -efl
      • Detailed process report
    • --forest
      • tree like hierarchy
    • -x
      • include daemon processes
    • -o
      • user-defined format
      • Make sure there are no white spaces between comma separated values.
    • -C
      • command list
      • list processes that match a specific command name.
    • -U or -u
      • List user supplied as argument.
    • -G or -g
      • List processes owned by a specific group

top command

  • Display processes in real time
  • q or ctrl+c to quit
  • Hotkeys while in top
    • o
      • re-sequence the process list.
    • f
      • add or remove fields
    • F
      • select the field to sort on
    • h
      • help
  • summary portion
    • First 5 lines
      • 1
        • system uptime, number of users logged in, and system load averages over the period of 1, 5, and 15 minutes.
      • 2
        • task (or process) information
        • total number of tasks running
        • How many of the total are running, sleeping, stopped, and zombie
      • 3
        • processor usage
        • CPU time in percentage spent in running user and system processes, in idling and waiting, and so on.
      • 4
        • memory utilization
          • total, free, used, and allocated for buffering and caching
      • 5
        • swap usage
          • total, free, and in use
        • avail Mem
          • estimate of memory available for starting processes without using swap.
  • tasks portion
    • details for each process
    • 12 columns
      • 1 and 2
        • Process identifier (PID) and owner (USER)
      • 3 and 4
        • Process priority (PR) and nice value (NI)
      • 5 and 6
        • Depict amounts of virtual memory (VIRT) and non-swapped resident memory (RES) in use
      • 7
        • Shows the amount of shareable memory available to the process (SHR)
      • 8
        • Represents the process status (S)
      • 9 and 10
        • Express the CPU (%CPU) and memory (%MEM) utilization
      • 11
        • Exhibits the CPU time in hundredths of a second (TIME+)
      • 12
        • Identifies the process name (COMMAND)

Listing a Specific Process

pidof and pgrep command

  • List only the PID of a specific process
  • pass a process name as an argument to view its PID
  • identical if used without any options

Listing Processes by User and Group Ownership

  • can use ps to list a process by its ownership or owning group.

Process Niceness and Priority

  • A process is spawned at a certain priority,
  • priority is established based on the nice value.
  • Higher niceness lowers execution priority of a process
  • Lower niceness increase priority.
  • Child process inherits the nice value of its calling process.
  • Can choose a niceness based on urgency, importance, or system load.
  • Normal users can only increase niceness of their processes.
  • Root can raise or lower niceness of any process.
  • 40 nice values
    • -20
      • highest and most favorable
    • +19
      • lowest and least favorable
    • 0
      • default
  • Showing nice and priority with ps
    • niceness of 0 corresponds to priority of 80
    • -20 corresponds to priority of 60
  • Showing nice and priority with top.
    • niceness of 0 corresponds to priority of 20
    • -20 corresponds to priority of 0

nice command

  • Launch a program at a non-default priority.

renice command

  • Alter the priority of a running program

Controlling Processes with Signals

  • terminating the process gracefully
  • killing it abruptly
  • forcing it to re-read its configuration.
  • Ordinary users can kill processes that they own, while the root user privilege is needed to kill any process on the system.
  • Processes in a waiting state ignore the soft termination signal.

kill command

  • Pass a signal to a process
  • Requires one or more PIDs

Flags

  • -l
    • view a list of signals

Common signals:
  • 1 SIGHUP (hangup)
    • Causes a process to disconnect itself from a closed terminal that it was tied to.
    • Instructs a running daemon to re-read its configuration without a restart.
  • 2 SIGINT
    • The ^c (Ctrl+c) signal issued on the controlling terminal to interrupt the execution of a process.
  • 9 SIGKILL
    • Terminates a process abruptly.
  • 15 SIGTERM (default)
    • Soft termination signal to stop a process in an orderly fashion.
    • Default signal if none is specified with the command.
  • 18 SIGCONT
    • Resumes a stopped process; the signal sent by the bg and fg commands.
  • 19 SIGSTOP
    • Suspends a process; cannot be caught or ignored.
  • 20 SIGTSTP
    • Terminal stop signal, the same as pressing Ctrl+z to suspend a job.

pkill command

  • pass a signal to a process
  • requires one or more process names to send a signal to.
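A minimal sketch combining both commands (PID 1234 is hypothetical; rsyslogd must be running for the pkill example to match):

kill 1234             # send the default SIGTERM to PID 1234
kill -9 1234          # force-kill the same process if it ignores SIGTERM
pkill -HUP rsyslogd   # signal rsyslogd by name to re-read its configuration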

Job Scheduling

  • Run a command at a specified time.
  • One time or periodic.
  • One time command can be used to run a command at a time with low system usage.
  • Periodic examples:
    • creating a compressed archive
    • trimming log files
    • monitoring the system
    • running a custom script
    • removing unwanted files from the system.
  • atd and crond manage jobs

atd

  • Run one time jobs.
  • atd daemon retries a missed job at the same time next day.
  • Does not need a restart with changes

crond

  • Run periodic scheduled jobs.
  • Daemon reads the schedules in files located in the /var/spool/cron and /etc/cron.d directories.
    • scans these files in short intervals
    • updates the in-memory schedules to reflect any modifications.
    • runs a job at its scheduled time only
    • does not entertain any missed jobs.
    • Does not need a restart with changes

Controlling user access

  • all users can schedule jobs
  • access to job scheduling can be edited
    • must add users to an allow or deny file in /etc
      • /etc/at.allow & /etc/cron.allow
        • Does not exist by default.
      • /etc/at.deny & /etc/cron.deny
        • Exists by default
    • list one username per line
    • root user is always permitted
  • Denial message appears if unauthorized user attempts to use at or cron.
    • Only if there is an entry for the calling user in the deny files.
    at.allow / cron.allow               at.deny / cron.deny                 Impact
    Exists, and contains user entries   Existence does not matter           All users listed in allow files are permitted
    Exists, but is empty                Existence does not matter           No users are permitted
    Does not exist                      Exists, and contains user entries   All users, other than those listed in deny files, are permitted
    Does not exist                      Exists, but is empty                All users are permitted
    Does not exist                      Does not exist                      No users are permitted
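For example, to permit cron scheduling for a single account (user1 is a hypothetical user; run as root since /etc/cron.allow is root-owned):

echo "user1" | sudo tee /etc/cron.allow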

Scheduler Log File

/var/log/cron
  • Logs for both atd and crond.
  • Shows:
    • time of activity
    • hostname
    • process name and PID
    • owner
    • message for each invocation
    • service start time and delays
  • Must have root privileges to view.

at command

  • schedule a one-time execution of a program in the future.
  • Submitted jobs are spooled in the /var/spool/at/ and executed by the atd daemon at the specified time.
  • file created containing the settings for establishing the user’s shell environment to ensure a successful execution.
    • also includes the name of the command or program to be run.
  • no need to restart the daemon after a job submission.
  • assumes the current year and today’s date if the year and date are not mentioned.
  • ways to express time:
    • at 1:15am
      • (executes the task at the next 1:15 a.m.)
    • at noon
      • (executes the task at 12:00 p.m.)
    • at 23:45
      • (executes the task at 11:45 p.m.)
    • at midnight
      • (executes the task at 12:00 a.m.)
    • at 17:05 tomorrow
      • (executes the task at 5:05 p.m. on the next day)
    • at now + 5 hours
      • (executes the task 5 hours from now. We can specify minutes, days, or weeks in place of hours)
    • at 3:00 10/15/20
      • (executes the task at 3:00 a.m. on October 15, 2020)
  • Flags
    • -f
      • supply a filename
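A few illustrative submissions (the script path and job number are hypothetical; atq lists queued jobs and atrm removes one):

echo "df -h > /tmp/df.out" | at 23:45
at -f /home/user1/backup.sh now + 5 hours
atq
atrm 5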

Crontab

crontab command

  • other method for scheduling tasks for running in the future.
  • Unlike atd, crond executes cron jobs on a regular basis as defined in the /etc/crontab file.
  • Crontables (another name for crontab files) are located in the /var/spool/cron directory.
  • Each authorized user with a scheduled job has a file matching their login name in this directory.
    • such as /var/spool/cron/user1
  • /etc/crontab & /etc/cron.d/
    • Other locations for system crontables.
    • Only root can create, modify, or delete them.
  • crond daemon
    • scans entries in all 3 directories.
    • adds a log entry to the /var/log/cron file
    • no need to start after modifying cron jobs.
  • flags
    • -e
      • edit crontables
    • -l
      • list crontables
    • -r
      • remove crontables.
      • Do not run crontab -r if you do not wish to remove the crontab file. Instead, edit the file with crontab -e and just erase the entry.
    • -u
      • modify a different user’s crontable
      • provided they are allowed to do so and the other user is listed in the cron.allow file.
      • root user can use the -u flag to alter other users’ crontables even if the affected users are not listed in the allow file.

Syntax of User Crontables

  • /etc/crontab
    • Specifies the syntax that each user cron job must comply with in order for crond to interpret and execute it successfully.
  • Each entry in a user crontable has six fields:
    • Fields 1-5
      • schedule
    • Field 6
      • command or program to be executed
  • In the system crontables (/etc/crontab and files under /etc/cron.d), an extra field naming the executing user sits between the schedule and the command.
  • Example crontable line:
    • 20 1,12 1-15 feb * ls > /tmp/ls.out
  • Field Content Description
  • 1
    • Minute of the hour
    • Valid values are 0 (the exact hour) to 59. This field can have one specific value as in field 1, multiple comma-separated values as in field 2, a range of values as in field 3, a mix of fields 2 and 3 (1-5,6-19), or an * representing every minute of the hour as in field 5.
  • 2
    • Hour of the day
    • Valid values are 0 (midnight) to 23. Same usage applies as described for field 1.
  • 3
    • Day of the month
    • Valid values are 1 to 31. Same usage applies as described for field 1.
  • 4
    • Month of the year
    • Valid values are 1 to 12 or jan to dec. Same usage applies as described for field 1.
  • 5
    • Day of the week
    • Valid values are 0 to 7 or sun to sat, with 0 and 7 representing Sunday, 1 representing Monday, and so on. Same usage applies as described for field 1.
  • 6
    • Command or program to execute
    • Specifies the full path name of the command or program to be executed, along with any options or arguments that it requires.

/etc/crontab contents:

  • Step values may be used with * and ranges in the crontables using the forward slash character (/).
  • Step values allow the number of skips for a given value.
  • Examples:
    • */2 in the minute field
      • every second minute
    • */3 in the minute field
      • every third minute
    • 0-59/4 in the minute field
      • every 4th minute
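Putting the schedule fields and a step value together, a sketch of a crontab entry (the script path is hypothetical) that runs every 5th minute on weekdays only:

*/5 * * * 1-5 /home/user1/scripts/monitor.sh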

Make sure you understand and memorize the order of the fields defined in crontables.

Anacron

  • service that runs after every system reboot
  • checks for any cron and at jobs that were scheduled for execution during the time the system was down and were missed as a result.
  • useful on laptops, desktops, and similar-purpose systems that experience frequent or extended periods of downtime and are not intended for 24/7 operation.
  • Scans the /etc/cron.hourly/0anacron file for three factors to learn whether to run missed jobs.
  • May be run manually at the command line.
    • Run anacron to run all jobs in /etc/anacrontab that were missed.
  • /var/spool/anacron
    • Where anacron stores job execution dates
  • 3 factors must be true for anacron to execute scripts in /etc/cron.daily, /etc/cron.weekly, and /etc/cron.monthly
      1. Presence of the /var/spool/anacron/cron.daily file.
      2. Elapsed time of 24 hours since it was last run.
      3. System is plugged in to an AC source.
  • settings defined in /etc/anacrontab
    • 5 variables defined by default:
      • SHELL and PATH
        • Set the shell and path to be used for executing the programs.
      • MAILTO
        • Defines the login name or an email of the user who is to be sent any output and error messages.
      • RANDOM_DELAY
        • Expresses the maximum arbitrary delay in minutes added to the base delay of the jobs as defined in column 2 of the last three lines.
      • START_HOURS_RANGE
        • States the hour duration within which the missed jobs could be run.
    • Bottom 3 lines define the schedule and the programs to be executed:
      • Column 1:
        • Period in days (or @daily, @weekly, @monthly, or @yearly)
        • How often to run the specified job.
      • Column 2:
        • How many minutes to wait after system boot to execute the job.
      • Column 3:
        • Unique job identifier
      • Columns 4 to 6:
        • Command to be used to execute the scripts located under the /etc/cron.daily, /etc/cron.weekly, and /etc/cron.monthly directories.
        • By default, the run-parts command is invoked for execution at the default niceness.
    • For each job:
      • Examines whether the job was already run during the specified period (column 1).
      • Executes it after waiting for the number of minutes (column 2) plus the RANDOM_DELAY value if it wasn’t.
      • When all missed jobs have been carried out and there is none pending, Anacron exits.
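For reference, the default /etc/anacrontab on a RHEL-type system looks roughly like this (exact values may vary by release):

SHELL=/bin/sh
PATH=/sbin:/bin:/usr/sbin:/usr/bin
MAILTO=root
RANDOM_DELAY=45
START_HOURS_RANGE=3-22

# period delay job-identifier command
1        5     cron.daily     nice run-parts /etc/cron.daily
7        25    cron.weekly    nice run-parts /etc/cron.weekly
@monthly 45    cron.monthly   nice run-parts /etc/cron.monthly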

Process and Task Scheduling Labs

Lab: ps

  1. ps
ps
  1. Check manual pages:
man ps
  1. Run with “every” and “full format” flags:
 ps -ef
  1. Produce an output with the command name in column 1, PID in column 2, PPID in column 3, and owner name in column 4, run it as follows:
 ps -o comm,pid,ppid,user
  1. Check how many sshd processes are currently running on the system:
 ps -C sshd

Lab: top

  1. top
top
  1. View manual page:
man top

Lab: List a specific process

  1. List the PID of the rsyslogd daemon:
pidof rsyslogd
or
pgrep rsyslogd

Lab: Listing Processes by User and Group Ownership

  1. List processes owned by user1:
ps -U user1
  1. List processes owned by group root:
ps -G root

Lab: nice

  1. View the default nice value:
nice
  1. List priority and niceness for all processes:
ps -efl

Lab: Start Processes at Non-Default Priorities (2 terminals)

  1. Run the top command at the default priority/niceness in Terminal 1:
top
  1. Check the priority and niceness for the top command in Terminal 2 using the ps command:
ps -efl | grep top
  1. Terminate the top session in Terminal 1 by pressing the letter q and relaunch it at a lower priority with a nice value of +2:
nice -n 2 top
  1. Check the priority and niceness for the top command in Terminal 2 using the ps command:
ps -efl | grep top
  1. Terminate the top session in Terminal 1 by pressing the letter q and relaunch it at a higher priority with a nice value of -10. Use sudo for root privileges.
sudo nice -n -10 top
  1. Check the priority and niceness for the top command in Terminal 2 using the ps command:
ps -efl | grep top
  1. Terminate the top session by pressing the letter q.

Lab: Alter Process Priorities (2 terminals)

  1. Run the top command at the default priority/niceness in Terminal 1:
top
  1. Check the priority and niceness for the top command in Terminal 2 using the ps command:
ps -efl | grep top
  1. While the top session is running in Terminal 1, increase its priority by renicing it to -5. Use the command substitution to get the PID of top. Prepend the renice command by sudo. The output indicates the old (0) and new (-5) priorities for the process.
sudo renice -n -5 $(pidof top)
  1. Validate the above change with ps. Focus on columns 7 and 8.
ps -efl | grep top
  1. Repeat the above but set the process to run at a lower priority by renicing it to 8: The output indicates the old (-5) and new (8) priorities for the process.
sudo renice -n 8 $(pidof top)
  1. Validate the above change with ps. Focus on columns 7 and 8.
ps -efl | grep top

Lab: Controlling Processes with Signals

  1. Pass the soft termination signal to the crond daemon using either of the following:
sudo pkill crond
# or
sudo kill $(pidof crond)
  1. Confirm:
ps -ef | grep crond
  1. Forcefully kill crond:
sudo pkill -9 crond
# or
sudo pkill -s SIGKILL crond
# or
sudo kill -9 $(pgrep crond)
  1. Kill all crond processes:
sudo killall crond
  1. View manual pages:
man kill
man pkill
man killall
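You can also list all available signal names and numbers with:

kill -l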

Lab: cron and atd

  1. View log files for cron and atd
sudo cat /var/log/cron

Lab: at and crond

  1. Run the /home/user1/.bash_profile file for user1 2 hours from now:
at -f ~/.bash_profile now + 2 hours
  1. Consult crontab manual pages:
man crontab

Lab: Submit, View, List, and Erase an at Job

1. Run the at command and specify the correct execution time and date for the job. Type the entire command at the first at> prompt and press Enter. Press Ctrl+d at the second at> prompt to complete the job submission and return to the shell prompt.

at 1:30pm 3/31/20
date &> /tmp/date.out

The system assigned job ID 5 to it, and the output also pinpoints the job’s execution time.

2. List the job file created in the /var/spool/at directory:

sudo ls -l /var/spool/at/

3. List the spooled job with the at command. You may alternatively use atq to list it.

at -l
# or
atq

4. Display the contents of this file with the at command and specify the job ID:

at -c 5

5. Remove the spooled job with the at command by specifying its job ID. You may alternatively run atrm 5 to delete it.

at -d 5

This should erase the job file from the /var/spool/at directory. Confirm the deletion by running atq or at -l:

atq

Lab: Add, List, and Erase a Cron Job

Assume that all users are currently denied access to cron.

  1. Edit the /etc/cron.allow file and add user1 to it:
sudo vim /etc/cron.allow
user1
  1. Switch to user1. Open the crontable and append the following schedule to it. Save the file when done and exit the editor.
crontab -e
*/5 10-11 5,20 * * echo "Hello, this is a cron test." > /tmp/hello.out
  1. Check for the presence of a new file by the name user1 under the /var/spool/cron directory:
sudo ls -l /var/spool/cron
  1. List the contents of the crontable:
crontab -l
  1. Remove the crontable and confirm the deletion:
crontab -r
crontab -l

Lab: Anacron

  1. View the default content of /etc/anacrontab without commented or empty lines:
cat /etc/anacrontab | grep -ve ^# -ve ^$
  1. View anacron man pages:
man anacron

Lab 8-1: Nice and Renice a Process

  1. As user1 with sudo on server1, open two terminal sessions. Run the top command in terminal 1. Run the pgrep or ps command in terminal 2 to determine the PID and the nice value of top.
ps -efl | grep top
  1. Stop top on terminal 1 and relaunch at a lower priority (+8).
nice -n 8 top
  1. Confirm the new nice value of the process in terminal 2.
ps -efl | grep top
  1. Issue the renice command in terminal 2 and increase the priority of top to -10:
renice -n -10 $(pidof top)
  1. Confirm:
ps -efl | grep top

Lab 8-2: Configure a User Crontab File

As user1 on server1, run the tty and date commands to determine the terminal file (assume /dev/pts/1) and current system time.

tty
date

Create a cron entry to display “Hello World” on the terminal. Schedule echo "Hello World" > /dev/pts/1 to run 3 minutes from the current system time, using the specific minute and hour reported by date (a */3 schedule would instead run it every third minute).

crontab -e
# if date reports 21:30, schedule the job for 21:33:
33 21 * * * echo "Hello World" > /dev/pts/1

As root, ensure user1 can schedule cron jobs.

sudo vim /etc/cron.allow
user1

Tools

Subsections of Tools

Calibre Web with Docker and NGINX

I couldn’t find a step-by-step guide on how to set up Calibre Web as a Docker container, especially not one that used Nginx as a reverse proxy.

The good news is that it is really fast and simple. You’ll need a few tools to get this done:

  • A server with a public IP address
  • A DNS Provider (I use CloudFlare)
  • Docker
  • Nginx
  • A Calibre Library
  • Certbot
  • Rsync

First, sync your local Calibre library to a folder on your server:

rsync -avuP your-library-dir root@example.org:/opt/calibre/

Install Docker

sudo apt update  
sudo apt install docker.io

Create a Docker network

sudo docker network create calibre_network

Create a Docker volume to store Calibre Web data

sudo docker volume create calibre_data

Pull the Calibre Web Docker image

sudo docker pull linuxserver/calibre-web

Start the Calibre Web Docker container

sudo docker run -d \
  --name=calibre-web \
  --restart=unless-stopped \
  -p 8083:8083 \
  -e PUID=$(id -u) \
  -e PGID=$(id -g) \
  -v calibre_data:/config \
  -v /opt/calibre/Calibre:/books \
  --network calibre_network \
  linuxserver/calibre-web
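You can confirm the container is running and check its logs with:

sudo docker ps
sudo docker logs calibre-web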

Configure Nginx to act as a reverse proxy for Calibre Web

Create the site file

sudo vim /etc/nginx/sites-available/calibre-web

Add the following to the file

server {
    listen 80;
    server_name example.com;  # Replace with your domain or server IP

    location / {
        proxy_pass http://localhost:8083;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

Enable the site

sudo ln -s /etc/nginx/sites-available/calibre-web /etc/nginx/sites-enabled/
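Optionally, test the configuration for syntax errors first:

sudo nginx -t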

Restart Nginx

sudo service nginx restart

DNS CNAME Record

Make sure to set up a cname record for your site with your DNS provider such as: calibre.example.com

SSL Certificate

Install ssl cert using certbot

certbot --nginx

Site Setup

Head to the site at https://calibre.example.com and log in with default credentials:

username: admin
password: admin123

Select /books as the library directory. Go into admin settings and change your password.

Adding new books

Whenever you add new books to your server via the rsync command from earlier, you will need to restart the Calibre Web Docker container. Then restart Nginx.

sudo docker restart calibre-web
sudo systemctl restart nginx

That’s all there is to it. Feel free to reach out if you have issues.

How to Build a website With Hugo

WordPress is great, but it is probably a lot more bloated than you need for a personal website. Enter Hugo, which needs less server capacity and storage than WordPress. Hugo is a static site generator that takes Markdown files and converts them to HTML.

Hosting your own website is also a lot cheaper than having a provider like Bluehost do it for you. Instead of $15 per month, I am currently paying $10 per year.

This guide will walk through building a website step-by-step.

  1. Setting up a Virtual Private Server (VPS)
  2. Registering a domain name
  3. Pointing the domain to your server
  4. Setting up hugo on your local PC
  5. Syncing your Hugo-generated site with your server
  6. Using nginx to serve your site
  7. Enabling HTTPS (HTTP over SSL)

Setting up a Virtual Private Server (VPS)

I use Vultr as my VPS provider. When I signed up, they offered a $250 credit for new accounts. If you select the cheapest server (you shouldn’t need anything else for a basic site), that comes out to about $6 a month. Of course, the $250 credit goes toward that, which equates to around 41 months free.

Head to vultr.com. Create an account and select the Cloud Compute option.

Under CPU & Storage Technology, select “Regular Performance”. Then under “Server Location”, select the server closest to you, or closest to where you think your main audience will be.

Under Server image, select the OS you are most comfortable with. This guide uses Debian.

Under Server Size, select the 10GB SSD. Do not select the “IPv6 ONLY” option. Leave the other options as default and enter your server hostname.

On the products page, click your new server. You can find your server credentials and IPv4 address here. You will need these to log in to your server.

Log into your server via ssh to test. From a Linux terminal run:

ssh username@serveripaddress

Then, enter your password when prompted.

Registering a Domain Name

I got my domain perfectdarkmode.com from Cloudflare.com for about $10 per year. You can check available domains there. You can also check https://www.namecheckr.com/ to see if that name is available on various social media sites.

In CloudFlare, just click “add a site” and pick a domain that works for you. Next, you will need your server address from earlier.

Under Domain Registration, click “Manage Domains”, then click “manage” on your domain. On the sidebar to the right, there is a quick actions menu. Click “update DNS configuration”.

Click “Add record”. The type is an “A” record. Enter the name and the IP address that you used earlier for your server. Uncheck “Proxy Status” and save.

You can check whether your DNS has updated on various DNS servers at https://dnschecker.org/. Once those are up to date (after a couple of minutes) you should be able to ping your new domain.

$ ping perfectdarkmode.com
PING perfectdarkmode.com (104.238.140.131) 56(84) bytes of data.
64 bytes from 104.238.140.131.vultrusercontent.com (104.238.140.131): icmp_seq=1 ttl=53 time=33.2 ms
64 bytes from 104.238.140.131.vultrusercontent.com (104.238.140.131): icmp_seq=2 ttl=53 time=28.2 ms
64 bytes from 104.238.140.131.vultrusercontent.com (104.238.140.131): icmp_seq=3 ttl=53 time=31.0 ms

Now, you can use the same ssh command to ssh into your Vultr server using your domain name.

ssh username@domain.com

Setting up hugo on your local PC

Hugo is a popular open-source static site generator. It takes Markdown files and builds them into an HTML website. To start, go to https://gohugo.io/installation/ and download Hugo on your local computer. (I will show you how to upload the site to your server later.)

Pick a theme. The theme I use is here: https://themes.gohugo.io/themes/hugo-theme-hello-friend-ng/

You can browse other themes as well; just make sure to follow the installation instructions. Let’s create a new Hugo site. Change into the directory where you want your site to be located. Mine rests in ~/Documents/.

cd ~/Documents/

Create your new Hugo site.

hugo new site site-name

This will make a new folder with your site name in the ~/Documents directory. This folder will have a few directories and a config file in it.

archetypes  config.toml  content  data  layouts  public  resources  static  themes

For this tutorial, we will be working with the config.toml file and the content, public, static, and themes directories. Next, load the theme into your site directory. For the Hello Friend NG theme:

git clone https://github.com/rhazdon/hugo-theme-hello-friend-ng.git themes/hello-friend-ng

Now we will load the example site into our working site. Say yes to overwrite.

cp -a themes/hello-friend-ng/exampleSite/* .

The top of your new config.toml file now contains:

baseURL = "https://example.com"
title   = "Hello Friend NG"
languageCode = "en-us"
theme = "hello-friend-ng"

Replace the baseURL with your site’s URL and give your site a title. Set the enableGlobalLanguageMenu option to false if you want to remove the language switcher option at the top. I also set enableThemeToggle to true so users can switch the theme between dark and light.

You can also fill in the links to your social handles. Comment out any lines you don’t want with a “#” like so:

[[params.social]]
    name = "twitter"
    url  = "https://twitter.com/"

  [[params.social]]
    name = "email"
    url  = "mailto:nobody@example.com"

  [[params.social]]
    name = "github"
    url  = "https://github.com/"

  [[params.social]]
    name = "linkedin"
    url  = "https://www.linkedin.com/"

 # [[params.social]]
   # name = "stackoverflow"
   # url  = "https://www.stackoverflow.com/"

You may also want to edit the footer text to your liking. I commented out the second line that comes with the example site:

[params.footer]
    trademark = true
    rss = true
    copyright = true
    author = true

    topText = []
    bottomText = [
     # "Powered by <a href=\"http://gohugo.io\">Hugo</a>",
     #  "Made with &#10084; by <a href=\"https://github.com/rhazdon\">Djordje Atlialp</a>"
    ]

Now, move the contents of the example content folder over to your site’s content folder (giggidy):

cp -r ~/Documents/hugo/themes/hello-friend-ng/exampleSite/content/* ~/Documents/hugo/content/

Let’s clean up a little bit. cd into ~/Documents/hugo/content/posts. Rename the file to the name of your first post. Also, delete all of the other files here:

cd ~/Documents/hugo/content/posts
mv goisforlovers.md newpostnamehere.md
find . ! -name 'newpostnamehere.md' -type f -exec rm -f {} +

Open the new post file and delete everything after this:

+++
title = "Building a Minimalist Website with Hugo"
description = ""
type = ["posts","post"]
tags = [
    "hugo",
    "nginx",
    "ssl",
    "http",
    "vultr",
]
date = "2023-03-26"
categories = [
    "tools",
    "linux",
]
series = ["tools"]
[ author ]
  name = "David Thomas"
+++

You will need to fill out this header information for each new post you make. It lets you give each post a title, tags, date, categories, etc. This header is the post’s front matter, written in TOML. TOML stands for Tom’s Obvious, Minimal Language, a minimal language for configuration data. Hugo uses it to fill out your site.

Save your doc and exit. Next, there should be an about.md page in your ~/Documents/hugo/content folder. Edit this to change the about page for your site. You can use this Markdown guide if you need help learning Markdown: https://www.markdownguide.org/

Serve your website locally

Let’s test the website by serving it locally and accessing it at localhost:1313 in your web browser. Enter the command:

hugo serve

Hugo will now generate your website. You can view it by entering localhost:1313 in your web browser.

You can use this to test new changes before uploading them to your server. When you save a post or page file, such as your about page, Hugo will automatically apply the changes to the local page while the local server is running.

Press “Ctrl + c” to stop this local server. This is only for testing and does not need to be running to make your site work.

Build out your public directory

Okay, your website is working locally; how do we get it to your server to host it online? We are almost there. First, we will use the hugo command to build your website into the public folder. Then, we will make a copy of our public folder on the server using rsync. I will also show you how to create an alias so you do not have to remember the rsync command every time.

From your hugo site folder run:

hugo

Next, we will put your public hugo folder into /var/www/ on your server. Here is how to do that with an alias. Open ~/.bashrc.

vim ~/.bashrc

Add the following line to the end of the file, making sure to replace the username and server name:

# My custom aliases
alias rsyncp='rsync -rtvzP ~/Documents/hugo/public/ username@myserver.com:/var/www/public'

Save and exit the file. Then tell bash to reload its config file.

source ~/.bashrc

Now you can run the command any time by just using the new alias. You will need to do this every time you update your site locally.

rsyncp

Set up nginx on your server

Install nginx

apt update
apt upgrade
apt install nginx

create an nginx config file in /etc/nginx/sites-available/

vim /etc/nginx/sites-available/public

You will need to add the following to the file, update the options, then save and exit:

server {
        listen 80;
        listen [::]:80;
        server_name example.org;
        root /var/www/public;
        index index.html index.htm index.nginx-debian.html;
        location / {
                try_files $uri $uri/ =404;
        }
}

Enter your domain on the “server_name” line in place of “example.org”. Also, point “root” to your new site directory from earlier (/var/www/public). Then save and exit.

Link this sites-available config file to sites-enabled to enable it. Then reload nginx:

ln -s /etc/nginx/sites-available/public /etc/nginx/sites-enabled
systemctl reload nginx

Access Permissions

We will need to make sure nginx has permissions on your site folder so that it can access the files to serve your site. Run:

chmod 777 /var/www/public

Firewall Permissions

You will need to make sure your firewall allows ports 80 and 443. Vultr installs the ufw program by default, but you can install it if you used a different provider. Beware: enabling a firewall could block you from accessing your VM, so do your research before tinkering outside of these instructions.

ufw allow 80
ufw allow 443
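Confirm the rules with:

ufw status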

Nginx Security

We will want to hide your nginx version number on error pages. This makes it a bit harder for attackers to find exploits for your site. Open your Nginx config file at /etc/nginx/nginx.conf and remove the “#” before “server_tokens off;”.

Enter your domain into your browser. Congrats! You now have a running website!

Use Certbot to enable HTTPS

Right now, our site uses unencrypted HTTP. We want it to use the encrypted version, HTTPS (HTTP over SSL). This will increase user privacy, hide usernames and passwords used on your site, and get you the lock symbol by your URL instead of “not secure”.

Install Certbot and Its Nginx Module

apt install python3-certbot-nginx

Run certbot

certbot --nginx

Fill out the information; certbot asks for your email so it can send you a reminder when the certs need to be renewed every 3 months. You do not need to consent to giving your email to the EFF. Press 1 to select your domain, and 2 to redirect all connections to HTTPS.

Certbot will build out some information in your site’s config file. Refresh your site. You should see your new fancy lock icon.

Set Up a Cronjob to Automatically Renew Certbot Certs

crontab -e

Select a text editor and add this line to the end of the file. Then save and exit the file:

0 0 1 * * certbot --nginx renew
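You can also verify that renewal will work without touching your live certificates:

certbot renew --dry-run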

You now have a running website. Just make new posts locally, then run “hugo” to rebuild the site, and use the rsync alias to update the folder on your server. I will soon be making tutorials on making an email address for your domain, such as david@perfectdarkmode.com, on my site. I will also be adding a comments section, RSS feed, email subscription, sidebar, and more.

Feel free to reach out with any questions if you get stuck. This is meant to be an all-encompassing guide, so I want it to work.

Extras

Optimizing images

Create assets folder in main directory.

Create images folder in /assets

Access image using hugo pipes

{{ $image := resources.Get "images/test-image.jpg" }}
<img src="{{ ( $image.Resize "500x" ).RelPermalink }}" />

https://gohugo.io/content-management/image-processing/

How to Process Bookfusion Highlights with Vim

Here are my highlights pulled up in Vim:

As you can see, Bookfusion gives you a lot of extra information when you export highlights. First, let’s get rid of the lines that begin with ##

Enter command mode in Vim by pressing esc. Then type :g/^##/d and press enter.

Much better.

Now let’s get rid of the color references:

:g/^Color/d

To get rid of the timestamps, we must find a different commonality between the lines. In this case, each line ends with “UTC”. Let’s match that:

:g/UTC$/d

Where $ matches the end of the line.

Now, I want to get rid of the > on each line: :%s/> //g

Almost there. You’ll notice there are 6 empty lines in between each highlight. Let’s shrink those down into a single blank line:

:%s/\(\n\)\{3,}/\r\r/g

The command above matches 3 or more consecutive newline characters (\n) and replaces them with two newline characters (\r\r), leaving one blank line.

As we scroll down, I see a few weird artifacts from the book conversion to markdown.

Now, I want to get rid of any angle brackets in the file. Let’s use the substitute command again here:

:%s/<//g

Depending on your book and formatting, you may have some other stuff to edit.
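Once you know the sequence, the whole cleanup can be scripted in one shot with Vim in silent Ex mode (a sketch; highlights.md is a placeholder for your export file, and silent! skips any command whose pattern finds no match):

vim -es \
  -c 'silent! g/^##/d' \
  -c 'silent! g/^Color/d' \
  -c 'silent! g/UTC$/d' \
  -c 'silent! %s/> //g' \
  -c 'silent! %s/\(\n\)\{3,}/\r\r/g' \
  -c 'silent! %s/<//g' \
  -c 'wq' \
  highlights.md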

How to Set Up Hugo Relearn Theme

Hugo Setup

Adding a module as a theme

Make sure Go is installed

go version

Create a new site

hugo new site sitename
cd sitename

Initialize your site as a module

hugo mod init sitename

Confirm

cat go.mod

Add the module as a dependency using its git link

hugo mod get github.com/McShelby/hugo-theme-relearn

Confirm

cat go.mod

add the theme to config.toml

# add this line to config.toml and save
theme = ["github.com/McShelby/hugo-theme-relearn"]

Confirm by viewing site

hugo serve
# visit browser at http://localhost:1313/ to view site

Adding a new “chapter” page

hugo new --kind chapter Chapter/_index.md

Add a home page

hugo new --kind home _index.md

Add a default page

hugo new <chapter>/<name>/_index.md

or

hugo new <chapter>/<name>.md

You will need to change some options in _index.md

+++
# is this a "chapter"?
chapter=true
archetype = "chapter"
# page title name
title = "Linux"
# The "chapter" number
weight = 1
+++

Adding a “content page” under a category

hugo new basics/first-content.md

Create a sub directory:

hugo new basics/second-content/_index.md
  • change draft = true to draft = false in the content page to make a page render.

Global site parameters

Add these to your config.toml file and edit as you please

[params]
  # This controls whether submenus will be expanded (true), or collapsed (false) in the
  # menu; if no setting is given, the first menu level is set to false, all others to true;
  # this can be overridden in the pages frontmatter
  alwaysopen = true
  # Prefix URL to edit current page. Will display an "Edit" button on top right hand corner of every page.
  # Useful to give opportunity to people to create merge request for your doc.
  # See the config.toml file from this documentation site to have an example.
  editURL = ""
  # Author of the site, will be used in meta information
  author = ""
  # Description of the site, will be used in meta information
  description = ""
  # Shows a checkmark for visited pages on the menu
  showVisitedLinks = false
  # Disable search function. It will hide search bar
  disableSearch = false
  # Disable search in hidden pages, otherwise they will be shown in search box
  disableSearchHiddenPages = false
  # Disables hidden pages from showing up in the sitemap and on Google (et all), otherwise they may be indexed by search engines
  disableSeoHiddenPages = false
  # Disables hidden pages from showing up on the tags page although the tag term will be displayed even if all pages are hidden
  disableTagHiddenPages = false
  # Javascript and CSS cache are automatically busted when new version of site is generated.
  # Set this to true to disable this behavior (some proxies don't handle well this optimization)
  disableAssetsBusting = false
  # Set this to true if you want to disable generation for generator version meta tags of hugo and the theme;
  # don't forget to also set Hugo's disableHugoGeneratorInject=true, otherwise it will generate a meta tag into your home page
  disableGeneratorVersion = false
  # Set this to true to disable copy-to-clipboard button for inline code.
  disableInlineCopyToClipBoard = false
  # A title for shortcuts in menu is set by default. Set this to true to disable it.
  disableShortcutsTitle = false
  # If set to false, a Home button will appear below the search bar on the menu.
  # It is redirecting to the landing page of the current language if specified. (Default is "/")
  disableLandingPageButton = true
  # When using a multilingual website, disable the switch language button.
  disableLanguageSwitchingButton = false
  # Hide breadcrumbs in the header and only show the current page title
  disableBreadcrumb = true
  # If set to true, hide table of contents menu in the header of all pages
  disableToc = false
  # If set to false, load the MathJax module on every page regardless if a MathJax shortcode is present
  disableMathJax = false
  # Specifies the remote location of the MathJax js
  customMathJaxURL = "https://cdn.jsdelivr.net/npm/mathjax@3/es5/tex-mml-chtml.js"
  # Initialization parameter for MathJax, see MathJax documentation
  mathJaxInitialize = "{}"
  # If set to false, load the Mermaid module on every page regardless if a Mermaid shortcode or Mermaid codefence is present
  disableMermaid = false
  # Specifies the remote location of the Mermaid js
  customMermaidURL = "https://unpkg.com/mermaid/dist/mermaid.min.js"
  # Initialization parameter for Mermaid, see Mermaid documentation
  mermaidInitialize = "{ \"theme\": \"default\" }"
  # If set to false, load the Swagger module on every page regardless if a Swagger shortcode is present
  disableSwagger = false
  # Specifies the remote location of the RapiDoc js
  customSwaggerURL = "https://unpkg.com/rapidoc/dist/rapidoc-min.js"
  # Initialization parameter for Swagger, see RapiDoc documentation
  swaggerInitialize = "{ \"theme\": \"light\" }"
  # Hide Next and Previous page buttons normally displayed full height beside content
  disableNextPrev = true
  # Order sections in menu by "weight" or "title". Default to "weight";
  # this can be overridden in the pages frontmatter
  ordersectionsby = "weight"
  # Change default color scheme with a variant one. Eg. can be "auto", "red", "blue", "green" or an array like [ "blue", "green" ].
  themeVariant = "auto"
  # Change the title separator. Default to "::".
  titleSeparator = "-"
  # If set to true, the menu in the sidebar will be displayed in a collapsible tree view. Although the functionality works with old browsers (IE11), the display of the expander icons is limited to modern browsers
  collapsibleMenu = false
  # If a single page can contain content in multiple languages, add those here
  additionalContentLanguage = [ "en" ]
  # If set to true, no index.html will be appended to prettyURLs; this will cause pages not
  # to be servable from the file system
  disableExplicitIndexURLs = false
  # For external links you can define how they are opened in your browser; this setting will only be applied to the content area but not the shortcut menu
  externalLinkTarget = "_blank"

Syntax highlighting

Supports a variety of code syntaxes. To select the syntax, wrap the code in a fenced code block and place the language name after the first set of backticks.

```bash
echo hello
\```

Adding tags

Tags are displayed in order at the top of the page. They will also display using the menu shortcut made further down.

Add tags to a page:

+++
tags = ["tutorial", "theme"]
title = "Theme tutorial"
weight = 15
+++

Choose a default color theme

Add to config.toml with the chosen theme for the “style” option:

[markup]
  [markup.highlight]
    # if `guessSyntax = true`, there will be no unstyled code even if no language
    # was given BUT Mermaid and Math codefences will not work anymore! So this is a
    # mandatory setting for your site if you want to use Mermaid or Math codefences
    guessSyntax = false

    # choose a color theme or create your own
    style = "base16-snazzy"

Add Print option and search output page.

add the following to config.toml

[outputs]
  home = ["HTML", "RSS", "PRINT", "SEARCH"]
  section = ["HTML", "RSS", "PRINT"]
  page = ["HTML", "RSS", "PRINT"]

Customization

This theme has a bunch of editable customizations called partials. You can overwrite the default partials by putting new ones in /layouts/partials/.

To customize “partials”, create a “partials” directory under site/layouts/

cd layouts
mkdir partials
cd partials

You can find all of the partials available for this theme here

Change the site logo using the logo.html partial

Create logo.html in /layouts/partials

vim logo.html

Add the content you want in html. This can be an img html tag referencing an image in the static folder. Or even basic text. Here is the basic syntax of an html page, adding “Perfect Dark Mode” as the text to display:

<!DOCTYPE html>
<html>
<body>

<h3>Perfect Dark Mode</h3>

</body>
</html>

Add a favicon to your site

  • This is pasted from the Relearn site: If your favicon is an SVG, PNG, or ICO, just drop your image into your local static/images/ folder and name it favicon.svg, favicon.png, or favicon.ico respectively.

If no favicon file is found, the theme will lookup the alternative filename logo in the same location and will repeat the search for the list of supported file types.

If you need to change this default behavior, create a new file in layouts/partials/ named favicon.html. Then write something like this:

<link rel="icon" href="/images/favicon.bmp" type="image/bmp">

Changing theme colors

In your config.toml file edit the themeVariant option under [params]

  themeVariant = "relearn-dark"

There are some options to choose from or you can custom make your theme colors by using this stylesheet generator

Menu Shortcuts

Add a [[menu.shortcuts]] entry for each link:

[[menu.shortcuts]]
name = "<i class='fab fa-fw fa-github'></i> GitHub repo"
identifier = "ds"
url = "https://github.com/McShelby/hugo-theme-relearn"
weight = 10

[[menu.shortcuts]]
name = "<i class='fas fa-fw fa-camera'></i> Showcases"
url = "more/showcase/"
weight = 11

[[menu.shortcuts]]
name = "<i class='fas fa-fw fa-bookmark'></i> Hugo Documentation"
identifier = "hugodoc"
url = "https://gohugo.io/"
weight = 20

[[menu.shortcuts]]
name = "<i class='fas fa-fw fa-bullhorn'></i> Credits"
url = "more/credits/"
weight = 30

[[menu.shortcuts]]
name = "<i class='fas fa-fw fa-tags'></i> Tags"
url = "tags/"
weight = 40

Extras

Menu button arrows. (Add to page frontmatter)

menuPre = "<i class='fa-fw fas fa-caret-right'></i> "

Nextcloud on RHEL Based Systems

I’m going to show you how to set up your own self-hosted Nextcloud server using Alma Linux 9 and Apache.

What is Nextcloud?

Nextcloud is so many things. It offers so many features and options, it deserves a bulleted list:

  • Free and open source
  • Cloud storage and syncing
  • Email client
  • Custom browser dashboard with widgets
  • Office suite
  • RSS newsfeed
  • Project organization (deck)
  • Notebook
  • Calendar
  • Task manager
  • Connect to decentralized social media (like Mastodon)
  • Replacement for all of google’s services
  • Create web forms or surveys

It is also free and open source. This means the source code is available to all, and hosting it yourself means you can guarantee that your data isn’t being shared.

As you can see, Nextcloud is feature-packed and offers an all-in-one solution for many needs. The setup is fairly simple.

You will need:

  • Domain hosted through CloudFlare or other hosting.
  • Server with Alma Linux 9 with a dedicated public ip address.

Nextcloud dependencies:

  • PHP 8.3
  • Apache
  • SQL database (this tutorial uses MariaDB)

Official docs: https://docs.nextcloud.com/server/latest/admin_manual/installation/source_installation.html

Server Specs

Hard drives: 120 GB main, 500 GB data, 250 GB backup

OS: Alma Linux 9
CPU: 4 sockets, 8 cores
RAM: 32768 MB

IP: 10.0.10.56/24
root: { password }
davidt: { password }

Storage setup

mkdir /var/www/nextcloud/ -p
mkdir /home/databkup
parted /dev/sdb mklabel gpt
parted /dev/sdb mkpart primary 0% 100%
parted /dev/sdc mklabel gpt
parted /dev/sdc mkpart primary 0% 100%
mkfs.xfs /dev/sdb1
mkfs.xfs /dev/sdc1
lsblk
blkid /dev/sdb1 >> /etc/fstab
blkid /dev/sdc1 >> /etc/fstab
vim /etc/fstab
mount -a
[root@dt-lab2 ~]# lsblk
NAME               MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
sda                  8:0    0  120G  0 disk 
├─sda1               8:1    0    1G  0 part /boot
└─sda2               8:2    0  119G  0 part 
  ├─almalinux-root 253:0    0   70G  0 lvm  /
  ├─almalinux-swap 253:1    0   12G  0 lvm  [SWAP]
  └─almalinux-home 253:2    0   37G  0 lvm  /home
sdb                  8:16   0  500G  0 disk 
└─sdb1               8:17   0  500G  0 part /var/www/nextcloud
sdc                  8:32   0  250G  0 disk 
└─sdc1               8:33   0  250G  0 part /home/databkup

Setting up dependencies

Install latest supported PHP

I used this guide to help get a supported PHP version, since the default dnf repos install an older PHP: https://orcacore.com/php83-installation-almalinux9-rockylinux9/

Make sure dnf is up to date:

sudo dnf update -y
sudo dnf upgrade -y

Set up the epel repository:

sudo dnf install epel-release -y

Set up remi to manage php modules:

sudo dnf install -y dnf-utils http://rpms.remirepo.net/enterprise/remi-release-9.rpm
sudo dnf update -y

Remove old versions of php:

sudo dnf remove php* -y

List available php streams:

sudo dnf module list php
sudo dnf module reset php -y

Last metadata expiration check: 1:03:46 ago on Sun 29 Dec 2024 03:34:52 AM MST.
AlmaLinux 9 - AppStream
Name                Stream                      Profiles                                  Summary                             
php                 8.1                         common [d], devel, minimal                PHP scripting language              
php                 8.2                         common [d], devel, minimal                PHP scripting language              

Remi's Modular repository for Enterprise Linux 9 - x86_64
Name                Stream                      Profiles                                  Summary                             
php                 remi-7.4                    common [d], devel, minimal                PHP scripting language              
php                 remi-8.0                    common [d], devel, minimal                PHP scripting language              
php                 remi-8.1                    common [d], devel, minimal                PHP scripting language              
php                 remi-8.2                    common [d], devel, minimal                PHP scripting language              
php                 remi-8.3 [e]                common [d], devel, minimal                PHP scripting language              
php                 remi-8.4                    common [d], devel, minimal                PHP scripting language       

Enable the correct stream:

sudo dnf module enable php:remi-8.3

Now the default to install is version 8.3. Install it like this:

sudo dnf install php -y
php -v

Let’s install git, as it’s also needed in this setup: sudo dnf -y install git

Install Composer for managing php modules:

cd && curl -sS https://getcomposer.org/installer | php
sudo mv composer.phar /usr/local/bin/composer

Install needed PHP modules: sudo dnf -y install php-process php-zip php-gd php-mysqlnd php-ldap php-imagick php-bcmath php-gmp php-intl

Upgrade php memory limit: sudo vim /etc/php.ini

memory_limit = 512M

Apache setup

Add Apache config for vhost: sudo vim /etc/httpd/conf.d/nextcloud.conf

<VirtualHost *:80>
  DocumentRoot /var/www/nextcloud/
  ServerName  cloud.{ site-name }.com

  <Directory /var/www/nextcloud/>
    Require all granted
    AllowOverride All
    Options FollowSymLinks MultiViews

    <IfModule mod_dav.c>
      Dav off
    </IfModule>
  </Directory>
</VirtualHost>

Set up the mysql database

Install: sudo dnf install mariadb-server -y

Enable the service: sudo systemctl enable --now mariadb

Nextcloud needs some tables set up in order to store information in a database. First set up a secure SQL database:

sudo mysql_secure_installation

Say “Yes” to the prompts and enter root password:

Switch to unix_socket authentication [Y/n]: Y
Change the root password? [Y/n]: Y	# enter password.
Remove anonymous users? [Y/n]: Y
Disallow root login remotely? [Y/n]: Y
Remove test database and access to it? [Y/n]: Y
Reload privilege tables now? [Y/n]: Y

Sign in to your SQL database with the password you just chose:

mysql -u root -p

Create the database:

While signed in with the mysql command, enter the commands below one at a time. Make sure to replace the username and password. But leave localhost as is:

CREATE DATABASE nextcloud;
GRANT ALL ON nextcloud.* TO 'root'@'localhost' IDENTIFIED BY '{ password }';
FLUSH PRIVILEGES;
EXIT;

Nextcloud Install

Download Nextcloud onto the server, then extract the contents to /var/www/nextcloud: tar -xjf nextcloud-31.0.4.tar.bz2 -C /var/www/nextcloud --strip-components=1

Change the nextcloud folder ownership to apache and add permissions:

sudo chmod -R 755 /var/www/nextcloud
sudo chown -R apache:apache /var/www/nextcloud

Selinux:

sudo semanage fcontext -a -t httpd_sys_content_t "/var/www/nextcloud(/.*)?" && \
sudo semanage fcontext -a -t httpd_sys_rw_content_t "/var/www/nextcloud/(config|data|apps)(/.*)?" && \
sudo semanage fcontext -a -t httpd_sys_rw_content_t "/var/www/nextcloud/data(/.*)?"
sudo restorecon -Rv /var/www/nextcloud/

Now we can actually install Nextcloud. cd to the /var/www/nextcloud directory and run occ with these settings to install:

sudo -u apache php occ  maintenance:install \
--database='mysql' --database-name='nextcloud' \
--database-user='root' --database-pass='{ password }' \
--admin-user='admin' --admin-pass='{ password }'
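You can verify the install from the same directory with occ (it should report the instance as installed):

sudo -u apache php occ status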

Create a CNAME record for DNS.

Before you go any further, you will need to have a domain name set up for your server. I use Cloudflare to manage my DNS records. You will want to make a CNAME record for your nextcloud subdomain.

Just add “nextcloud” as the name and “yourwebsite.com” as the content. This will make it so “nextcloud.yourwebsite.com” is the site for your nextcloud dashboard. Also, make sure to select “DNS Only” under proxy status.

Here’s what my CloudFlare domain setup looks like, with this blog as the main site and cloud.perfectdarkmode.com as the Nextcloud site:

Then you need to update trusted domains in /var/www/nextcloud/config/config.php:

'trusted_domains' =>
   [
    'cloud.{ site-name }.com',
    'localhost'
  ],

Install Apache

Install: sudo dnf -y install httpd

Enable: systemctl enable --now httpd

Restart httpd: systemctl restart httpd

Firewall rules:

sudo firewall-cmd --add-service https --permanent
sudo firewall-cmd --add-service http --permanent
sudo firewall-cmd --reload
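Verify the services are allowed:

sudo firewall-cmd --list-services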

Install SSL with Certbot

Install certbot: sudo dnf install certbot python3-certbot-apache -y

Obtain an SSL certificate. (See my Obsidian site setup post for information about Certbot and Apache setup.)

sudo certbot -d {subdomain}.{domain}.com

Now log into nextcloud with your admin account using the DNS name you set earlier:

I recommend setting up a normal user account instead of doing everything as “admin”. Just hit the “A” icon at the top right and go to “Accounts”. Then just select “New Account” and create a user account with whatever privileges you want.

I may make a post about which Nextcloud apps I recommend and customize the setup a bit. Let me know if that’s something you’d like to see. That’s all for now.

Make log dir:

mkdir /var/log/nextcloud
touch /var/log/nextcloud/nextcloud.log
chown apache:apache -R /var/log/nextcloud

Change apps to read only

semanage fcontext -a -t httpd_sys_content_t "/var/www/nextcloud/apps(/.*)?"
restorecon -R /var/www/nextcloud/apps

Allow outbound network:

sudo setsebool -P httpd_can_network_connect 1
sudo setsebool -P httpd_graceful_shutdown 1
sudo setsebool -P httpd_can_network_relay 1
sudo ausearch -c 'php-fpm' --raw | audit2allow -M my-phpfpm
sudo semodule -X 300 -i my-phpfpm.pp

Backup

mkdir /home/databkup
chown -R apache:apache /home/databkup

vim /root/cleanbackups.sh

#!/bin/bash

find /home/backup -type f -mtime +5 -exec rm {} \;

chmod +x /root/cleanbackups.sh

crontab -e

# Clean up old backups every day at midnight
0 0 * * * /root/cleanbackups.sh > /dev/null 2>&1

# Backup MySQL database every 12 hours
0 */12 * * * bash -c '/usr/bin/mysqldump --single-transaction -u root -p{password} nextcloud > /home/backup/nextclouddb-backup_$(date +"\%Y\%m\%d\%H\%M\%S").bak'

# Rsync Nextcloud data directory every day at midnight
15 0 * * * /usr/bin/rsync -Aavx /var/www/nextcloud/ /home/databkup/ --delete-before

mkdir /home/backup
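To restore from one of these dumps later (a sketch; substitute the actual dump filename):

mysql -u root -p nextcloud < /home/backup/nextclouddb-backup_YYYYMMDDHHMMSS.bak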

Update Mariadb:

systemctl stop mariadb.service
dnf module switch-to mariadb:10.11
systemctl start mariadb.service
mariadb-upgrade --user=root --password='{ password }'

mariadb --version

Mimetype migration error: sudo -u apache /var/www/nextcloud/occ maintenance:repair --include-expensive

Indices error: sudo -u apache /var/www/nextcloud/occ db:add-missing-indices

Redis install for memcache

This setup uses Redis for File locking and APCu for memcache

dnf -y install redis php-pecl-redis php-pecl-apcu

systemctl enable --now redis

Add to config.php:

'memcache.locking' => '\OC\Memcache\Redis',
  'memcache.local' => '\OC\Memcache\APCu',
  'redis' => [
   'host'     => '/run/redis/redis-server.sock',
   'port'     => 0,
],

Update /etc/redis/redis.conf: vim /etc/redis/redis.conf. Change the port directive to port 0.

uncomment the socket options under “Unix Socket” and change to:

unixsocket /run/redis/redis-server.sock
unixsocketperm 770

Update permissions for Redis by adding apache to the redis group: usermod -a -G redis apache

Uncomment the apc.shm_size line in /etc/php.d/40-apcu.ini and change it from 32M to 256M: vim /etc/php.d/40-apcu.ini, then set apc.shm_size=256M

Restart apache and redis:

systemctl restart redis
systemctl restart httpd
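Confirm Redis is answering on the socket (it should print PONG):

sudo -u apache redis-cli -s /run/redis/redis-server.sock ping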

Added logging and phone region to config.php. Make the log directory first: mkdir /var/log/nextcloud/

  'log_type' => 'file',
  'logfile' => '/var/log/nextcloud/nextcloud.log',
  'logfilemode' => 416,
  'default_phone_region' => 'US',
  'logtimezone' => 'America/Phoenix',
  'loglevel' => '1',
  'logdateformat' => 'F d, Y H:i:s',

OP-Cache error

Change opcache.interned_strings_buffer to 16 and uncomment:

vim /etc/php.d/10-opcache.ini

opcache.interned_strings_buffer=16

systemctl restart php-fpm httpd

Error: “Last job execution ran 2 months ago. Something seems wrong.”

Set up cron job for the Apache user: crontab -u apache -e

Add to file that shows up:

*/5  *  *  *  * php -f /var/www/nextcloud/cron.php
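You can also switch Nextcloud’s background job mode to cron (the same as selecting Cron in the admin Basic settings):

sudo -u apache php /var/www/nextcloud/occ background:cron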

Other Errors

  • The “files_reminders” app needs the notification app to work properly. You should either enable notifications or disable files_reminder.

Disabled files_reminder app

  • Server has no maintenance window start time configured. This means resource intensive daily background jobs will also be executed during your main usage time. We recommend to set it to a time of low usage, so users are less impacted by the load caused from these heavy tasks. For more details see the documentation ↗.

Added 'maintenance_window_start' => 1, to config.php

  • Some headers are not set correctly on your instance - The Strict-Transport-Security HTTP header is not set (should be at least 15552000 seconds). For enhanced security, it is recommended to enable HSTS. For more details see the documentation ↗.

Added after closing directory line in SSL config:

vim nextcloud-le-ssl.conf 
Header always set Strict-Transport-Security "max-age=15552000; includeSubDomains; preload"

And add to the bottom of /var/www/nextcloud/.htaccess: Header always set Strict-Transport-Security "max-age=15552000; includeSubDomains"

Change this line in config.php from localhost to server name:

 'overwrite.cli.url' => 'http://cloud.{ site-name }.com',

  • Integrity checker has been disabled. Integrity cannot be verified.

Ignore this

Your webserver does not serve .mjs files using the JavaScript MIME type. This will break some apps by preventing browsers from executing the JavaScript files. You should configure your webserver to serve .mjs files with either the text/javascript or application/javascript MIME type.

  • sudo vim /etc/httpd/conf.d/nextcloud.conf
  • add AddType text/javascript .mjs inside the virtual host block. Restart Apache.

Your web server is not properly set up to resolve “/ocm-provider/”. This is most likely related to a web server configuration that was not updated to deliver this folder directly. Please compare your configuration against the shipped rewrite rules in “.htaccess” for Apache. Add to /var/www/nextcloud/.htaccess:

<IfModule mod_rewrite.c>
  RewriteEngine on
  RewriteRule ^ocm-provider/(.*)$ /index.php/apps/ocm/$1 [QSA,L]
</IfModule>

Your web server is not properly set up to resolve .well-known URLs, failed on: /.well-known/caldav

Added to /var/www/nextcloud/.htaccess:

# .well-known URLs for CalDAV/CardDAV and other services
<IfModule mod_rewrite.c>
  RewriteEngine On
  RewriteRule ^\.well-known/caldav$ /remote.php/dav/ [R=301,L]
  RewriteRule ^\.well-known/carddav$ /remote.php/dav/ [R=301,L]
  RewriteRule ^\.well-known/webfinger$ /index.php/.well-known/webfinger [R=301,L]
  RewriteRule ^\.well-known/nodeinfo$ /index.php/.well-known/nodeinfo [R=301,L]
  RewriteRule ^\.well-known/acme-challenge/.*$ - [L]
</IfModule>

PHP configuration option “output_buffering” must be disabled. Edit /etc/php.ini:

output_buffering = Off

[root@oort31 nextcloud]# echo "output_buffering=off" > .user.ini
[root@oort31 nextcloud]# chown apache:apache .user.ini
chmod 644 .user.ini
[root@oort31 nextcloud]# systemctl restart httpd

Installed tmux: dnf -y install tmux

Apps

Disabled File Reminders app

Add fail2ban

sudo dnf -y install fail2ban

vim /etc/fail2ban/jail.local

[DEFAULT]
bantime = 24h
ignoreip = 10.0.0.0/8
usedns = no

[sshd]
enabled = true
maxretry = 3
findtime = 43200
bantime = 86400

systemctl enable --now fail2ban
fail2ban-client status sshd

Self hosting a Nextcloud Server

This is a step-by-step guide to setting up Nextcloud on a Debian server. You will need a server hosted by a VPS like Vultr. And a Domain hosted by a DNS provider such as Cloudflare

What is Nextcloud?

Nextcloud is so many things. It offers so many features and options, it deserves a bulleted list:

  • Free and open source
  • Cloud storage and syncing
  • Email client
  • Custom browser dashboard with widgets
  • Office suite
  • RSS newsfeed
  • Project organization (deck)
  • Notebook
  • Calendar
  • Task manager
  • Connect to decentralized social media (like Mastodon)
  • Replacement for all of google’s services
  • Create web forms or surveys

It is also free and open source. This means the source code is available to all, and hosting it yourself means you can guarantee that your data isn’t being shared.

As you can see, Nextcloud is feature-packed and offers an all-in-one solution for many needs. The setup is fairly simple!

Install Dependencies

sudo apt update 

Sury Dependencies

sudo apt install software-properties-common ca-certificates lsb-release apt-transport-https 

Enable Sury Repository

sudo sh -c 'echo "deb https://packages.sury.org/php/ $(lsb_release -sc) main" > /etc/apt/sources.list.d/php.list' 

Import the GPG key for the repository

wget -qO - https://packages.sury.org/php/apt.gpg | sudo apt-key add - 

Install PHP 8.2

https://computingforgeeks.com/how-to-install-php-8-2-on-debian/?expand_article=1 (This is also part of the other dependencies install command below)

sudo apt install php8.2 

Install other dependencies:

apt install -y nginx python3-certbot-nginx mariadb-server php8.2 php8.2-{fpm,bcmath,bz2,intl,gd,mbstring,mysql,zip,xml,curl}

Improving Nextcloud server performance

Adding more child processes for PHP to use:

vim /etc/php/8.2/fpm/pool.d/www.conf

# update the following parameters in the file
pm = dynamic
pm.max_children = 120
pm.start_servers = 12
pm.min_spare_servers = 6
pm.max_spare_servers = 18

Start your MariaDB server:

systemctl enable mariadb --now

Set up a SQL Database

Nextcloud needs some tables set up in order to store information in a database. First set up a secure SQL database:

sudo mysql_secure_installation

Say “Yes” to the prompts and enter root password:

Switch to unix_socket authentication [Y/n]: Y
Change the root password? [Y/n]: Y	# enter password.
Remove anonymous users? [Y/n]: Y
Disallow root login remotely? [Y/n]: Y
Remove test database and access to it? [Y/n]: Y
Reload privilege tables now? [Y/n]: Y

Sign in to your SQL database with the password you just chose:

mysql -u root -p

Creating a database for NextCloud

While signed in with the mysql command, enter the commands below one at a time. Make sure to replace the username and password. But leave localhost as is:

CREATE DATABASE nextcloud;
GRANT ALL ON nextcloud.* TO 'david'@'localhost' IDENTIFIED BY '@Rfanext12!';
FLUSH PRIVILEGES;
EXIT;

Install SSL with Certbot

Obtain an SSL certificate. See my website setup post for information about Certbot and nginx setup.

certbot certonly --nginx -d nextcloud.example.com

Create a CNAME record for DNS.

You will need to have a domain name set up for your server. I use Cloudflare to manage my DNS records. You will want to make a CNAME record for your nextcloud subdomain.

Just add “nextcloud” as the name and “yourwebsite.com” as the content. This will make “nextcloud.yourwebsite.com” resolve to your server. Make sure to select “DNS Only” under proxy status.

Nginx Setup

Edit your sites-available config at /etc/nginx/sites-available/nextcloud. See comments in the following text box:

vim /etc/nginx/sites-available/nextcloud

# Add this to the file:
# replace example.org with your domain name
# use the following vim command to make this easier
# :%s/example.org/perfectdarkmode.com/g
# ^ this will replace all instances of example.org with perfectdarkmode.com. Replace with your domain

upstream php-handler {
    server unix:/var/run/php/php8.2-fpm.sock;
    # use either the socket above or the TCP address below, not both:
    # server 127.0.0.1:9000;
}
map $arg_v $asset_immutable {
    "" "";
    default "immutable";
}
server {
    listen 80;
    listen [::]:80;
    server_name nextcloud.example.org ;
    return 301 https://$server_name$request_uri;
}
server {
    listen 443      ssl http2;
    listen [::]:443 ssl http2;
    server_name nextcloud.example.org ;
    root /var/www/nextcloud;
    ssl_certificate     /etc/letsencrypt/live/nextcloud.example.org/fullchain.pem ;
    ssl_certificate_key /etc/letsencrypt/live/nextcloud.example.org/privkey.pem ;
    client_max_body_size 512M;
    client_body_timeout 300s;
    fastcgi_buffers 64 4K;
    gzip on;
    gzip_vary on;
    gzip_comp_level 4;
    gzip_min_length 256;
    gzip_proxied expired no-cache no-store private no_last_modified no_etag auth;
    gzip_types application/atom+xml application/javascript application/json application/ld+json application/manifest+json application/rss+xml application/vnd.geo+json application/vnd.ms-fontobject application/wasm application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/bmp image/svg+xml image/x-icon text/cache-manifest text/css text/plain text/vcard text/vnd.rim.location.xloc text/vtt text/x-component text/x-cross-domain-policy;
    client_body_buffer_size 512k;
    add_header Referrer-Policy                      "no-referrer"   always;
    add_header X-Content-Type-Options               "nosniff"       always;
    add_header X-Download-Options                   "noopen"        always;
    add_header X-Frame-Options                      "SAMEORIGIN"    always;
    add_header X-Permitted-Cross-Domain-Policies    "none"          always;
    add_header X-Robots-Tag                         "none"          always;
    add_header X-XSS-Protection                     "1; mode=block" always;
    fastcgi_hide_header X-Powered-By;
    index index.php index.html /index.php$request_uri;
    location = / {
        if ( $http_user_agent ~ ^DavClnt ) {
            return 302 /remote.php/webdav/$is_args$args;
        }
    }
    location = /robots.txt {
        allow all;
        log_not_found off;
        access_log off;
    }
    location ^~ /.well-known {
        location = /.well-known/carddav { return 301 /remote.php/dav/; }
        location = /.well-known/caldav  { return 301 /remote.php/dav/; }
        location /.well-known/acme-challenge    { try_files $uri $uri/ =404; }
        location /.well-known/pki-validation    { try_files $uri $uri/ =404; }
        return 301 /index.php$request_uri;
    }
    location ~ ^/(?:build|tests|config|lib|3rdparty|templates|data)(?:$|/)  { return 404; }
    location ~ ^/(?:\.|autotest|occ|issue|indie|db_|console)                { return 404; }
    location ~ \.php(?:$|/) {
        # Required for legacy support
        rewrite ^/(?!index|remote|public|cron|core\/ajax\/update|status|ocs\/v[12]|updater\/.+|oc[ms]-provider\/.+|.+\/richdocumentscode\/proxy) /index.php$request_uri;
        fastcgi_split_path_info ^(.+?\.php)(/.*)$;
        set $path_info $fastcgi_path_info;
        try_files $fastcgi_script_name =404;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $path_info;
        fastcgi_param HTTPS on;
        fastcgi_param modHeadersAvailable true;
        fastcgi_param front_controller_active true;
        fastcgi_pass php-handler;
        fastcgi_intercept_errors on;
        fastcgi_request_buffering off;
        fastcgi_max_temp_file_size 0;
    }
    location ~ \.(?:css|js|svg|gif|png|jpg|ico|wasm|tflite|map)$ {
        try_files $uri /index.php$request_uri;
        add_header Cache-Control "public, max-age=15778463, $asset_immutable";
        access_log off;     # Optional: Don't log access to assets
        location ~ \.wasm$ {
            default_type application/wasm;
        }
    }
    location ~ \.woff2?$ {
        try_files $uri /index.php$request_uri;
        expires 7d;
        access_log off;
    }
    location /remote {
        return 301 /remote.php$request_uri;
    }
    location / {
        try_files $uri $uri/ /index.php$request_uri;
    }
}

Enable the site

Create a link between the file you just made and /etc/nginx/sites-enabled

ln -s /etc/nginx/sites-available/nextcloud /etc/nginx/sites-enabled/

Install Nextcloud

Download the latest Nextcloud version. Then extract into /var/www/. Also, update the file’s permissions to give nginx access:

wget https://download.nextcloud.com/server/releases/latest.tar.bz2
tar -xjf latest.tar.bz2 -C /var/www
chown -R www-data:www-data /var/www/nextcloud
chmod -R 755 /var/www/nextcloud

Start and enable php-fpm on startup

sudo systemctl enable php8.2-fpm.service --now

Reload nginx

systemctl reload nginx
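
If the reload fails, you can test the configuration for syntax errors first (standard nginx, nothing Nextcloud-specific):

sudo nginx -t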

Nextcloud occ tool

occ is a built-in Nextcloud tool for when things break. Here is a guide on troubleshooting with occ. The basic command is as follows:

sudo -u www-data php /var/www/nextcloud/occ

Add this as an alias in ~/.bashrc for ease of use.
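
A minimal example of such an alias (name it whatever you like):

alias occ='sudo -u www-data php /var/www/nextcloud/occ'

After reloading the shell with source ~/.bashrc, a quick occ status confirms the alias works.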

You are ready to log in to Nextcloud!

Go to your nextcloud domain in a browser. In my case, I head to nextcloud.perfectdarkmode.com. Fill out the form to create your first Nextcloud user:

  • Choose an admin username and secure password.
  • Leave Data folder as the default value.
  • For Database user, enter the user you set for the SQL database.
  • For Database password, enter the password you chose for the new user in MariaDB.
  • For Database name, enter: nextcloud
  • Leave “localhost” as “localhost”.
  • Click Finish.

Now that you are signed in, here are a few things to start you off:

  • Download the desktop and mobile app and sync all of your data. (covered below)
  • Look at different apps to consolidate your programs all in one place.
  • Put the Nextcloud dashboard as your default browser homepage and customize themes.
  • Set up email integration.

NextCloud desktop synchronization

Install the desktop client (Fedora)

sudo dnf install nextcloud-client

Install on other distros: https://help.nextcloud.com/t/install-nextcloud-client-for-opensuse-arch-linux-fedora-ubuntu-based-android-ios/13657

  1. Run the nextcloud desktop app and sign in.
  2. Choose folders to sync.
  3. Folder will be ~/Nextcloud.
  4. Move everything into your nextcloud folder.

This may break things with file paths, so beware. Now you are ready to use and explore Nextcloud. Here is a video from TechHut to get you started down the Nextcloud rabbit hole.

Change max upload size (default is 512 MB)

Edit /var/www/nextcloud/.user.ini:

upload_max_filesize=16G
post_max_size=16G

Note: php-fpm caches .user.ini settings (user_ini.cache_ttl defaults to 300 seconds), so the change may take a few minutes or a php-fpm restart to apply.

Remove file locks

Put Nextcloud in maintenance mode: Edit config/config.php and change this line:
'maintenance' => true,

Empty table oc_file_locks: Use tools such as phpmyadmin or connect directly to your database and run (the default table prefix is oc_, this prefix can be different or even empty):
DELETE FROM oc_file_locks WHERE 1

mysql -u root -p
MariaDB [(none)]> use nextcloud;
MariaDB [nextcloud]> DELETE FROM oc_file_locks WHERE 1;

When done, take Nextcloud out of maintenance mode by setting 'maintenance' => false, in config/config.php again.

*Figure out a Redis install if this happens regularly:* https://docs.nextcloud.org/server/13/admin_manual/configuration_server/caching_configuration.html#id4
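
As a sketch of where that leads (based on the linked caching documentation; assumes Redis is installed and listening on localhost, and the PHP redis module is available), the relevant config/config.php entries look roughly like this:

'memcache.locking' => '\OC\Memcache\Redis',
'redis' => [
    'host' => 'localhost',
    'port' => 6379,
],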

Using Vagrant on Linux

Vagrant is software that lets you set up multiple, pre-configured virtual machines in a flash. I am going to show you how to do this using Linux and VirtualBox, but you can do this on macOS and Windows as well.

Download Vagrant, VirtualBox and Git.

Vagrant link.

Virtualbox link.

You may want to follow another tutorial for setting up VirtualBox.

Git link.

Installing Git on Windows also installs ssh, which you will use to access your lab. Just make sure you select the option to add Git and the Unix tools to your PATH variable.

Make a Vagrant project folder.

Note: All of these commands are going to be in a Bash command prompt.

mkdir vagranttest

Move in to your new directory.

cd vagranttest

Add and Initialize Your Vagrant Project.

You can find preconfigured virtual machines here.

We are going to use ubuntu/trusty64.

Add the Vagrant box

vagrant box add ubuntu/trusty64

Initialize your new Vagrant box

vagrant init ubuntu/trusty64

Use the dir command to see the contents of this directory.

We are going to edit this Vagrantfile to set up our multiple configurations.

vim Vagrantfile

Here is the new config without all of the commented lines. Add this (minus the top line) under Vagrant.configure("2") do |config|.

Vagrant.configure("2") do |config|  
  config.vm.box = "ubuntu/trusty64"  
  config.vm.define "server1" do |server1|  
    server1.vm.hostname = "server1"  
    server1.vm.network "private_network", ip: "10.1.1.2"  
  end  
  config.vm.define "server2" do |server2|  
    server2.vm.hostname = "server2"  
    server2.vm.network "private_network", ip: "10.1.1.3"  
  end  
end

Now save your Vagrant file in Vim.

Bring up your selected vagrant boxes:

vagrant up

Now if you open VirtualBox, you should see the new machines running in headless mode. This means that the machines have no user interface.

SSH into server1

vagrant ssh server1

You are now in server1’s terminal.

From server1, ssh into server2

ssh 10.1.1.3

Success! You are now in server2 and can access both machines from your network. Just enter “exit” to return to the previous terminal.

Additional Helpful Vagrant Commands.

Without the machine name specified, vagrant commands will work on all virtual machines in your vagrant folder. I’ve thrown in a couple examples using [machine-name] at the end.

Shut down Vagrant machines

vagrant halt

Shut down only one machine

vagrant halt [machine-name]

Suspend and resume a machine

vagrant suspend
vagrant resume

Restart a virtual machine

vagrant reload

Destroy a virtual machine

vagrant destroy [machine-name]

Show running vms

vagrant status

List Vagrant options

vagrant

Playground for future labs

This type of deployment is going to be the bedrock of many Linux and Red Hat labs. You can easily use pre-configured machines to create a multi-machine environment. This is also a quick way to test your network and server changes without damaging anything.

Now go set up a Vagrant lab yourself and let me know what you plan to do with it!

What is Vagrant?

  • Easy to configure, reproducible environments
  • Provisions virtualbox vms
  • Vagrant box: OS image

Syntax:

vagrant box add user/box

Add centos7 box

vagrant box add jasonc/centos7

Many public boxes to download

Vagrant project = folder with a vagrant file

Install Vagrant here: https://www.vagrantup.com/downloads

Make a vagrant folder:

mkdir vm1
cd vm1

initialize vagrant project:

vagrant init jasonc/centos7

bring up all vms defined in the vagrant file:

vagrant up

vagrant will import the box into virtualbox and start it

the vm is started in headless mode

(there is no user interface)

Vagrant up / multi machine

Bring up only one specific vm

  • vagrant up [vm-name]

SSH Vagrant

  • vagrant ssh [vm_name] or vagrant ssh if there is only one vm in the vagrant file

Need to download ssh for windows

downloading git will install this:

https://desktop.github.com/

Shut down vagrant machines vagrant halt

Shutdown only one machine vagrant halt [vm]

Saves present state of the machine

just run vagrant up without having to import the machines again

Suspend the machine vagrant suspend [VM]

Resume vagrant resume [VM]

Destroy VM vagrant destroy [VM]

List options vagrant

Vagrant command works on the vagrant folder that you are in

Vagrant File

Vagrant.configure(2) do |config|
  config.vm.box = "jasonc/centos7"
  config.vm.hostname = "linuxsvr1"
  # (default files)
  config.vm.network "private_network", ip: "10.2.3.4"
  config.vm.provider "virtualbox" do |vb|
    vb.gui = true
    vb.memory = "1024"
  end
  # (shell provisioner)
  config.vm.provision "shell", path: "setup.sh"
end

Configuring a multi machine setup:

Specify common configurations at the top of the file

Vagrant.configure(2) do |config|
  config.vm.box = "jasonc/centos7"

  config.vm.define "server1" do |server1|
    server1.vm.hostname = "server1"
    server1.vm.network "private_network", ip: "10.2.3.4"
  end

  config.vm.define "server2" do |server2|
    server2.vm.hostname = "server2"
    server2.vm.network "private_network", ip: "10.2.3.5"
  end
end

You can search for vagrant boxes at https://app.vagrantup.com/boxes/search

Course software downloads: http://mirror.linuxtrainingacademy.com/

Install Git: https://git-scm.com/download/win

  • make sure to check option for git and unit tools to be added to the PATH

vagrant ssh

  • connect to the vagrant machine in the folder that you are in
  • the default password is vagrant
  • type 'exit' to return to the prompt

vagrant halt

  • stop the vm and save its current state

vagrant reload

  • restarts the vm

vagrant status

  • shows running vms in that folder

You can access files in the vagrant directory from both VMs

Example RHEL8 Config

Vagrant.configure("2") do |config|

config.vm.box = "generic/rhel8"

config.vm.define "server1" do |server1|

server1.vm.hostname = "server1.example.com"

server1.vm.network "private_network", ip: "192.168.1.110"

config.disksize.size = '10GB'

end

config.vm.define "server2" do |server2|

server2.vm.hostname = "server2.example.com"

server2.vm.network "private_network", ip: "192.168.1.120"

config.disksize.size = '16GB'

end

config.vm.provider "virtualbox" do |vb|

vb.memory = "2048"

end

end

Plugin to change the disk size:

vagrant plugin install vagrant-disksize

Vim Guide

Vim (Vi Improved)

Vim stands for Vi Improved; as the name suggests, it is an improved version of the vi text editor.

Lightweight

Start Vim

vim

Vim Search Patterns

Moving the Cursor

h or left arrow - move left one character
k or up arrow - move up one line
j or down arrow - move down one line
l or right arrow - move right one character

Different Vim Modes

i - enter INSERT mode from command mode
esc - go back to command mode
v - visual mode

Vim Appending Text

Enter these while in command mode to switch into insert mode:

i - insert text before the cursor
O - insert text on the previous line
o - insert text on the next line
a - append text after cursor
A - append text at the end of the line

Vim editing

x - cut the selected text; also used for deleting characters
dd - delete the current line
y - yank (copy) whatever is selected
yy - yank (copy) the current line
p - paste the copied text after the cursor

Vim Saving and exiting

:w - writes or saves the file
:q - quit out of vim
:wq - write and then quit
:q! - quit out of vim without saving the file
ZZ - equivalent of :wq, but one character faster

u - undo your last action
Ctrl-r - redo your last action
:% sort - sort lines

Vim Splits

Add these key mappings to .vimrc for easy navigation between splits, saving a keystroke. Instead of Ctrl-w then j, it's just Ctrl-j:

nnoremap <C-J> <C-W><C-J> 
nnoremap <C-K> <C-W><C-K> 
nnoremap <C-L> <C-W><C-L> 
nnoremap <C-H> <C-W><C-H>

Open file in new split

:vsp filename
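
For a horizontal split instead (stock Vim, no plugin required):

:sp filename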

https://github.com/preservim/nerdtree

Find and Replace

https://linuxize.com/post/vim-find-replace/

Find and Replace Text in File(s) with Vim

Find and replace in a single file

Open the file in Vim. This command will replace all occurrences of the word “foo” with “bar”:

:%s/foo/bar/g

% - apply to whole file
s - substitution
g - operate on all results

Find and replace a string in all files in current directory

In Vim, select files with :args. Use a glob to select the files you want. Select all files in the current directory with *:

:args *

You can also select all recursively:

:args **

Run :args by itself to see which files are selected:

:args

Perform substitution with argdo

This applies the replacement command to all selected args:

:argdo %s/foo/bar/g | update
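
A small refinement: adding Vim's e substitution flag skips files with no match instead of stopping on an error:

:argdo %s/foo/bar/ge | update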

Nerd Tree Plugin

Add to .vimrc

call plug#begin()
Plug 'preservim/nerdtree'

call plug#end()

nnoremap <leader>n :NERDTreeFocus<CR>
nnoremap <C-n> :NERDTree<CR>
nnoremap <C-t> :NERDTreeToggle<CR>
nnoremap <C-f> :NERDTreeFind<CR>

Vim Calendar

https://blog.mague.com/?p=602

Add to .vimrc

:auto FileType vimwiki map <leader>d :VimwikiMakeDiaryNote
function! ToggleCalendar()
  execute ":Calendar"
  if exists("g:calendar_open")
    if g:calendar_open == 1
      execute "q"
      unlet g:calendar_open
    else
      let g:calendar_open = 1
    end
  else
    let g:calendar_open = 1
  end
endfunction
:auto FileType vimwiki map <leader>c :call ToggleCalendar()

Vimwiki

Cheat sheet

http://thedarnedestthing.com/vimwiki%20cheatsheet

Set up

Make sure git is installed: https://github.com/git-guides/install-git

Check git version

git --version

Install git with dnf

sudo dnf install git-all

https://github.com/junegunn/vim-plug

Download plug.vim and put it in ~/.vim/autoload

curl -fLo ~/.vim/autoload/plug.vim --create-dirs \
    https://raw.githubusercontent.com/junegunn/vim-plug/master/plug.vim

Create ~/.vimrc

touch ~/.vimrc

Add to ~/.vimrc

Installation using Vim-Plug

Install Vim Plug

curl -fLo ~/.vim/autoload/plug.vim --create-dirs \
    https://raw.githubusercontent.com/junegunn/vim-plug/master/plug.vim

Add the following to the plugin-configuration in your vimrc:

set nocompatible  
filetype plugin on  
syntax on

call plug#begin()  
Plug 'vimwiki/vimwiki'

call plug#end()

let mapleader=" "  
let wiki_1 = {}  
let wiki_1.path = '~/Documents/PerfectDarkMode/'  
let wiki_1.syntax = 'markdown'  
let wiki_1.ext = ''  
let wiki_2 = {}  
let wiki_2.path = '~/Documents/vim/wiki_personal/'  
let wiki_2.syntax = 'markdown'  
let wiki_2.ext = ''  
let g:vimwiki_list = [wiki_1, wiki_2]  

Then run :PlugInstall.

<Leader>ws - select which wiki to use

Basic Markup

= Header1 =
== Header2 ==
=== Header3 ===


*bold* -- bold text
_italic_ -- italic text

[[wiki link]] -- wiki link
[[wiki link|description]] -- wiki link with description

Lists

* bullet list item 1
    - bullet list item 2
    - bullet list item 3
        * bullet list item 4
        * bullet list item 5
* bullet list item 6
* bullet list item 7
    - bullet list item 8
    - bullet list item 9

1. numbered list item 1
2. numbered list item 2
    a) numbered list item 3
    b) numbered list item 4 

For other syntax elements, see :h vimwiki-syntax

Vimwiki Table of Contents

:VimwikiTOC Create or update the Table of Contents for the current wiki file. See |vimwiki-toc|.

Table of Contents vimwiki-toc vimwiki-table-of-contents

You can create a “table of contents” at the top of your wiki file. The command |:VimwikiTOC| creates the magic header = Contents = in the current file and below it a list of all the headers in this file as links, so you can directly jump to specific parts of the file.

For the indentation of the list, the value of |vimwiki-option-list_margin| is used.

If you don’t want the TOC to sit in the very first line, e.g. because you have a modeline there, put the magic header in the second or third line and run :VimwikiTOC to update the TOC.

If English is not your preferred language, set the option |g:vimwiki_toc_header| to your favorite translation.

If you want to keep the TOC up to date automatically, use the option |vimwiki-option-auto_toc|.

vimwiki-option-auto_toc

Key: auto_toc
Default value: 0
Values: 0, 1

Description: Set this option to 1 to automatically update the table of contents when the current wiki page is saved:

let g:vimwiki_list = [{'path': '~/my_site/', 'auto_toc': 1}]

vimwiki-option-list_margin

Key: list_margin
Default value: -1 (0 for markdown)

Description: Width of left-hand margin for lists. When negative, the current 'shiftwidth' is used. This affects the appearance of the generated links (see |:VimwikiGenerateLinks|), the Table of Contents (|vimwiki-toc|), and the behavior of the list manipulation commands |:VimwikiListChangeLvl| and the local mappings |vimwiki_glstar|, |vimwiki_gl#|, |vimwiki_gl-|, |vimwiki_gl1|, |vimwiki_gla|, |vimwiki_glA|, |vimwiki_gli|, |vimwiki_glI| and |vimwiki_i__|.

Note: if you use Markdown or MediaWiki syntax, you probably would like to set this option to 0, because every indented line is considered verbatim text.

g:vimwiki_toc_header_level

The header level of the Table of Contents (see |vimwiki-toc|). Valid values are from 1 to 6.

The default is 1.

g:vimwiki_toc_link_format

The format of the links in the Table of Contents (see |vimwiki-toc|).

Value 0 (Extended): The link contains the description and URL. URL references all levels.
Value 1 (Brief): The link contains only the URL. URL references only the immediate level.

Default: 0

Key bindings

Normal mode

Note: your terminal may prevent capturing some of the default bindings listed below. See :h vimwiki-local-mappings for suggestions for alternative bindings if you encounter a problem.

Basic key bindings

  • <Leader>ww – Open default /wiki index file.
  • <Leader>wt – Open default /wiki index file in a new tab.
  • <Leader>ws – Select and open /wiki index file.
  • <Leader>wd – Delete /wiki file you are in.
  • <Leader>wr – Rename /wiki file you are in.
  • <Enter> – Follow/Create /wiki link.
  • <Shift-Enter> – Split and follow/create /wiki link.
  • <Ctrl-Enter> – Vertical split and follow/create /wiki link.
  • <Backspace> – Go back to parent(previous) /wiki link.
  • <Tab> – Find next /wiki link.
  • <Shift-Tab> – Find previous /wiki link.

Advanced key bindings

Refer to the complete documentation at :h vimwiki-mappings to see many more bindings.

Commands

  • :Vimwiki2HTML – Convert current wiki link to HTML.
  • :VimwikiAll2HTML – Convert all your wiki links to HTML.
  • :help vimwiki-commands – List all commands.
  • :help vimwiki – General vimwiki help docs.

Diary

alias

alias todo='vim -c VimwikiDiaryIndex'

Hotkeys

:VimwikiDiaryGenerateLinks
^w^i - Generate links
^w^w - open today
^wi - Open diary index
ctrl + up - previous day
ctrl + down - next day

  • How to create Weekly, Monthly, and yearly notes
  • How to do a template for daily
  • set folder location for diary

Diary Template

https://frostyx.cz/posts/vimwiki-diary-template

Nested folder structure

[dev](dev/index)

Say yes to make new directory

wiki

Convert to HTML live, with some design tips: https://www.youtube.com/watch?v=A1YgbAp5YRc

https://github.com/Dynalonwiki

Taskwarrior

https://www.youtube.com/watch?v=UuHJloiDErM requires neovim?

taskwiki

vimwiki integration with task warrior https://github.com/tools-life/taskwiki https://www.youtube.com/watch?v=UuHJloiDErM

Ctrl P

Install

Plug 'ctrlpvim/ctrlp.vim'

You Need to Learn Man Pages

https://www.youtube.com/watch?v=RzAkjX_9B7E&t=295s

Man (manual) pages are the built in help system for Linux. They contain documentation for most commands.

Run the man command on a command to get to its man page:

man man

Navigating a man page

h

  • Get help

q

  • Quit out of the man page

Man uses less

^ means Ctrl

^f Forward one page

^b backward one page

you can use a number (#) followed by a command to repeat it that many times

g first line in file

G last line in file

CR means press enter

Searching

/searchword

press enter to jump to the first occurrence of the searched word

n to jump to next match

N to go to previous match

?searchword to do a backward search (n and N are reversed when going through results)

Man page conventions

bold text - type as shown

italic text - replace with arguments

  • Italic may not render in terminal and may be underlined or colored text instead.

[-abc] optional

-a | -b Options separated by a pipe symbol cannot be used together.

argument … (followed by 3 dots) can be repeated. (Argument is repeatable)

[expression] … entire expression within [ ] is repeatable.

Parts of a man page

Name

  • name of command

Synopsis

  • How to use the command

When you see file in a man page, think file and/or directory

Description

  • short and long options do the same thing

Current section number is printed at the top left of the man page.

-k to search sections using apropos

[root@server30 ~]# man -k unlink
mq_unlink (2)        - remove a message queue
mq_unlink (3)        - remove a message queue
mq_unlink (3p)       - remove a message queue (REALT...
sem_unlink (3)       - remove a named semaphore
sem_unlink (3p)      - remove a named semaphore
shm_open (3)         - create/open or unlink POSIX s...
shm_unlink (3)       - create/open or unlink POSIX s...
shm_unlink (3p)      - remove a shared memory object...
unlink (1)           - call the unlink function to r...
unlink (1p)          - call theunlink() function
unlink (2)           - delete a name and possibly th...
unlink (3p)          - remove a directory entry
unlinkat (2)         - delete a name and possibly th...

Shows the manual section number in ()

The sections that end in p are POSIX documentation. These are not specific to Linux.

[root@server30 ~]# man -k "man pages"
lexgrog (1)          - parse header information in man pages
man (7)              - macros to format man pages
man-pages (7)        - conventions for writing Linux man pages
man.man-pages (7)    - macros to format man pages
[root@server30 ~]# man man-pages

Use man-pages to learn more about man pages

Sections within a manual page
       The list below shows conventional or suggested sections.  Most manual
       pages should include at least the highlighted  sections.   Arrange  a
       new manual page so that sections are placed in the order shown in the
       list.

              NAME
              LIBRARY          [Normally only in Sections 2, 3]
              SYNOPSIS
              CONFIGURATION    [Normally only in Section 4]
              DESCRIPTION
              OPTIONS          [Normally only in Sections 1, 8]
              EXIT STATUS      [Normally only in Sections 1, 8]
              RETURN VALUE     [Normally only in Sections 2, 3]
              ERRORS           [Typically only in Sections 2, 3]
              ENVIRONMENT
              FILES
              ATTRIBUTES       [Normally only in Sections 2, 3]
              VERSIONS         [Normally only in Sections 2, 3]
              STANDARDS
              HISTORY
              NOTES
              CAVEATS
              BUGS
              EXAMPLES
              AUTHORS          [Discouraged]
              REPORTING BUGS   [Not used in man-pages]
              COPYRIGHT        [Not used in man-pages]
              SEE ALSO

Shell builtins do not have man pages. Look at the shell man page for info on them:

man bash

Search for the Shell Builtins section: /SHELL BUILTIN COMMANDS

You can find help on builtins with the help command:

david@fedora:~$ help hash
hash: hash [-lr] [-p pathname] [-dt] [name ...]
    Remember or display program locations.
    
    Determine and remember the full pathname of each command NAME.  If
    no arguments are given, information about remembered commands is displayed.
    
    Options:
      -d	forget the remembered location of each NAME
      -l	display in a format that may be reused as input
      -p pathname	use PATHNAME as the full pathname of NAME
      -r	forget all remembered locations
      -t	print the remembered location of each NAME, preceding
    		each location with the corresponding NAME if multiple
    		NAMEs are given
    Arguments:
      NAME	Each NAME is searched for in $PATH and added to the list
    		of remembered commands.
    
    Exit Status:
    Returns success unless NAME is not found or an invalid option is given.

help without any arguments displays commands you can get help on.

david@fedora:~/Documents/davidvargas/davidvargasxyz.github.io$ help help
help: help [-dms] [pattern ...]
    Display information about builtin commands.
    
    Displays brief summaries of builtin commands.  If PATTERN is
    specified, gives detailed help on all commands matching PATTERN,
    otherwise the list of help topics is printed.
    
    Options:
      -d	output short description for each topic
      -m	display usage in pseudo-manpage format
      -s	output only a short usage synopsis for each topic matching
    		PATTERN
    
    Arguments:
      PATTERN	Pattern specifying a help topic
    
    Exit Status:
    Returns success unless PATTERN is not found or an invalid option is given.

The type command tells you what type of command something is.
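
For example, running type against a builtin versus an external command (output as bash prints it; the path may differ on your system):

david@fedora:~$ type cd
cd is a shell builtin
david@fedora:~$ type vim
vim is /usr/bin/vim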

Using man on some shell builtins brings you to the bash man page Shell Builtin Section

Many commands support -h or --help options to get quick info on a command.

Subsections of Users and Groups

Advanced User Management

Local User Authentication Files

  • Three supported account types: root, normal, service
  • root
    • has full access to all services and administrative functions on the system.
    • created by default during installation.
  • Normal
    • user-level privileges
    • cannot perform any administrative functions
    • can run applications and programs that have been authorized.
  • Service
    • take care of their respective services, which include apache, ftp, mail, and chrony.
  • User account information for local users is stored in four files that are located in the /etc directory.
    • passwd, shadow, group, and gshadow (user authentication files)
    • updated when a user or group account is created, modified, or deleted.
    • referenced to check and validate the credentials for a user at the time of their login attempt,
    • system creates their automatic backups by default as passwd-, shadow-, group-, and gshadow- in the /etc directory.

/etc/passwd

  • vital user login data
  • each row hold info for one user
  • 644 permissions by default
  • 7 fields per row (see the sample entry after this list)
    • login name
      • up to 255 characters
      • _ and - characters are supported
      • not recommended to include special characters and uppercase letters in login names.
    • password
      • “x” in this field points to /etc/shadow for actual password.
      • “*” identifies disabled account
      • Can also include a hashed password (RHEL uses SHA-512 by default)
    • UID
      • Number between 0 and 4.2 billion
      • UID 0 is reserved for root account
      • UIDs 1-200 are used by Red Hat for core service accounts
      • UIDs 201-999 are reserved for non-core service accounts
      • UIDs 1000 and above are for normal user accounts (starts at 1000 by default)
    • GID
      • GID that matches entry in /etc/group (primary group)
      • Group for every user by default that matches UID
    • Comments (GECOS) or (GCOS)
      • general comments about the user
    • Home Directory
      • absolute path to the user home directory.
    • Shell
      • absolute path of the shell file for the user’s primary shell after logging in. (default = (/bin/bash))
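
A sample passwd entry (hypothetical user) showing all seven fields in order:

user1:x:1000:1000:First Lab User:/home/user1:/bin/bash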

/etc/shadow

  • no access permissions for any user (even root) (but owned by root)
  • secure password control (shadow password)
  • user passwords are hashed and stored in a more secure file /etc/shadow
  • limits on user passwords in terms of expiration, warning period, etc. applied on per-user basis
  • limits and other settings are defined in /etc/login.defs
  • user is initially checked in the passwd file for existence and then in the shadow file for authenticity.
  • contains user authentication and password aging information.
  • Each row in the file corresponds to one entry in the passwd file.
  • login names are used as a common key between the shadow and passwd files.
  • nine colon-separated fields per line entry (see the sample entry after this list).
    • Field 1 (Login Name)
    • Field 2 (Encrypted Password)
      • ! at the beginning of this field shows that the user account is locked
      • if the field is empty, the user has passwordless entry
    • Field 3 (Last Change)
      • Number of days (lastchg) since the UNIX epoch (January 01, 1970 00:00:00 UTC) when the password was last modified.
      • An empty field means the password aging features are inactive.
      • 0 forces the user to change their password upon next login.
    • Field 4 (Minimum)
      • number of days (mindays) that must elapse before the user is allowed to change their password
      • can be altered using the chage command with the -m option or the passwd command with the -n option.
      • 0 or null in this field disables this feature.
    • Field 5 (Maximum)
      • maximum number of days (maxdays) before the user password expires and must be changed.
      • may be altered using the chage command with the -M option or the passwd command with the -x option.
      • null value here disables this feature along with other features such as the maximum password age, warning alerts, and the user inactivity period.
    • Field 6 (Warning)
      • number of days (warndays) the user gets warnings for changing their password before it expires.
      • may be altered using the chage command with the -W option or the passwd command with the -w option.
      • 0 or null in this field disables this feature.
    • Field 7 (Password Expiry)
      • maximum allowable number of days for the user to be able to log in with the expired password (inactivity period).
      • may be altered using the chage command with the -I option or the passwd command with the -i option.
      • empty field disables this feature.
    • Field 8 (Account Expiry)
      • number of days since the UNIX time when the user account will expire and no longer be available.
      • may be altered using the chage command with the -E option.
      • empty field disables this feature.
    • Field 9 (Reserved): Reserved for future use.
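
A sample shadow entry for a hypothetical user, with the salt and hash shortened to placeholders, showing the nine fields in order (login, password, lastchg, mindays, maxdays, warndays, inactivity, expiry, reserved):

user1:$6$<salt>$<hash>:19000:7:28:5:5::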

/etc/group

  • plaintext file and contains critical group information.
  • 644 permissions by default and owned by root.
  • Each row in the file stores information for one group entry.
  • Every user on the system must be a member of at least one group (User Private Group (UPG)).
  • a group name matches the username it is associated with by default
  • four colon-separated fields per line entry.
    • Field 1 (Group Name):
      • Holds a group name that must begin with a letter. Group names of up to 255 characters, including the uppercase, underscore (_), and hyphen (-) characters, are also supported (though not recommended).
    • Field 2 (Encrypted Password):
      • Can be empty or contain an “x” (points to the /etc/gshadow file for the actual password), or a hashed group-level password.
      • can set a password on a group for non-members to be able to change their group identity temporarily using the newgrp command.
      • non-members must enter the correct password in order to do so.
    • Field 3 (GID):
      • Holds a GID, that is also placed in the GID field of the passwd file.
      • By default, groups are created with GIDs starting at 1000 and with the same name as the username.
      • system allows several users to belong to a single group
      • also allows a single user to be a member of multiple groups at the same time.
    • Field 4 (Group Members):
      • Lists the membership for the group. (user’s primary group is always defined in the GID field of the passwd file.)
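
A sample group entry (hypothetical group and members) showing the four fields:

dba:x:5000:user1,user100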

/etc/gshadow

  • no access permissions for any user (even root)
  • group passwords are hashed and stored
  • group names are used as a common key between the gshadow and group files.
  • 000 permissions and owned by root
  • four colon-separated fields
    • Field 1 (Group Name):
      • Consists of a group name as appeared in the group file.
    • Field 2 (Encrypted Password):
      • Can contain a hashed password, which may be set with the gpasswd command for non-group members to access the group temporarily using the newgrp command.
      • single exclamation mark (!) or a null value in this field allows group members password-less access and restricts non-members from switching into this group.
    • Field 3 (Group Administrators):
      • Lists usernames of group administrators that are authorized to add or remove members with the gpasswd command.
    • Field 4 (Members):
      • comma-separated list of members.
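
A matching hypothetical gshadow entry (group name, encrypted password, administrators, members):

dba:!:user1:user1,user100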

gpasswd command:

  • add group administrators.
  • add or delete group members.
  • assign or revoke a group-level password.
  • disable the ability of the newgrp command to access a group.
  • picks up the default values from the /etc/login.defs file.

useradd and login.defs configuration files

useradd command

  • picks up the default values from the /etc/default/useradd and /etc/login.defs files for any options that are not specified at the command line when executing it.
  • login.defs file is also consulted by the usermod, userdel, chage, and passwd commands
  • Both files store several defaults including those that affect the password length and password lifecycle.

/etc/default/useradd Default Directives:

  • starting GID (GROUP) (provided the USERGROUPS_ENAB directive in the login.defs file is set to no)
  • home directory location (HOME)
  • number of inactivity days between password expiry and permanent account disablement (INACTIVE)
  • account expiry date (EXPIRE),
  • login shell (SHELL),
  • skeleton directory location to copy user initialization files from (SKEL)
  • whether to create mail spool directory (CREATE_MAIL_SPOOL)
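
For reference, these defaults can be displayed with useradd -D; on a stock RHEL-family system the output typically looks like this (values may differ on your install):

[root@server30 ~]# useradd -D
GROUP=100
HOME=/home
INACTIVE=-1
EXPIRE=
SHELL=/bin/bash
SKEL=/etc/skel
CREATE_MAIL_SPOOL=yes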

/etc/login.defs default directives:

MAIL_DIR

  • mail directory location

PASS_MAX_DAYS, PASS_MIN_DAYS, PASS_MIN_LEN, and PASS_WARN_AGE

  • password aging attributes.

UID_MIN, UID_MAX, GID_MIN, and GID_MAX

  • ranges of UIDs and GIDs to be allocated to new users and groups

SYS_UID_MIN, SYS_UID_MAX, SYS_GID_MIN, and SYS_GID_MAX

  • ranges of UIDs and GIDs to be allocated to new service users and groups

CREATE_HOME

  • whether to create a home directory

UMASK

  • permissions to be set on the user home directory at creation based on this umask value

USERGROUPS_ENAB

  • whether to delete a user’s group (at the time of user deletion) if it contains no more members

ENCRYPT_METHOD

  • encryption method for user passwords

Password Aging attributes

  • Can be done for an individual user or applied to all users.
  • Can prevent users from logging in to the system by locking their access for a period of time or permanently.
  • Must be performed by a user with elevated privileges of the root user.
  • Normal users may be allowed access to privileged commands by defining them appropriately in a configuration file.
  • Each file that exists on the system regardless of its type has an owning user and an owning group.
  • every file that a user creates is in the ownership of that user.
  • ownership may be changed and given to another user by a super user.

Password Aging and management

  • Setting restrictions on password expiry, account disablement, locking and unlocking users, and password change frequency.
  • Can choose to inactivate it completely for an individual user.
  • Stored in the /etc/shadow file (fields 4 to 8) and its default policies in the /etc/login.defs configuration file.
  • aging management tools—chage and passwd
  • usermod command can be used to implement two aging attributes (user expiry and password expiry) and lock and unlock user accounts.

chage command

  • Set or alter password aging parameters on a user account.
  • Changes various fields in the shadow file
  • Switches
    • -d (–lastday)
      • Specifies an explicit date in the YYYY-MM-DD format, or the number of days since the UNIX time when the password was last modified. With -d 0, the user is forced to change the password at next login. It corresponds to field 3 in the shadow file.
    • -E (–expiredate)
      • Sets an explicit date in the YYYY-MM-DD format, or the number of days since the UNIX time on which the user account is deactivated. This feature can be disabled with -E -1. It corresponds to the eighth field in the shadow file.
    • -I (–inactive)
      • Defines the number of days of inactivity after the password expiry and before the account is locked. The user may be able to log in during this period with their expired password. This feature can be disabled with -I -1. It corresponds to field 7 in the shadow file.
    • -l
      • Lists password aging attributes set on a user account.
    • -m (–mindays)
      • Indicates the minimum number of days that must elapse before the password can be changed. A value of 0 allows the user to change their password at any time. It corresponds to field 4 in the shadow file.
    • -M (–maxdays)
      • Denotes the maximum number of days of password validity before the user password expires and it must be changed. This feature can be disabled with -M -1. It corresponds to field 5 in the shadow file.
    • -W (–warndays)
      • Designates the number of days for which the user gets alerts to change their password before it expires. It corresponds to field 6 in the shadow file.

passwd command

  • set or modify a user’s password
  • modify the password aging attributes and
  • lock or unlock account
  • Switches
    • -d (–delete)
      • Deletes a user password
      • does not expire the user account.
    • -e (–expire)
      • Forces a user to change their password upon next logon.
      • sets date to prior to Unix time
    • -i (–inactive)
      • Defines the number of days of inactivity after the password expiry and before the account is locked. (field 7 in shadow file)
    • -l (–lock)
      • Locks a user account.
    • -n (–minimum)
      • Specifies the number of days that must elapse before the password can be changed. (field 4 in shadow file)
    • -S (–status)
      • Displays the status information for a user.
    • -u (–unlock)
      • Unlocks a locked user account.
    • -w (–warning)
      • Designates the number of days for which the user gets alerts to change their password before it actually expires. (field 6 in shadow file)
    • -x (–maximum)
      • Denotes the maximum number of days of password validity before the user password expires and it must be changed. (field 5 in shadow file)

usermod command

  • Modify a user’s attribute
  • Lock or unlock their account
  • Switches
    • -L (–lock)
      • Locks a user account by placing a single exclamation mark (!) at the beginning of the password field and before the hashed password string.
    • -U (–unlock)
      • Unlocks a user’s account by removing the exclamation mark (!) from the beginning of the password field.

Linux Groups and their Management

  • /etc/group
    • group info
  • /etc/login.defs
    • default policies
  • /etc/gshadow
    • group administrator information and group-level passwords
  • group management tools
    • groupadd, groupmod, and groupdel
    • create, alter, and erase groups

groupadd command

  • adds entries to the group and gshadow files for each group added to the system
  • picks up default values from /etc/login.defs
  • Switches
    • -g (–gid)
      • Specifies the GID to be assigned to the group
    • -o (–non-unique)
      • Creates a group with a matching GID of an existing group. When two groups have an identical GID, members of both groups get identical rights on each other’s files. This should only be done in specific situations.
    • -r
      • Creates a system group with a GID below 1000
    • groupname
      • Specifies a group name

groupmod command

  • syntax of this command is very similar to the groupadd with most options identical.
  • Additional flags
    • -n
      • change name of existing group

User Management

Switching Users

su command

Ctrl-d - return to previous user
su - - switch user with startup scripts
-c - issue a command as a user without switching to them

  • root user can switch into any user account that exists on the system without being prompted for that user’s password.
  • switching into the root account to execute privileged actions is not recommended.

whoami command

  • show current user

logname command

  • Identity of the user who originally logged in.

groupdel command

  • removes entries for the specified group from both group and gshadow files.

Doing as Superuser (substitute user)

  • Any normal user that requires privileged access to administrative commands or non-owning files is defined in the sudoers file.
    • File may be edited with a command called visudo
    • Creates a copy of the file as sudoers.tmp and applies the changes there. After the visudo session is over, the updated file overwrites the original sudoers file and sudoers.tmp is deleted.
    • syntax
      • user1 ALL=(ALL) ALL
      • %dba ALL=(ALL) ALL (a group is prefixed by %)
    • Make it so members are not prompted for password
      • user1 ALL=(ALL) NOPASSWD:ALL
      • %dba ALL=(ALL) NOPASSWD:ALL
    • Limit access to a single command
      • user1 ALL=/usr/bin/cat
      • %dba ALL=/usr/bin/cat
  • too many entries can clutter sudoers file. Use aliases instead:
    • User_Alias
      • you can define a User_Alias called PKGADM for user1, user100, and user200. These users may or may not belong to the same Linux group.
    • Cmnd_Alias
      • you can define a Cmnd_Alias called PKGCMD containing yum and rpm package management commands

sudo command

  • /etc/sudoers
  • /etc/sudoers.d/
    • drop-in directory
  • /var/log/secure
    • Sudo logs successful authentication and command data here under the name of the user using the command.

Owning User and Owning Group

  • Every file and directory has an owner.
  • Creator assumes ownership by default.
  • Every user is a member of one or more groups.
  • Owners group is also assigned to file or directory by default.

chown command

  • alter the ownership for files and directories
  • Must have root privileges.
  • Can also change owning group.

chgrp command

  • alter the owning group for files and directories
  • Must have root privileges.

Advanced User Management Labs

Lab: Set and Confirm Password Aging with chage (root)

  1. Set password aging parameters for user100 to mindays (-m) 7, maxdays (-M) 28, and warndays (-W) 5:
chage -m 7 -M 28 -W 5 user100
  2. Confirm:
chage -l user100
  3. Set the account expiry to January 31, 2020:
chage -E 2020-01-31 user100
  4. Verify the new account expiry setting:
chage -l user100

Lab: Set and Confirm Password Aging with passwd (root)

  1. Set password aging attributes for user200 to mindays 10, maxdays 90, and warndays 14:
passwd -n 10 -x 90 -w 14 user200
  2. Confirm:
passwd -S user200
  3. Set the number of inactivity days to 5:
passwd -i 5 user200
  4. Confirm:
passwd -S user200
  5. Ensure that the user is forced to change their password at next login:
passwd -e user200
  6. Confirm:
passwd -S user200

Lab: Lock and Unlock a User Account with usermod and passwd (root)

  1. Obtain the current password information for user200 from the shadow file:
grep user200 /etc/shadow
  2. Lock the account for user200:
usermod -L user200
  3. Confirm:
grep user200 /etc/shadow
  4. Unlock the account with either of the following:
usermod -U user200
or
passwd -u user200
  5. Confirm:
grep user200 /etc/shadow

Lab: Create a Group and Add Members (root)

  1. Create the group linuxadm with GID 5000:
groupadd -g 5000 linuxadm
  2. Create a group called dba with the same GID as that of group linuxadm:
groupadd -o -g 5000 dba
  3. Confirm:
grep linuxadm /etc/group
grep dba /etc/group
  4. Add user1 as a secondary member of group dba using the usermod command. The existing membership for the user must remain intact.
usermod -aG dba user1
  5. Verify the updated group membership information for user1 by extracting the relevant entry from the group file, and running the id and groups commands for user1:
grep dba /etc/group
id user1
groups user1

Lab: Modify and Delete a Group Account (root)

  1. Alter the name of linuxadm to sysadm:
groupmod -n sysadm linuxadm
  2. Change the GID of sysadm to 6000:
groupmod -g 6000 sysadm
  3. Confirm:
grep sysadm /etc/group
grep linuxadm /etc/group
  4. Delete sysadm group and confirm:
groupdel sysadm
grep sysadm /etc/group

Lab: To switch from user1 (assuming you are logged in as user1) into root without executing the startup scripts

su
  1. Switch to user100:
su - user100
  2. See what whoami and logname report now:
whoami
logname
  3. Use su as follows and execute this privileged command to obtain desired results:
su -c 'firewall-cmd --list-services'

Lab: Add user1 to sudo file but only for the cat command.

  1. Open up /etc/sudoers and add the following:
user1 ALL=/usr/bin/cat
  2. Run cat as user1 with and without sudo:
cat /etc/sudoers
sudo cat /etc/sudoers

Lab: Add user and command aliases to the sudoer file.

  1. Add the following to the bottom of the sudoers file:
Cmnd_Alias PKGCMD = /usr/bin/yum, /usr/bin/rpm
User_Alias PKGADM = user1, user100, user200
PKGADM ALL=PKGCMD
  2. Run rpm or yum with sudo as one of the users:
sudo yum

Lab: Take a look at examples in the sudoers file.

cat /etc/sudoers

Lab: Viewing owner and group information

  1. Create a file file1 as user1 in their home directory and exhibit the file’s long listing:
touch file1
ls -l file1
  2. To view the corresponding UID and GID instead, specify the -n option with the command:
ls -ln file1

Lab: Modify File Owner and Owning Group

  1. Change into the /tmp directory and create file10 and dir10:
cd /tmp
touch file10
mkdir dir10
  2. Check and validate that both attributes are set to user1:
ls -l file10
ls -ld dir10
  3. Set the ownership of file10 to user100 and confirm:
sudo chown user100 file10
ls -l file10
  4. Alter the owning group to dba and verify:
sudo chgrp dba file10
ls -l file10
  5. Change the ownership to user200 and owning group to user100 and confirm:
sudo chown user200:user100 file10
ls -l file10
  6. Modify the ownership to user200 and owning group to dba recursively on dir10 and validate:
sudo chown -R user200:dba dir10
ls -ld dir10

Lab: Create User and Configure Password Aging (root)

  1. Create group lnxgrp with GID 6000:
groupadd -g 6000 lnxgrp
  2. Create user user5000 with UID 5000 and GID 6000. Assign this user a password.
useradd -u 5000 -g 6000 user5000
passwd user5000
  3. Establish password aging attributes so that this user cannot change their password within 4 days after setting it and with a password validity of 30 days. This user should start getting warning messages for changing password 10 days prior to account lock down.
chage -m 4 -M 30 -W 10 user5000
  4. This user account needs to expire on the 20th of December, 2021:
chage -E 2021-12-20 user5000

Lab 6-2: Lock and Unlock User (root)

  1. Lock the user account for user5000 using the passwd command:
passwd -l user5000
  2. Confirm by examining the change in the /etc/shadow file:
cat /etc/shadow
  3. Try to log in with user5000 and observe what happens:
su - user1
su - user5000
  4. Use the usermod command to unlock the account:
usermod -U user5000

Basic User Management

Listing Logged-In Users

A list of the users who have successfully signed on to the system with valid credentials can be printed using who and w

who command

  • references the /run/utmp file and displays the information.
  • displays login name of user
  • shows terminal session device filename
  • pts stands for pseudo terminal session
  • shows date and time of user login
  • Shows if terminal session is graphical(:0), remote(IP address), or textual on the console

what command (w)

  • Shows length of time the user has been idle
  • CPU time used by all processes including any existing background jobs attached to this terminal (JCPU),
  • CPU time used by the current process (PCPU),
  • current activity (WHAT).
  • current system time
  • system up duration
  • number of users logged in
  • load averages over the last 1, 5, and 15 minutes
  • load average (CPU load): 0.00 and 1.00 correspond to no load and full load, and a number greater than 1.00 signifies excess load (over 100%).

last command

  • Reports the history of successful user login attempts and system boots
  • Consults the wtmp file located in the /var/log directory.
  • wtmp keeps a record of login/logout activities
    • login time
    • duration a user stayed logged in
    • tty
  • Output
    • Login name
    • Terminal name
    • Terminal name or IP from where connection was established
    • Day, Month, date, and time when the connection was established
    • Log out time or (still logged in)
    • Duration of session
    • Action name (system reboots section)
    • Activity name (system reboots section)
    • Linux kernel version (system reboots section)
    • Day, Month, date, and time when the reboot command was issued (system reboots section)
    • System restart time (system reboots section)
    • Duration the system remained down or (still running) (system reboots section)
    • log filename (wtmp) (last line)

lastb command

  • reports failed login attempts
  • Consults /var/log/btmp
    • record of failed login attempts
    • login name
    • time
    • tty
  • Must be root to run this command
  • Columns
    • name of user
    • protocol used
    • terminal name or ip address
    • Day, Month, Date, and time of the attempt
    • Duration the attempt lasted
    • log filename (btmp) (last line)

lastlog command

  • most recent login evidence info for every user account that exists on the system
  • Consults /var/log/lastlog
    • record of most recent user attempts
    • login name
    • time
    • port (or tty)
    • Columns:
      • Login name of user
      • Terminal name assigned upon Logging in
      • Terminal name or Ip address from where the session was initiated
      • Timestamp for the latest login or “Never logged in”
    • service accounts are used by their respective services, and they are not meant for logging in.

id (identifier) Command

  • displays the calling user’s:
    • UID (User IDentifier)
    • username
    • GID (Group IDentifier)
    • group name
    • all secondary groups the user is a member of
    • SELinux security context

groups Command:

  • lists all groups the calling user is a member of:
  • first group listed is the primary group for the user who executed this command
  • other groups are secondary (or supplementary).
  • can also view group membership information for a different user.

User Account Management

useradd Command

  • add a new user to the system
  • adds entries to the four user authentication files for each account added to the system
  • creates a home directory for the user and copies the default user startup files from the skeleton directory /etc/skel into the user’s home directory
  • used to update the default settings that are used at the time of new user creation for unspecified settings
  • Options
    • -b (–base-dir)
      • Defines the absolute path to the base directory for placing user home directories. The default is /home.
    • -c (–comment)
      • Describes useful information about the user.
    • -d (–home-dir)
      • Defines the absolute path to the user home directory.
    • -D (–defaults)
      • Displays the default settings from the /etc/default/useradd file and modifies them.
    • -e (–expiredate)
      • Specifies a date on which a user account is automatically disabled. The format for the date specification is YYYY-MM-DD.
    • -f (–inactive)
      • Denotes maximum days of inactivity between password expiry and permanent account disablement.
    • -g (–gid)
      • Specifies the primary GID. Without this option, a group account matching the username is created with the GID matching the UID.
    • -G (–groups)
      • Specifies the membership to supplementary groups.
    • -k (–skel)
      • location of the skeleton directory (default is /etc/skel) (stores default user startup files)
        • These files are copied to the user’s home directory at the time of account creation.
        • Three hidden bash shell files: (default)
          • .bash_profile, .bashrc, and .bash_logout
          • You can customize these files or add your own to be used for accounts created thereafter.
    • -m (–create-home)
      • Creates a home directory if it does not already exist.
    • -o (–non-unique)
      • Creates a user account sharing the UID of an existing user.
      • When two users share a UID, both get identical rights on each other’s files.
      • Should only be done in specific situations.
    • -r (–system)
      • Creates a service account with a UID below 1000 and a never-expiring password.
    • -s (–shell)
      • Defines the absolute path to the shell file. The default is /bin/bash.
    • -u (–uid)
      • Indicates a unique UID. Without this option, the next available UID from the /etc/passwd file is used.
    • login
      • Specifies a login name to be assigned to the user account.

usermod Command

  • modify the attributes of an existing user
  • similar syntax to useradd and most switches identical.
  • Options unique to usermod:
    • -a (–append)
      • Adds a user to one or more supplementary groups
    • -l (–login)
      • Specifies a new login name
    • -m (–move-home)
      • Creates a home directory and moves the content over from the old location
    • -G
      • Add a list of groups a user is a member of.

userdel Command

  • to remove a user from the system

passwd Command

  • set or modify a user’s password

No-Login (Non-Interactive) User Account

nologin command

  • /sbin/nologin
  • special purpose program that can be employed for user accounts that do not require login access to the system.
  • located in the /usr/sbin (or /sbin) directory
  • user is refused with the message, “This account is currently not available.”
  • If a custom message is required, you can create a file called nologin.txt in the /etc directory and add the desired text to it.
  • If a no-login user is able to log in with their credentials, there is a problem. Use the grep command against the /etc/passwd file to ensure ‘/sbin/nologin’ is there in the shell field for that user.
  • examples of user accounts that do not require login access are the service accounts such as ftp, apache, and sshd.

Basic User Management Labs

Lab: who

 who

Lab: what

 w

Lab: last

  1. List all user login, logout, and system reboot occurrences:
 last
  2. List system reboot info only:
 last reboot

Lab: lastb

 lastb

Lab: lastlog

 lastlog

Lab: id

  1. View info about the currently active user:
 id
  2. View info about another user:
 id user1

Lab: groups

  1. View current user’s groups:
 groups
  2. View groups of another user:
 groups user1

Lab: user authentication files

  1. List the four files and their backups from the /etc directory:
 ls -l /etc/passwd* /etc/group* /etc/shadow* /etc/gshadow*
  2. View the first and last 3 lines of the passwd file:
 head -3 /etc/passwd ; tail -3 /etc/passwd
  3. Verify the permissions and ownership on the passwd file:
 ls -l /etc/passwd
  4. View the first and last 3 lines of the shadow file:
 head -3 /etc/shadow ; tail -3 /etc/shadow
  5. Verify the permissions and ownership on the shadow file:
 ls -l /etc/shadow
  6. View the first and last 3 lines of the group file:
 head -3 /etc/group ; tail -3 /etc/group
  7. Verify the permissions and ownership on the group file:
 ls -l /etc/group
  8. View the first and last 3 lines of the gshadow file:
 head -3 /etc/gshadow ; tail -3 /etc/gshadow
  9. Verify the permissions and ownership on the gshadow file:
 ls -l /etc/gshadow

Lab: useradd and login.defs

  1. Use the cat or less command to view the useradd file content, or display the settings with the useradd command:
 useradd -D
  2. grep /etc/login.defs for uncommented, non-empty lines:
 grep -v ^# /etc/login.defs | grep -v ^$

Lab: Create a User Account with Default Attributes (root)

  1. Create user2 with all the default directives:
 useradd user2
  2. Assign this user a password and enter it twice when prompted:
 passwd user2
  3. grep for user2: on the authentication files to examine what the useradd command has added:
 cd /etc ; grep user2: passwd shadow group gshadow
  4. Test this new account by logging in as user2 and then run the id and groups commands to verify the UID, GID, and group membership information:
 su - user2
 id
 groups

Lab: Create a User Account with Custom Values

  1. Create user3 with UID 1010, home directory /usr/user3a, and shell /bin/sh:
 useradd -u 1010 -d /usr/user3a -s /bin/sh user3
  2. Assign user1234 as the password (passwords assigned this way are not recommended; however, it is okay in a lab environment):
 echo user1234 | passwd --stdin user3
  3. grep for user3: on the four authentication files to see what was added for this user:
 cd /etc ; grep user3: passwd shadow group gshadow
  4. Test this account by switching to or logging in as user3 and entering user1234 as the password. Run the id and groups commands for further verification.
 su - user3
 id
 groups

Lab: Modify and Delete a User Account

  1. Modify the login name for user2 to user2new, UID to 2000, home directory to /home/user2new, and login shell to /sbin/nologin.
 usermod -l user2new -m -d /home/user2new -s /sbin/nologin -u 2000 user2
  1. Obtain the information for user2new from the passwd file for confirmation:
 grep user2new /etc/passwd
  1. Remove user2new along with their home and mail spool directories:
 userdel -r user2new
  1. Confirm the user deletion:
 grep user2new /etc/passwd
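Because grep exits non-zero when it finds no match, you can also confirm the deletion in a single line (a small sketch):

 grep -q user2new /etc/passwd || echo "user2new has been removed"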

Lab: Create a User Account with No-Login Access (root)

  1. Look at the current nologin users:
 grep nologin /etc/passwd
  1. Create user4 with the non-interactive shell /sbin/nologin:
 useradd -s /sbin/nologin user4
  1. Assign user1234 as password:
 echo user1234 | passwd --stdin user4
  1. grep for user4 on the passwd file and verify the shell field containing the nologin shell:
 grep user4 /etc/passwd
  1. Test this account by attempting to log in or switch:
 su - user4
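The switch should be refused with the default message (or your custom /etc/nologin.txt text, if present):

 This account is currently not available.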

Lab: Check User Login Attempts (root)

  1. Execute the last, lastb, and lastlog commands, and observe the outputs.
 last
 lastb
 lastlog
  1. List the timestamps when the system was last rebooted.
 last | grep reboot
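Note that last reboot, shown in an earlier lab, reports the same information without the pipe.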

Lab 5-2: Verify User and Group Identity (user1)

  1. Run the who and w commands one at a time, and compare the outputs.
 who
 w
  1. Execute the id and groups commands, and compare the outcomes. Examine the extra information that the id command shows, but not the groups command.
 id
 groups

Lab 5-3: Create Users (root)

  1. Create user account user4100 with UID 4100 and home directory under /usr.
 useradd -m -d /usr/user4100 -u 4100 user4100 
  1. Create another user account user4200 with default attributes.
 useradd user4200
  1. Assign both users a password.
 passwd user4100
 passwd user4200
  1. View the contents of the passwd, shadow, group, and gshadow files, and observe what has been added for the two new users.
 cat /etc/passwd
 cat /etc/shadow
 cat /etc/group
 cat /etc/gshadow
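Tip: instead of paging through the full files, you can grep for both new users at once (a sketch using an extended regular expression):

 cd /etc ; grep -E 'user4[12]00:' passwd shadow group gshadow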

Lab: Create User with Non-Interactive Shell (root)

  1. Create user account user4300 with login access disabled.
 useradd -s /sbin/nologin user4300
  1. Assign this user a password.
 passwd user4300
  1. Try to log on with this user and see what is displayed on the screen.
 su - user4300
  1. View the content of the passwd file, and see what is there that prevents this user from logging in.
 cat /etc/passwd

Subsections of Virtualization

RHCSA Vagrant Lab Setup

We are going to use Vagrant to set up two RHEL 8 servers with some custom configuration options. I will include some helpful Vagrant commands at the end if you get stuck.

In this guide, I will be using Fedora 38 as my main operating system. I use Fedora because it is similar in features to Red Hat Linux Distributions. This will give me even more practice for the RHCSA exam as I use it in day-to-day operations.

Note: if you are using Windows, you will need to install ssh. This can be done by installing Git, which installs ssh for you automatically.

You will also need to have the latest version of Virtualbox installed.

Here are the steps:

  1. Download and install Vagrant
  2. Make a new directory for your vagrant lab to live in
  3. Add the vagrant box
  4. Install the Vagrant disk size plugin
  5. Initialize the Vagrant box and Edit the Vagrant file
  6. Bring up the Vagrant box

1. Download and install Vagrant.

In Fedora, this is very easy. Run the following command to download and install Vagrant:

sudo dnf install vagrant

2. Make a new directory for your vagrant lab to live in.

Make your vagrant directory and make it your current working directory:

mkdir Vagrant
cd Vagrant
3. Add the Vagrant box.

vagrant box add generic/rhel8

4. Install the Vagrant disk size plugin.

The disk size plugin will help us set up custom storage sizes. Since we will be re-partitioning storage, this is a useful feature.

vagrant plugin install vagrant-disksize

5. Initialize the Vagrant box and edit the Vagrantfile.

First, initialize the Vagrant box in the vagrant directory:

vagrant init generic/rhel8

After completion, there will now be a file called “Vagrantfile” in your current directory. Since Vim is on the RHCSA exam, it’s wise to practice with it whenever you can. So let’s open the file in Vim:

vim Vagrantfile

You will see a bunch of lines commented out, and a few lines without comments. Go ahead and comment out everything and paste this at the end of the file:

Vagrant.configure("2") do |config


config.vm.box = "generic/rhel8"


config.vm.define "server1" do |server1|


server1.vm.hostname = "server1.example.com"


server1.vm.network "private_network", ip: "192.168.2.110"


config.disksize.size = '10GB'


end


config.vm.define "server2" do |server2|


server2.vm.hostname = "server2.example.com"


server2.vm.network "private_network", ip: "192.168.2.120"


config.disksize.size = '16GB'


end


config.vm.provider "virtualbox" do |vb|


vb.memory = "2048"


end


end|

The configuration file is fairly self-explanatory. Save Vagrantfile and exit Vim. Then, create /etc/vbox/networks.conf and add the following:

* 10.0.0.0/8 192.168.0.0/16
* 2001::/64

This will allow you to be more flexible with what network addresses can be used in VirtualBox.
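On a fresh system the /etc/vbox directory may not exist yet, so create it first (a small sketch; use whichever editor you prefer):

sudo mkdir -p /etc/vbox
sudo vim /etc/vbox/networks.conf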

6. Bring up the Vagrant box.

Now, we bring up the Vagrant box. This will open two Virtual machines in Virtualbox named server1 and server2 in headless mode (there is no GUI).

vagrant up

Great! Now we can use Vagrant to ssh into server1:

vagrant ssh server1

From server1 ssh into server2 using its IP address:

[vagrant@server1 ~]$ ssh 192.168.2.120

Now you are in and ready to stir things up. The last thing you need is some commands to manage your Vagrant machines.

Helpful Vagrant commands.

Shut down Vagrant machines:

vagrant halt

Suspend or resume a machine:

vagrant suspend
vagrant resume

Restart a virtual machine:

vagrant reload

Destroy a Vagrant machine:

vagrant destroy [machine-name]
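For example, vagrant destroy server2 removes only server2; Vagrant prompts for confirmation unless you add the -f flag.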

Show running VMs:

vagrant status

List other Vagrant options:

vagrant

If you are going for the RHCSA, you will no doubt use Vagrant again in the future. And as you can see, it's quick and simple to get started.

Feel free to reach out with questions.