Ansible

Ad Hoc Ansible Commands

Getting started with some simple Ansible commands

Ansible Documentation

Ansible Documentation resources

Ansible Inventory and Ansible.cfg

Setting up an Ansible environment

Ansible Playbooks

Getting started with Ansible Playbooks

Building an Ansible lab with Ansible

Building an Ansible lab using Ansible and Libvirt

Common modules with examples

Common Ansible modules with examples

Networking with Ansible

Networking with Ansible

Setting up an Ansible Lab

Basic Ansible lab for RHCE

Variables and Facts

All about Ansible variables and facts

Subsections of Ansible

Ad Hoc Ansible Commands

Building off our lab, we need a playbook that gives instructions for bringing managed nodes to their desired states. Playbooks are scripts written in YAML. There are some things you need to know when working with playbooks:

  • Ad Hoc Commands
  • Modules
  • Module Documentation
  • Ad Hoc commands from bash scripts

Ad Hoc Commands

Ad hoc commands are Ansible tasks you can run against managed hosts without the need for a playbook or script. They are used for bringing nodes to their desired states, verifying playbook results, and verifying that nodes meet any needed criteria/prerequisites. They must be run as the Ansible user (whatever your remote_user directive is set to under [defaults] in ansible.cfg).

Run the user module with the argument name=lisa on all hosts to make sure the user “lisa” exists. If the user doesn’t exist, it will be created on the remote system: ansible all -m user -a "name=lisa"

{command} {host} -m {module} -a {"argument1 argument2 argument3"}

In our lab:

[ansible@control base]$ ansible all -m user -a "name=lisa"
web1 | UNREACHABLE! => {
    "changed": false,
    "msg": "Failed to connect to the host via ssh: ssh: Could not resolve hostname web1: Name or service not known",
    "unreachable": true
}
web2 | UNREACHABLE! => {
    "changed": false,
    "msg": "Failed to connect to the host via ssh: ssh: Could not resolve hostname web2: Name or service not known",
    "unreachable": true
}
ansible1 | CHANGED => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python3"
    },
    "changed": true,
    "comment": "",
    "create_home": true,
    "group": 1001,
    "home": "/home/lisa",
    "name": "lisa",
    "shell": "/bin/bash",
    "state": "present",
    "system": false,
    "uid": 1001
}
ansible2 | CHANGED => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python3"
    },
    "changed": true,
    "comment": "",
    "create_home": true,
    "group": 1001,
    "home": "/home/lisa",
    "name": "lisa",
    "shell": "/bin/bash",
    "state": "present",
    "system": false,
    "uid": 1001
}

This ad hoc command created the user "lisa" on ansible1 and ansible2. If we run the command again, we get "SUCCESS" on the first line instead of "CHANGED", which means the hosts already meet the requirements:

[ansible@control base]$ ansible all -m user -a "name=lisa"
web2 | UNREACHABLE! => {
    "changed": false,
    "msg": "Failed to connect to the host via ssh: ssh: Could not resolve hostname web2: Name or service not known",
    "unreachable": true
}
web1 | UNREACHABLE! => {
    "changed": false,
    "msg": "Failed to connect to the host via ssh: ssh: Could not resolve hostname web1: Name or service not known",
    "unreachable": true
}
ansible2 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python3"
    },
    "append": false,
    "changed": false,
    "comment": "",
    "group": 1001,
    "home": "/home/lisa",
    "move_home": false,
    "name": "lisa",
    "shell": "/bin/bash",
    "state": "present",
    "uid": 1001
}
ansible1 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python3"
    },
    "append": false,
    "changed": false,
    "comment": "",
    "group": 1001,
    "home": "/home/lisa",
    "move_home": false,
    "name": "lisa",
    "shell": "/bin/bash",
    "state": "present",
    "uid": 1001
}

idempotent Regardless of the current condition, the host is brought to the desired state, even if you run the command multiple times.

Run the command id lisa on all managed hosts:

[ansible@control base]$ ansible all -m command -a "id lisa"
web1 | UNREACHABLE! => {
    "changed": false,
    "msg": "Failed to connect to the host via ssh: ssh: Could not resolve hostname web1: Name or service not known",
    "unreachable": true
}
web2 | UNREACHABLE! => {
    "changed": false, module should you use to run the rpm -qa | grep httpd command?


    "msg": "Failed to connect to the host via ssh: ssh: Could not resolve hostname web2: Name or service not known",
    "unreachable": true
}
ansible1 | CHANGED | rc=0 >>
uid=1001(lisa) gid=1001(lisa) groups=1001(lisa)
ansible2 | CHANGED | rc=0 >>
uid=1001(lisa) gid=1001(lisa) groups=1001(lisa)

Here, the command module is used to run a command on the specified hosts, and the output is displayed on screen. Note that this does not show up in the ansible user’s command history on the managed host:

[ansible@ansible1 ~]$ history
    1  history

Remove the user lisa from all managed hosts:

[ansible@control base]$ ansible all -m user -a "name=lisa state=absent"
web2 | UNREACHABLE! => {
    "changed": false,
    "msg": "Failed to connect to the host via ssh: ssh: Could not resolve hostname web2: Name or service not known",
    "unreachable": true
}
web1 | UNREACHABLE! => {
    "changed": false,
    "msg": "Failed to connect to the host via ssh: ssh: Could not resolve hostname web1: Name or service not known",
    "unreachable": true
}
ansible1 | CHANGED => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python3"
    },
    "changed": true,
    "force": false,
    "name": "lisa",
    "remove": false,
    "state": "absent"
}
ansible2 | CHANGED => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python3"
    },
    "changed": true,
    "force": false,
    "name": "lisa",
    "remove": false,
    "state": "absent"
}
[ansible@control base]$ ansible all -m command -a "id lisa"
web1 | UNREACHABLE! => {
    "changed": false,
    "msg": "Failed to connect to the host via ssh: ssh: Could not resolve hostname web1: Name or service not known",
    "unreachable": true
}
web2 | UNREACHABLE! => {
    "changed": false,
    "msg": "Failed to connect to the host via ssh: ssh: Could not resolve hostname web2: Name or service not known",
    "unreachable": true
}
ansible1 | FAILED | rc=1 >>
id: ‘lisa’: no such user
non-zero return code
ansible2 | FAILED | rc=1 >>
id: ‘lisa’: no such user
non-zero return code

You can also use the -u option to specify the user that Ansible uses to run the command. Remember, with no module specified, Ansible uses the command module: ansible all -a "free -m" -u david

Modules

There are more than 3,000 Ansible modules available for a wide variety of tasks and services. The more modules you know, the better you will be at Ansible. They are essentially plug-ins written in Python that can be used in playbooks or ad hoc commands. Make sure to use the module that is most specific to the task you are trying to accomplish.

Important modules

Arbitrary Modules

Limit your use of these; it’s hard to track what has been changed using these modules. Use the more specific idempotent module for the task instead, if you can.

command Runs arbitrary commands (not using the shell). Shell features such as pipes and redirects will not work with this module. This is the default module if no module is specified. You can set a different default module in ansible.cfg using “module_name = module”. A Python script is generated on the managed host and executed. ansible all -m command -a "rpm -qa | grep httpd" (the pipe gets ignored)

Check status of httpd: ansible all -m command -a "systemctl status httpd"

shell Same as above, but using the shell, so pipes and redirects will work. A Python script is generated on the managed host and executed. ansible all -m shell -a "rpm -qa | grep httpd" (the pipe is not ignored)

raw Runs arbitrary commands on top of SSH without using Python. Good for managed hosts that don’t have Python, or for installing Python during setup: ansible -u root -i inventory ansible3 --ask-pass -m raw -a "yum install python3"

Idempotent modules

These are easier to track and guarantee idempotency.

copy Copy files or lines of text to files: ansible all -m copy -a 'content="hello world" dest=/etc/motd'

yum Manage packages on RHEL hosts. You can use the package module to install packages on any Linux distro. Use the yum module if you need specific yum features, and the package module if you need to manage software across different distros.

Install the latest version of nmap: ansible all -m yum -a "name=nmap state=latest"

List httpd details: ansible all -m yum -a "list=httpd"

service Manage the state of systemd and System V services. Make sure to use enabled=yes and state=started to ensure services are started and also enabled at boot. ansible all -m service -a "name=httpd state=started enabled=yes"

ping Checks if managed hosts are in a manageable state. ansible all -m ping

Viewing available modules with ansible-doc

As noted before, there are over 3,000 modules that come with Ansible. These are installed on your system when you install Ansible. View all the modules available like so: ansible-doc -l

Filter to get more specific results:

[ansible@control ~]$ ansible-doc -l | grep package
ansible.builtin.apt                    Manages apt-packages                
ansible.builtin.debconf                Configure a .deb package            
ansible.builtin.dnf                    Manages packages with the `dnf' pack...
ansible.builtin.dpkg_selections        Dpkg package selection selections   
ansible.builtin.package                Generic OS package manager          
ansible.builtin.package_facts          Package information as facts        
ansible.builtin.yum                    Manages packages with the `yum' pack...

Finding details on a specific module: ansible-doc ping

The output shows the module name, maintainer information, available options, related modules, the module author, examples, and return values.

Each module is a Python script on your system that you can view if you want to see what is going on under the hood: > ANSIBLE.BUILTIN.PING (/usr/lib/python3.9/site-packages/ansible/modules/ping.py)

Make sure you read the module’s description for details!

A trivial test module, this module always returns `pong' on successful contact. It does not make sense in playbooks,
        but it is useful from `/usr/bin/ansible' to verify the ability to login and that a usable Python is configured. This
        is NOT ICMP ping, this is just a trivial test module that requires Python on the remote-node. For Windows targets, use
        the [ansible.windows.win_ping] module instead. For Network targets, use the [ansible.netcommon.net_ping] module
        instead.

Note that mandatory options are listed as =option instead of -option.

OPTIONS (= is mandatory):

- data
        Data to return for the `ping' return value.
        If this parameter is set to `crash', the module will cause an exception.
        default: pong
        type: str
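
You can exercise the data option from an ad hoc command (a quick sketch based on the option description above):

ansible all -m ping -a "data=hello" <-- returns "hello" instead of the default "pong"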

And don’t forget to check the “SEE ALSO” section to see if there could be a module that better suits your needs:

SEE ALSO:
      * Module ansible.netcommon.net_ping
      * Module ansible.windows.win_ping

Here are some examples from the raw module doc:

EXAMPLES:

- name: Bootstrap a host without python2 installed
  ansible.builtin.raw: dnf install -y python2 python2-dnf libselinux-python

- name: Run a command that uses non-posix shell-isms (in this example /bin/sh doesn't handle redirection and wildcards together but bash does)
  ansible.builtin.raw: cat < /tmp/*txt
  args:
    executable: /bin/bash

- name: Safely use templated variables. Always use quote filter to avoid injection issues.
  ansible.builtin.raw: "{{ package_mgr|quote }} {{ pkg_flags|quote }} install {{ python|quote }}"

- name: List user accounts on a Windows system
  ansible.builtin.raw: Get-WmiObject -Class Win32_UserAccount

The examples show the playbook code for common use cases for running the module. Use the -s flag to show the playbook snippet only:

[ansible@control ~]$ ansible-doc -s service
- name: Manage services
  service:
      arguments:             # Additional arguments provided on the command line. While using remote hosts with systemd this setting will be ignored.
      enabled:               # Whether the service should start on boot. *At least one of state and enabled are required.*
      name:                  # (required) Name of the service.
      pattern:               # If the service does not respond to the status command, name a substring to look for as would be found in the output of the `ps'
                             # command as a stand-in for a status result. If the string is found, the service will be assumed to
                             # be started. While using remote hosts with systemd this setting will be ignored.
      runlevel:              # For OpenRC init scripts (e.g. Gentoo) only. The runlevel that this service belongs to. While using remote hosts with systemd
                             # this setting will be ignored.
      sleep:                 # If the service is being `restarted' then sleep this many seconds between the stop and start command. This helps to work around
                             # badly-behaving init scripts that exit immediately after signaling a process to stop. Not all
                             # service managers support sleep, i.e when using systemd this setting will be ignored.
      state:                 # `started'/`stopped' are idempotent actions that will not run commands unless necessary. `restarted' will always bounce the
                             # service. `reloaded' will always reload. *At least one of state and enabled are required.* Note
                             # that reloaded will start the service if it is not already started, even if your chosen init
                             # system wouldn't normally.
      use:                   # The service module actually uses system specific modules, normally through auto detection, this setting can force a specific
                             # module. Normally it uses the value of the 'ansible_service_mgr' fact and falls back to the old
                             # 'service' module when none matching is found.

The official Ansible documentation will also be available during the RHCE exam: https://docs.ansible.com/

The docs will also show you how to install additional module collections. To install the posix collection: [ansible@control base]$ ansible-galaxy collection install ansible.posix
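
Once a collection is installed, its modules can be called by their fully qualified name. For example, a sketch of an ad hoc call to the firewalld module from that collection (assuming privilege escalation is already configured in ansible.cfg):

ansible all -m ansible.posix.firewalld -a "service=http permanent=true immediate=true state=enabled"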

Ad hoc commands in Scripts

Follow normal bash scripting guidelines to run ansible commands in a script:

[ansible@control base]$ vim httpd-ansible.sh

Let’s set up a script that installs and starts/enables httpd, creates a user called “anna”, and copies the ansible control node’s /etc/hosts file to /tmp/ on the managed nodes:

#!/bin/bash

ansible all -m yum -a "name=httpd state=latest"
ansible all -m service -a "name=httpd state=started enabled=yes"
ansible all -m user -a "name=anna"
ansible all -m copy -a "src=/etc/hosts dest=/tmp/hosts"
[ansible@control base]$ chmod +x httpd-ansible.sh
[ansible@control base]$ ./httpd-ansible.sh 
web2 | UNREACHABLE! => {
    "changed": false,
    "msg": "Failed to connect to the host via ssh: ssh: Could not resolve hostname web2: Name or service not known",
    "unreachable": true
}
web1 | UNREACHABLE! => {
    "changed": false,
    "msg": "Failed to connect to the host via ssh: ssh: Could not resolve hostname web1: Name or service not known",
    "unreachable": true
}
ansible1 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python3"
    },
    "changed": false,
    "msg": "Nothing to do",
    "rc": 0,
    "results": []
}
ansible2 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python3"
    },
    "changed": false,
    "msg": "Nothing to do",
    "rc": 0,
    "results": []
}
... <-- Results truncated

And from the ansible1 node we can verify:

[ansible@ansible1 ~]$ cat /etc/passwd | grep anna
anna:x:1001:1001::/home/anna:/bin/bash
[ansible@ansible1 ~]$ cat /tmp/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.124.201 ansible1
192.168.124.202 ansible2

View a file from a managed node: ansible ansible1 -a "cat /somefile.txt"

Ansible Inventory and Ansible.cfg

Ansible projects

For small companies, you can use a single Ansible configuration. But for larger ones, it’s a good idea to use different project directories. A project directory contains everything you need to work on a single project. Including:

  • playbooks
  • variable files
  • task files
  • inventory files
  • ansible.cfg

playbook An Ansible script written in YAML that enforces the desired configuration on managed hosts.

Inventory

A file that identifies the hosts Ansible has to manage. You can also use it to group hosts and specify host variables. Each project should have its own inventory file. /etc/ansible/hosts can be used as a system-wide inventory and is the default if no inventory file is specified. (That file also has some basic inventory formatting info if you forget.) Ansible targets localhost if no hosts are found in the inventory file. In large environments, it’s a good idea to store inventory files in their own project folders.

localhost is not defined in the inventory. It is an implicit host that refers to the Ansible control machine. Using localhost can be a good way to verify the accessibility of services on managed hosts.

Listing hosts

List hosts by IP address or hostname. You can also list a range of hosts in an inventory file, such as web-server[1:10].example.com

ansible1:2222 <-- specify the ssh port if the host is not using the default port 22
ansible2
10.0.10.55
web-server[1:10].example.com

Listing groups

You can list groups and groups of groups. Here, the groups web and db are included as children of the group "servers" via [servers:children]:

ansible1
ansible2
10.0.10.55
web-server[1:10].example.com

[web]
web-server[1:10].example.com

[db]
db1
db2

[servers:children] <-- servers is the group of groups and children is the parameter that specifies child groups
web
db

There are 3 general approaches to using groups:

Functional groups Address a specific group of hosts according to use, such as web servers or database servers.

Regional host groups Used when working with region-oriented infrastructure, such as USA or Canada.

Staging host groups Used to address different hosts according to the staging phase of the current environment, such as testing, development, or production.

Undefined host groups are called implicit host groups. These are all, ungrouped, and localhost. The names make their meaning obvious.

Inventory commands:

To view the inventory, specify the inventory file, such as ~/base/inventory, on the command line. You can name the inventory file anything you want. You can also set the default in the ansible.cfg file.

View the current inventory: ansible -i inventory <pattern> --list-hosts

List inventory hosts in JSON format: ansible-inventory -i inventory --list

Display overview of hosts as a graph: ansible-inventory -i inventory --graph

In our lab example:

[ansible@control base]$ pwd
/home/ansible/base

[ansible@control base]$ ls
inventory

[ansible@control base]$ cat inventory
ansible1
ansible2

[web]
web1
web2

[ansible@control base]$ ansible-inventory -i inventory --graph
@all:
  |--@ungrouped:
  |  |--ansible1
  |  |--ansible2
  |--@web:
  |  |--web1
  |  |--web2

[ansible@control base]$ ansible-inventory -i inventory --list
{
    "_meta": {
        "hostvars": {}
    },
    "all": {
        "children": [
            "ungrouped",
            "web"
        ]
    },
    "ungrouped": {
        "hosts": [
            "ansible1",
            "ansible2"
        ]
    },
    "web": {
        "hosts": [
            "web1",
            "web2"
        ]
    }
}

[ansible@control base]$ ansible -i inventory all --list-hosts
  hosts (4):
    ansible1
    ansible2
    web1
    web2
    
[ansible@control base]$ ansible -i inventory ungrouped --list-hosts
  hosts (2):
    ansible1
    ansible2

Host variables

In older versions of Ansible you could define variables for hosts directly in the inventory file. This is no longer used. Example:

[groupname:vars]
ansible=ansible_user

Variables are now set using host_vars and group_vars directories instead.
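
A minimal sketch of that layout (the file and variable names here are just placeholders):

base/
├── inventory
├── group_vars/
│   └── web.yml       <-- variables that apply to all hosts in group [web]
└── host_vars/
    └── ansible1.yml  <-- variables that apply only to host ansible1

A host_vars/ansible1.yml file is plain YAML, for example:

http_port: 8080
ansible_host: 192.168.124.201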

Dynamic inventory scripts

A script is used to detect inventory hosts so that you do not have to manually enter them. This is good for larger environments. You can find community provided dynamic inventory scripts that come with an .ini file that provides information on how to connect to a resource.

Inventory scripts must support the --list and --host options, and the output must be JSON formatted. Here is an example from sandervanvugt that generates an inventory from /etc/hosts:

[ansible@control base]$ cat inventory-helper.py

#!/usr/bin/python

from subprocess import Popen, PIPE
import sys

try:
    import json
except ImportError:
    import simplejson as json

result = {}
result['all'] = {}

# Read host entries from the local hosts database (getent hosts)
pipe = Popen(['getent', 'hosts'], stdout=PIPE, universal_newlines=True)

result['all']['hosts'] = []
for line in pipe.stdout.readlines():
    s = line.split()
    result['all']['hosts'] = result['all']['hosts'] + s

result['all']['vars'] = {}

# Inventory scripts must support --list and --host <host>
if len(sys.argv) == 2 and sys.argv[1] == '--list':
    print(json.dumps(result))
elif len(sys.argv) == 3 and sys.argv[1] == '--host':
    print(json.dumps({}))
else:
    print("Requires an argument, please use --list or --host <host>")

When run on our sample lab:

[ansible@control base]$ sudo python3 ./inventory-helper.py
Requires an argument, please use --list or --host <host>

[ansible@control base]$ sudo python3 ./inventory-helper.py --list
{"all": {"hosts": ["127.0.0.1", "localhost", "localhost.localdomain", "localhost4", "localhost4.localdomain4", "127.0.0.1", "localhost", "localhost.localdomain", "localhost6", "localhost6.localdomain6", "192.168.124.201", "ansible1", "192.168.124.202", "ansible2"], "vars": {}}}

To use a dynamic inventory script:

[ansible@control base]$ chmod u+x inventory-helper.py 
[ansible@control base]$ sudo ansible -i inventory-helper.py all --list-hosts
[WARNING]: A duplicate localhost-like entry was found (localhost). First found localhost was 127.0.0.1
  hosts (11):
    127.0.0.1
    localhost
    localhost.localdomain
    localhost4
    localhost4.localdomain4
    localhost6
    localhost6.localdomain6
    192.168.124.201
    ansible1
    192.168.124.202
    ansible2

Multiple inventory files

Put all inventory files in a directory and specify the directory as the inventory to use. For dynamic inventory scripts, you also need to set the execution bit on the script file.
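
A quick sketch of what that looks like (directory and file names are placeholders):

[ansible@control base]$ ls inventories/
inventory-helper.py  static-inventory
[ansible@control base]$ ansible -i inventories/ all --list-hosts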

ansible.cfg

You can store this in a project’s directory, in a user’s home directory (in the case that multiple users want their own Ansible configuration), or in /etc/ansible if the configuration will be the same for every user and every project. You can also specify these settings in Ansible playbooks. Settings in a playbook take precedence over the .cfg file.

ansible.cfg precedence (Ansible uses the first one it finds and ignores the rest.)

  1. ANSIBLE_CONFIG environment variable
  2. ansible.cfg in current directory
  3. ~/.ansible.cfg
  4. /etc/ansible/ansible.cfg
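
To verify which configuration file Ansible actually picked up, check the "config file" line in the output of ansible --version.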

Generate an example config file in the current directory. All directives are commented out by default: [ansible@control base]$ ansible-config init --disabled > ansible.cfg

Include existing plugins in the file: ansible-config init --disabled -t all > ansible.cfg

This generates an extremely large file, so I’ll just show Van Vugt’s example in .ini format:

[defaults] <-- General information
remote_user = ansible <--Required
host_key_checking = false <-- Disable SSH host key validity check
inventory = inventory

[privilege_escalation] <-- Define how ansible user requires admin rights to connect to hosts
become = True <-- Escalation required
become_method = sudo
become_user = root <-- Escalated user
become_ask_pass = False <-- Do not ask for escalation password

Privilege escalation parameters can be specified in ansible.cfg, playbooks, and on the command line.
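
For example, the equivalent flags on the command line look something like this (a sketch; adjust the users to your setup):

ansible-playbook playbook.yaml -u ansible -b --become-user root -K <-- -b enables become, -K prompts for the become password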

Ansible Playbooks

  • Exploring playbooks
  • YAML
  • Managing Multiplay Playbooks

Lets create our first playbook:

[ansible@control base]$ vim playbook.yaml

---
- name: install start and enable httpd <-- play is at the highest level
  hosts: all
  tasks: <-- play has a list of tasks
  - name: install package <-- name of task 1
    yum: <-- module
      name: httpd <-- argument 1
      state: installed <-- argument 2
  - name: start and enable service <-- task 2
    service:
      name: httpd
      state: started
      enabled: yes

There are three dashes at the top of the playbook, and sometimes you’ll find three dots at the end of a playbook. These make it easy to isolate the playbook and embed the playbook code into other projects.

Playbooks are written in YAML format and saved as either .yml or .yaml. YAML specifies objects as key-value pairs (dictionaries). Key-value pairs can be written as either key: value (preferred) or key=value, and dashes specify lists of embedded objects.

A playbook is a collection of one or more plays. Each play targets specific hosts and lists tasks to perform on those hosts. There is one play here, with the name “install start and enable httpd”. You specify the hosts to target at the top of the play, not in the individual tasks.

Each task is identified by “- name” (not required, but recommended for troubleshooting and identifying tasks). The module is then listed underneath, with its arguments and their values below that.

Indentation is important here. It identifies the relationships between different elements. Data elements at the same level must have the same indentation. And items that are children or properties of another element must be indented more than their parent elements.

Indentation is created using spaces. Two spaces are usually used, but this is not required. You cannot use tabs for indentation.

You can also edit your .vimrc file to help with indentation when it detects that you are working with a YAML file: vim ~/.vimrc

autocmd FileType yaml setlocal ai ts=2 sw=2 et

Required elements:

  • hosts - name of host(s) to perform play on
  • name - name of the play
  • tasks - one or more tasks to execute for this play

To run a playbook:

[ansible@control base]$ ansible-playbook playbook.yaml 

# Name of the play
PLAY [install start and enable httpd] ***********************************************

# Overview of tasks and the hosts it was successful on
TASK [Gathering Facts] **************************************************************
fatal: [web1]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: ssh: Could not resolve hostname web1: Name or service not known", "unreachable": true}
fatal: [web2]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: ssh: Could not resolve hostname web2: Name or service not known", "unreachable": true}
ok: [ansible1]
ok: [ansible2]

TASK [install package] **************************************************************
ok: [ansible1]
ok: [ansible2]

TASK [start and enable service] *****************************************************
ok: [ansible2]
ok: [ansible1]

# overview of the status of each task
PLAY RECAP **************************************************************************
ansible1                   : ok=3    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   <-- changed=0 means no modifications were needed; a nonzero changed count means tasks succeeded and modified the node
ansible2                   : ok=3    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
web1                       : ok=0    changed=0    unreachable=1    failed=0    skipped=0    rescued=0    ignored=0   
web2                       : ok=0    changed=0    unreachable=1    failed=0    skipped=0    rescued=0    ignored=0   

Before running tasks, the ansible-playbook command gathers facts (current configuration and settings) about managed nodes.

How to undo playbook modifications

Ansible does not have a built-in feature to undo a playbook that you ran. To undo changes, you need to write another playbook that defines the new desired state of the host.
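
For example, a minimal sketch of a playbook that would reverse the httpd play shown earlier (same package and hosts as that example):

---
- name: undo install start and enable httpd
  hosts: all
  tasks:
  - name: stop and disable service
    service:
      name: httpd
      state: stopped
      enabled: no
  - name: remove package
    yum:
      name: httpd
      state: absent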

Working with YAML

Key value pairs can also be listed as:

tasks:
 - name: install vsftpd
   yum: name=vsftpd
 - name: enable vsftpd
   service: name=vsftpd enabled=true
 - name: create readme file

But it is better to list them as follows for readability:

    copy:
      content: "welcome to the FTP server\n"
      dest: /var/ftp/pub/README
      force: no
      mode: 0444

Some modules support multiple values for a single key:

---
- name: install multiple packages
  hosts: all
  tasks:
  - name: install packages
    yum:
      name: <-- key with multiple values
      - nmap 
      - httpd
      - vsftpd
      state: latest <-- will install and/or update to latest version

YAML Strings

Valid formats for a string in YAML:

  • super string
  • "super string"
  • 'super string'

When inserting text into a file, you may have to deal with newlines. You can preserve newline characters with a pipe |, such as:

    - name: Using | to preserve newlines
      copy:
        dest: /tmp/rendezvous-with-death.txt
        content: |
          I have a rendezvous with Death
          At some disputed barricade,
          When Spring comes back with rustling shade
          And apple-blossoms fill the air—          

Output:

I have a rendezvous with Death
At some disputed barricade,
When Spring comes back with rustling shade
And apple-blossoms fill the air—

Or fold the newlines into spaces with a greater-than sign >:

    - name: Using > to fold lines into one
      copy:
        dest: /tmp/rendezvous-with-death.txt
        content: >
          I have a rendezvous with Death
          At some disputed barricade,
          When Spring comes back with rustling shade
          And apple-blossoms fill the air—          

Output:

I have a rendezvous with Death At some disputed barricade, When Spring comes back with rustling shade And apple-blossoms fill the air—

Checking syntax with --syntax-check

You can use the --syntax-check flag to check a playbook for errors. The ansible-playbook command checks syntax by default, though, and will throw the same error messages. The syntax check stops after detecting a single error, so you will need to fix the first error in order to see errors further in the file. I’ve added a tab in front of the hosts key to demonstrate:

[ansible@control base]$ cat playbook.yaml 
---
- name: install start and enable httpd
    hosts: all
  tasks:
  - name: install package
    yum:
      name: httpd
      state: installed
  - name: start and enable service
    service:
      name: httpd
      state: started
      enabled: yes
      
[ansible@control base]$ ansible-playbook --syntax-check playbook.yaml 
ERROR! We were unable to read either as JSON nor YAML, these are the errors we got from each:
JSON: Expecting value: line 1 column 1 (char 0)

Syntax Error while loading YAML.
  mapping values are not allowed in this context

The error appears to be in '/home/ansible/base/playbook.yaml': line 3, column 10, but may
be elsewhere in the file depending on the exact syntax problem.

The offending line appears to be:

- name: install start and enable httpd
    hosts: all
         ^ here

And here it is again, after fixing the syntax error:

[ansible@control base]$ vim playbook.yaml 
[ansible@control base]$ cat playbook.yaml 
---
- name: install start and enable httpd
  hosts: all
  tasks:
  - name: install package
    yum:
      name: httpd
      state: installed
  - name: start and enable service
    service:
      name: httpd
      state: started
      enabled: yes
[ansible@control base]$ ansible-playbook --syntax-check playbook.yaml 

playbook: playbook.yaml

Doing a dry run

Use the -C flag to perform a dry run. This will check the success status of all of the tasks without actually making any changes. ansible-playbook -C playbook.yaml
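
You can also combine the dry run with --diff to see what would change inside managed files (a useful combination, though not part of the original example): ansible-playbook -C --diff playbook.yaml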

Multiple play playbooks

Using multiple plays in a playbook lets you set up one group of servers with one configuration and another group with a different configuration. Each play has its own list of hosts to address.

You can also specify different parameters in each play such as become: or the remote_user: parameters.

Try to keep playbooks small, as bigger playbooks are harder to troubleshoot. You can pull other playbooks into a top-level playbook (with import_playbook in current Ansible versions), as shown below. Besides easier troubleshooting, smaller playbooks can be reused in a flexible way to perform a wider range of tasks.
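
A sketch of such a top-level playbook (the file names are placeholders):

---
- import_playbook: webservers.yaml
- import_playbook: dbservers.yaml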

Here is an example of a playbook with two plays:

---
- name: install start and enable httpd   <--- play 1
  hosts: all
  tasks:
  - name: install package
    yum:
      name: httpd
      state: installed
  - name: start and enable service
    service:
      name: httpd
      state: started
      enabled: yes

- name: test httpd accessibility <-- play 2
  hosts: localhost
  tasks:
  - name: test httpd access
    uri:
      url: http://ansible1

Verbose output options

You can increase the verbosity of the output to an amount hitherto undreamt of. This can be useful for troubleshooting.

Verbose output of the playbook above showing task results: [ansible@control base]$ ansible-playbook -v playbook.yaml

Verbose output of the playbook above showing task results and task configuration: [ansible@control base]$ ansible-playbook -vv playbook.yaml

Verbose output of the playbook above showing task results, task configuration, and info about connections to managed hosts: [ansible@control base]$ ansible-playbook -vvv playbook.yaml

Verbose output of the playbook above showing task results, task configuration, and info about connections to managed hosts, plug-ins, user accounts, and executed scripts: [ansible@control base]$ ansible-playbook -vvvv playbook.yaml

Lab playbook

Now we know enough to create and enable a simple web server. Here is a playbook example. Just make sure to install the posix collection first, or you won’t be able to use the firewalld module: [ansible@control base]$ ansible-galaxy collection install ansible.posix

[ansible@control base]$ cat playbook.yaml 
---
- name: Enable web server 
  hosts: ansible1
  tasks:
  - name: install package
    yum:
      name: 
        - httpd
        - firewalld
      state: installed
  - name: Create welcome page
    copy:
      content: "Welcome to the webserver!\n"
      dest: /var/www/html/index.html
  - name: start and enable service
    service:
      name: httpd
      state: started
      enabled: yes
  - name: enable firewall
    service: 
      name: firewalld
      state: started
      enabled: true
  - name: Open service in firewall
    firewalld:
      service: http
      permanent: true
      state: enabled
      immediate: yes

- name: test webserver accessibility
  hosts: localhost
  become: no
  tasks:
  - name: test webserver access
    uri:
      url: http://ansible1
      return_content: yes <-- Return the body of the response as a content key in the dictionary result
      status_code: 200 <-- The expected HTTP status code that signifies a successful request

After running this playbook, you should be able to reach the webserver at http://ansible1

With return content and status code

ok: [localhost] => {"accept_ranges": "bytes", "changed": false, "connection": "close", "content": "Welcome to the webserver!\n", "content_length": "26", "content_type": "text/html; charset=UTF-8", "cookies": {}, "cookies_string": "", "date": "Thu, 10 Apr 2025 12:12:37 GMT", "elapsed": 0, "etag": "\"1a-6326b4cfb4042\"", "last_modified": "Thu, 10 Apr 2025 11:58:14 GMT", "msg": "OK (26 bytes)", "redirected": false, "server": "Apache/2.4.62 (Red Hat Enterprise Linux)", "status": 200, "url": "http://ansible1"}

Adds this: "content": "Welcome to the webserver!\n" and this: "status": 200, "url": "http://ansible1" to the verbose output for that task.

Building an Ansible lab with Ansible

When I started studying for the RHCE, the study guide had me manually set up virtual machines for the Ansible lab environment. I thought: why not start my automation journey right and automate them using Vagrant?

I use Libvirt to manage KVM/QEMU virtual machines and the Virt-Manager app to set them up. I figured I could use Vagrant to automatically build this lab from a file, and I got part of the way there. I ended up with this Vagrantfile:

Vagrant.configure("2") do |config|
  config.vm.box = "almalinux/9"

  config.vm.provider :libvirt do |libvirt|
    libvirt.uri = "qemu:///system"
    libvirt.cpus = 2
    libvirt.memory = 2048
  end

   config.vm.define "control" do |control|
    control.vm.network "private_network", ip: "192.168.124.200"
    control.vm.hostname = "control.example.com"
  end

  config.vm.define "ansible1" do |ansible1|
    ansible1.vm.network "private_network", ip: "192.168.124.201"
    ansible1.vm.hostname = "ansible1.example.com"

  end

  config.vm.define "ansible2" do |ansible2|
    ansible2.vm.network "private_network", ip: "192.168.124.202"
    ansible2.vm.hostname = "ansible2.example.com"
  end

end

I could run this Vagrantfile and build and destroy the lab in seconds. But there was a problem: the Libvirt plugin, or Vagrant itself (I’m not sure which), kept me from doing a couple of important things.

First, I could not specify the initial disk creation size. I could add additional disks of varying sizes, but if I wanted to change the size of the first disk, I would have to go back in after the fact and expand it manually.

Second, the Libvirt plugin’s networking settings were a bit confusing. When you add the private network option as seen in the Vagrantfile, it is added as a secondary connection, and everything is routed through a different public connection.

I couldn’t get the VMs to run using the public connection for whatever reason, and it seems the only workaround was to make DHCP reservations for the guests’ MAC addresses, which gave me even more problems to solve. But I won’t go there.

So why not get my feet wet and learn how to deploy VMs with Ansible? This way, I get the granularity and control that Ansible gives me, some extra practice with Ansible, and I don’t have to use software with just enough abstraction to get in the way.

The guide I followed to set this up can be found on Red Hat’s blog here. It was pretty easy to set up, all things considered.

I’ll rehash the steps here:

  1. Download a cloud image
  2. Customize the image
  3. Install and start a VM
  4. Access the VM

Creating the role

Create the project directory mkdir -p kvmlab/roles && cd kvmlab/roles

Initialize the role ansible-galaxy role init kvm_provision

Switch into the role directory cd kvm_provision/

Remove unused directories rm -r files handlers vars

Define variables

Add default variables to main.yml cd defaults/ && vim main.yml

---
# defaults file for kvm_provision
base_image_name: AlmaLinux-9-GenericCloud-9.5-20241120.x86_64.qcow2
base_image_url: https://repo.almalinux.org/almalinux/9/cloud/x86_64/images/{{ base_image_name }}
base_image_sha: abddf01589d46c841f718cec239392924a03b34c4fe84929af5d543c50e37e37
libvirt_pool_dir: "/var/lib/libvirt/images"
vm_name: f34-dev
vm_vcpus: 2
vm_ram_mb: 2048
vm_net: default
vm_root_pass: test123
cleanup_tmp: no
ssh_key: /root/.ssh/id_rsa.pub
# Added option to configure ip address
ip_addr: 192.168.124.250
gw_addr: 192.168.124.1
# Added option to configure disk size
disk_size: 20

Defining a VM template

The community.libvirt.virt module is used to provision a KVM VM. This module uses a VM definition in XML format with libvirt syntax. You can dump the definition of an existing VM and then convert it to a template, or you can just use this:

cd templates/ && vim vm-template.xml.j2

<domain type='kvm'>
  <name>{{ vm_name }}</name>
  <memory unit='MiB'>{{ vm_ram_mb }}</memory>
  <vcpu placement='static'>{{ vm_vcpus }}</vcpu>
  <os>
    <type arch='x86_64' machine='pc-q35-5.2'>hvm</type>
    <boot dev='hd'/>
  </os>
  <cpu mode='host-model' check='none'/>
  <devices>
    <emulator>/usr/bin/qemu-system-x86_64</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='{{ libvirt_pool_dir }}/{{ vm_name }}.qcow2'/>
      <target dev='vda' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
       <!-- Added: Specify the disk size using a variable -->
      <size unit='GiB'>{{ disk_size }}</size>
    </disk>
    <interface type='network'>
      <source network='{{ vm_net }}'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
    </interface>
    <channel type='unix'>
      <target type='virtio' name='org.qemu.guest_agent.0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <channel type='spicevmc'>
      <target type='virtio' name='com.redhat.spice.0'/>
      <address type='virtio-serial' controller='0' bus='0' port='2'/>
    </channel>
    <input type='tablet' bus='usb'>
      <address type='usb' bus='0' port='1'/>
    </input>
    <input type='mouse' bus='ps2'/>
    <input type='keyboard' bus='ps2'/>
    <graphics type='spice' autoport='yes'>
      <listen type='address'/>
      <image compression='off'/>
    </graphics>
    <video>
      <model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='1' primary='yes'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
    </video>
    <memballoon model='virtio'>
      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
    </memballoon>
    <rng model='virtio'>
      <backend model='random'>/dev/urandom</backend>
      <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
    </rng>
  </devices>
</domain>

The template uses some of the variables from earlier. This gives you the flexibility to change things by just changing the variables.

Define tasks for the role to perform

cd ../tasks/ && vim main.yml

---
# tasks file for kvm_provision

# ensure the required package dependencies `guestfs-tools` and `python3-libvirt` are installed. This role requires these packages to connect to `libvirt` and to customize the virtual image in a later step. These package names work on Fedora Linux. If you're using RHEL 8 or CentOS, use `libguestfs-tools` instead of `guestfs-tools`. For other distributions, adjust accordingly.

- name: Ensure requirements in place
  package:
    name:
      - guestfs-tools
      - python3-libvirt
    state: present
  become: yes

# obtain a list of existing VMs so that you don't overwrite an existing VM on accident. uses the `virt` module from the collection `community.libvirt`, which interacts with a running instance of KVM with `libvirt`. It obtains the list of VMs by specifying the parameter `command: list_vms` and saves the results in a variable `existing_vms`. `changed_when: no` for this task to ensure that it's not marked as changed in the playbook results. This task doesn't make any change in the machine; it only checks the existing VMs. This is a good practice when developing Ansible automation to prevent false reports of changes.
- name: Get VMs list
  community.libvirt.virt:
    command: list_vms
  register: existing_vms
  changed_when: no

#execute only when the VM name the user provides doesn't exist. And uses the module `get_url` to download the base cloud image into the `/tmp` directory
- name: Create VM if not exists
  block:
  - name: Download base image
    get_url:
      url: "{{ base_image_url }}"
      dest: "/tmp/{{ base_image_name }}"
      checksum: "sha256:{{ base_image_sha }}"
      
# copy the file to libvirt's pool directory so we don't edit the original, which can be used to provision other VMs later
  - name: Copy base image to libvirt directory
    copy:
      dest: "{{ libvirt_pool_dir }}/{{ vm_name }}.qcow2"
      src: "/tmp/{{ base_image_name }}"
      force: no
      remote_src: yes 
      mode: 0660
    register: copy_results
# Resize the VM disk
  - name: Resize VM disk
    command: qemu-img resize "{{ libvirt_pool_dir }}/{{ vm_name }}.qcow2" "{{ disk_size }}G"
    when: copy_results is changed

# uses the command module to run virt-customize to customize the image
# Added option to configure an IP address via --firstboot-command
  - name: Configure the image
    command: |
      virt-customize -a {{ libvirt_pool_dir }}/{{ vm_name }}.qcow2 \
      --hostname {{ vm_name }} \
      --root-password password:{{ vm_root_pass }} \
      --ssh-inject 'root:file:{{ ssh_key }}' \
      --uninstall cloud-init --selinux-relabel \
      --firstboot-command "nmcli c m eth0 con-name eth0 ip4 {{ ip_addr }}/24 gw4 {{ gw_addr }} ipv4.method manual && nmcli c d eth0 && nmcli c u eth0"
    when: copy_results is changed

  - name: Define vm
    community.libvirt.virt:
      command: define
      xml: "{{ lookup('template', 'vm-template.xml.j2') }}"
    when: "vm_name not in existing_vms.list_vms"

- name: Ensure VM is started
  community.libvirt.virt:
    name: "{{ vm_name }}"
    state: running
  register: vm_start_results
  until: "vm_start_results is success"
  retries: 15
  delay: 2

- name: Ensure temporary file is deleted
  file:
    path: "/tmp/{{ base_image_name }}"
    state: absent
  when: cleanup_tmp | bool

Changed my user to own the libvirt directory: chown -R david:david /var/lib/libvirt/images
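
The role itself is called from a small playbook at the project root; the Red Hat guide names it kvm_provision.yaml. A minimal sketch (the vm variable is mapped onto the role's vm_name, matching the -e vm=... usage below):

---
- name: Deploy VM based on a cloud image
  hosts: localhost
  gather_facts: yes
  become: yes
  vars:
    vm: lab-vm <-- overridden on the command line with -e vm=<name>
  tasks:
  - name: Provision VM with the kvm_provision role
    include_role:
      name: kvm_provision
    vars:
      vm_name: "{{ vm }}"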

Create a VM with a new name: ansible-playbook -K kvm_provision.yaml -e vm=ansible1

Alternatively, virt-customize can add the connection instead of modifying it: --run-command 'nmcli c a type Ethernet ifname eth0 con-name eth0 ip4 192.168.124.200 gw4 192.168.124.1'

Inside the VM, the root partition can then be grown to use the resized disk:

parted /dev/vda resizepart 4 100%
Warning: Partition /dev/vda4 is being used. Are you sure you want to continue?
Yes/No? y
Information: You may need to update /etc/fstab.

lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
vda    252:0    0   20G  0 disk
├─vda2 252:2    0  200M  0 part /boot/efi
├─vda3 252:3    0    1G  0 part /boot
└─vda4 252:4    0  8.8G  0 part /

Common modules with examples

uri: Interacts with basic HTTP and HTTPS web services. (Verify connectivity to a web server.)

Test httpd accessibility:

uri:
  url: http://ansible1

Show result of the command while running the playbook:

uri:
  url: http://ansible1
  return_content: yes

Show the status code that signifies the success of the request:

uri:
  url: http://ansible1
  status_code: 200

debug: Prints statements during execution. Used for debugging variables or expressions without stopping a playbook.

Print out the value of the ansible_facts variable:

debug:
  var: ansible_facts
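
You can also print a formatted message that references facts with the msg option (a small sketch using common facts):

debug:
  msg: "Host {{ ansible_hostname }} runs {{ ansible_distribution }}"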

Networking with Ansible

3 modules for managing the networking on nodes:

  • service
  • daemon
  • system settings

Setting up an Ansible Lab

Requirements for Ansible

  • Python 3 on control node and managed nodes
  • sudo ssh access to managed nodes
  • Ansible installed on the Control node

Lab Setup

For this lab, we will need three virtual machines running RHEL 9: one control node and two managed nodes. Use IP addresses based on your lab network environment:

Hostname               Pretty hostname   IP address        RAM      Storage   vCPUs
control.example.com    control           192.168.124.200   2048MB   20G       2
ansible1.example.com   ansible1          192.168.124.201   2048MB   20G       2
ansible2.example.com   ansible2          192.168.124.202   2048MB   20G       2
I have set these VMs up in virt-manager, then cloned them so I can rebuild the lab later. You can automate this using Vagrant or Ansible, but that will come later. Ignore the Win10 VM; it’s a necessary evil.

Setting hostnames and verifying dependencies

Set a hostname on all three machines:

[root@localhost ~]# hostnamectl set-hostname control.example.com
[root@localhost ~]# hostnamectl set-hostname --pretty control

Install Ansible on Control Node

[root@localhost ~]# dnf -y install ansible-core
...

Verify python3 is installed:

[root@localhost ~]# python --version
Python 3.9.18

Configure Ansible user and SSH

Add a user for Ansible. This can be any username you like, but we will use “ansible” as our lab user. The ansible user also needs sudo access; we will make it passwordless for convenience. You will need to do this on the control node and both managed nodes:

[root@control ~]# useradd ansible
[root@control ~]# visudo

Add this line to the file that comes up: ansible ALL=(ALL) NOPASSWD: ALL

Configure a password for the ansible user:

[root@control ~]# passwd ansible
Changing password for user ansible.
New password: 
BAD PASSWORD: The password is shorter than 8 characters
Retype new password: 
passwd: all authentication tokens updated successfully.

On the control node only: add the hostnames of the managed nodes to /etc/hosts:

echo "192.168.124.201 ansible1 >> /etc/hosts
> ^C
[root@control ~]# echo "192.168.124.201 ansible1" >> /etc/hosts
[root@control ~]# echo "192.168.124.202 ansible2" >> /etc/hosts
[root@control ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.124.201 ansible1
192.168.124.202 ansible2

Log in to the ansible user account for the remaining steps. Note that Ansible assumes passwordless (key-based) SSH login. If you insist on using passwords, add the --ask-pass (-k) flag to your Ansible commands. (This may require the sshpass package to work.)

On the control node only: Generate an ssh key to send to the hosts for passwordless Login:

[ansible@control ~]$ ssh-keygen -N "" -q
Enter file in which to save the key (/home/ansible/.ssh/id_rsa): 

Copy the public key to the nodes and test passwordless login to the managed nodes:

[ansible@control ~]$ ssh-copy-id ansible@ansible1
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/ansible/.ssh/id_rsa.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
ansible@ansible1's password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'ansible@ansible1'"
and check to make sure that only the key(s) you wanted were added.

[ansible@control ~]$ ssh-copy-id ansible@ansible2
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/ansible/.ssh/id_rsa.pub"
The authenticity of host 'ansible2 (192.168.124.202)' can't be established.
ED25519 key fingerprint is SHA256:r47sLc/WzVA4W4ifKk6w1gTnxB3Iim8K2K0KB82X9yo.
This key is not known by any other names
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
ansible@ansible2's password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'ansible@ansible2'"
and check to make sure that only the key(s) you wanted were added.

[ansible@control ~]$ ssh ansible1
Register this system with Red Hat Insights: insights-client --register
Create an account or view all your systems at https://red.ht/insights-dashboard
Last failed login: Thu Apr  3 05:34:20 MST 2025 from 192.168.124.200 on ssh:notty
There was 1 failed login attempt since the last successful login.
[ansible@ansible1 ~]$ 
logout
Connection to ansible1 closed.
[ansible@control ~]$ ssh ansible2
Register this system with Red Hat Insights: insights-client --register
Create an account or view all your systems at https://red.ht/insights-dashboard
[ansible@ansible2 ~]$ 
logout
Connection to ansible2 closed.

Install git so we can clone the lab files from the RHCE guide: sudo dnf -y install git

[ansible@control base]$ cd

[ansible@control ~]$ git clone https://github.com/sandervanvugt/rhce8-book 
Cloning into 'rhce8-book'...
remote: Enumerating objects: 283, done.
remote: Counting objects: 100% (283/283), done.
remote: Compressing objects: 100% (233/233), done.
remote: Total 283 (delta 27), reused 278 (delta 24), pack-reused 0 (from 0)
Receiving objects: 100% (283/283), 62.79 KiB | 357.00 KiB/s, done.
Resolving deltas: 100% (27/27), done.

Variables and Facts

  • Using and working with variables
  • Ansible Facts
  • Using Vault
  • Capture command output using register

Three types of variables:

  • Fact
  • Variable
  • Magic Variable

Variables make Ansible really flexible, especially when used in combination with conditionals. These are defined at the discretion of the user:

---
- name: create a user using a variable
  hosts: ansible1
  vars:
    users: lisa <-- default value for this play
  tasks:
    - name: create a user {{ users }} on host {{ ansible_hostname }} <-- ansible fact variable
      user:
        name: "{{ users }}" <-- If value starts with variable, the whole line must have double quotes

An Ansible fact is a variable that is automatically set based on properties of the managed system. Facts are gathered by default and are used to discover information for use in conditionals. They are collected when Ansible executes on a remote system.

There are system facts and custom facts. System facts are system property values, and custom facts are user-defined variables stored on managed hosts.

If no variable is defined on the command line, the value set in the play is used. You can also define variables with the -e flag when running the playbook:

[ansible@control base]$ ansible-playbook variable-pb.yaml -e users=john

PLAY [create a user using a variable] ************************************************************************************************************************

TASK [Gathering Facts] ***************************************************************************************************************************************
ok: [ansible1]

TASK [create a user john on host ansible1] *******************************************************************************************************************
changed: [ansible1]

PLAY RECAP ***************************************************************************************************************************************************
ansible1                   : ok=2    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   

A magic variable is a system variable that is automatically set.

Notice the “Gathering Facts” task when you run a playbook. This is an implicit task run every time you run a playbook. It grabs facts from the managed hosts and stores them in the variable ansible_facts.

You can use the debug module to display variables like so:

---
- name: show facts
  hosts: all
  tasks:
  - name: show facts
    debug:
      var: ansible_facts <-- the var option does not require the variable to be enclosed in curly brackets

This outputs a gigantic list of facts from our managed nodes.
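
You can also pull facts ad hoc, without a playbook, by running the setup module directly and filtering the output (the filter pattern here is just an example): ansible ansible1 -m setup -a "filter=ansible_default_ipv4"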