Managing Processes and Tasks
- Managing Boot Process with Ansible
- Managing Services with Ansible

Managing Boot Process with Ansible
Managing the boot process with Ansible is a bit disappointing because Ansible offers no specific modules to do so. As a result, you must use generic modules instead, like the file module to manage the systemd boot targets or the lineinfile module to manage the GRUB configuration. What Ansible does offer, however, is the reboot module, which enables you to reboot a host and continue the playbook from the exact same location after the reboot. The next two sections describe how to do this.
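The GRUB case is not shown in this chapter's listings, so here is a minimal sketch of the lineinfile approach; the GRUB_TIMEOUT option, its value, and the grub2-mkconfig follow-up step are illustrative assumptions, not part of the chapter's listings.

::: pre_1
---
- name: manage a GRUB option with lineinfile (illustrative sketch)
  hosts: ansible2
  tasks:
  # assumption: GRUB_TIMEOUT is the option you want to manage
  - name: ensure GRUB_TIMEOUT is set to 10 seconds
    lineinfile:
      path: /etc/default/grub
      regexp: '^GRUB_TIMEOUT='
      line: 'GRUB_TIMEOUT=10'
  # on RHEL-family hosts the change must be compiled into grub.cfg
  - name: regenerate the GRUB configuration
    command: grub2-mkconfig -o /boot/grub2/grub.cfg
:::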
Managing the default target that a host should start in is a common Ansible task. However, the systemd module has no option to manage this setting, and no dedicated module for it exists either. For that reason, you must fall back to a generic module instead.
If you need to manage the default systemd target, a file with the name /etc/systemd/system/default.target has to exist as a symbolic link to the desired default target. See, for instance, Listing 14-5, where the output of the Linux ls -l command is used to show the current configuration.
Listing 14-5 Showing the Default Systemd Target
::: pre_1
[ansible@control rhce8-book]$ ls -l /etc/systemd/system/default.target
lrwxrwxrwx. 1 root root 37 Mar 23 05:33 /etc/systemd/system/default.target -> /lib/systemd/system/multi-user.target
:::
Because Ansible itself doesn’t have any module to specifically set the default.target, you must use a generic module. In theory, you could use either the command module or the file module, but because the file module is the more specific tool for creating symbolic links, it is the better choice. Listing 14-6 shows how to manage the boot target.
Listing 14-6 Managing the Default Boot Target
::: pre_1
---
- name: set default boot target
  hosts: ansible2
  tasks:
  - name: set boot target to graphical
    file:
      src: /usr/lib/systemd/system/graphical.target
      dest: /etc/systemd/system/default.target
      state: link
:::
In some cases, a managed host needs to be rebooted while a playbook is running. To do so, you can use the reboot module. This module takes several arguments that control how managed nodes are restarted. To verify the renewed availability of the managed hosts, you can specify the test_command argument. This argument specifies an arbitrary command that Ansible should run successfully on the managed hosts after the reboot; the success of this command indicates that the rebooted host is available again.
Equally useful when you are using the reboot module are the arguments that relate to timeouts. The reboot module uses no fewer than four of them (a combined example follows the list):
• connect_timeout: The maximum number of seconds to wait for a successful connection before trying again
• post_reboot_delay: The number of seconds to wait after the reboot command before trying to validate that the managed host is available again
• pre_reboot_delay: The number of seconds to wait before actually issuing the reboot
• reboot_timeout: The maximum number of seconds to wait for the rebooted machine to respond to the test command
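The following sketch combines all four arguments in one play; the values are arbitrary illustrations, not recommendations:

::: pre_1
---
- name: reboot with explicit timeouts (illustrative values)
  hosts: ansible2
  tasks:
  - name: reboot host and wait for it to return
    reboot:
      msg: reboot initiated by Ansible
      pre_reboot_delay: 10     # seconds to wait before issuing the reboot
      post_reboot_delay: 30    # seconds to wait before validating availability
      connect_timeout: 20      # maximum seconds per connection attempt
      reboot_timeout: 600      # maximum seconds to wait for the test command
      test_command: whoami
:::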
When the rebooted host is back, the current playbook continues with its remaining tasks. This scenario is shown in the example in Listing 14-7, where all managed hosts are rebooted first, and after the reboot has completed successfully, the message “successfully rebooted” is shown. Listing 14-8 shows the result of running this playbook. In Exercise 14-2 you can practice rebooting hosts using the reboot module.
Listing 14-7 Rebooting Managed Hosts
::: pre_1
---
- name: reboot all hosts
  hosts: all
  gather_facts: no
  tasks:
  - name: reboot hosts
    reboot:
      msg: reboot initiated by Ansible
      test_command: whoami
  - name: print message to show host is back
    debug:
      msg: successfully rebooted
:::
Listing 14-8 Verifying the Success of the reboot Module
::: pre_1
[ansible@control rhce8-book]$ ansible-playbook listing147.yaml
PLAY [reboot all hosts] *************************************************************************************************
TASK [reboot hosts] *****************************************************************************************************
changed: [ansible2]
changed: [ansible1]
changed: [ansible3]
changed: [ansible4]
changed: [ansible5]
TASK [print message to show host is back] *******************************************************************************
ok: [ansible1] => {
"msg": "successfully rebooted"
}
ok: [ansible2] => {
"msg": "successfully rebooted"
}
ok: [ansible3] => {
"msg": "successfully rebooted"
}
ok: [ansible4] => {
"msg": "successfully rebooted"
}
ok: [ansible5] => {
"msg": "successfully rebooted"
}
PLAY RECAP **************************************************************************************************************
ansible1 : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
ansible2 : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
ansible3 : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
ansible4 : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
ansible5 : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
:::
::: box Exercise 14-2 Managing Boot State
1. As preparation, so that the playbook actually changes the default boot target on the managed host, set the current target to graphical.target by using ansible ansible2 -m file -a "state=link src=/usr/lib/systemd/system/graphical.target dest=/etc/systemd/system/default.target".
2. Use your editor to create the file exercise142.yaml and write the following playbook header:
---
- name: set default boot target and reboot
  hosts: ansible2
  tasks:
3. Now you set the default boot target to multi-user.target. Add the following task to do so:
  - name: set default boot target
    file:
      src: /usr/lib/systemd/system/multi-user.target
      dest: /etc/systemd/system/default.target
      state: link
4. Complete the playbook to reboot the managed hosts by including the following tasks:
  - name: reboot hosts
    reboot:
      msg: reboot initiated by Ansible
      test_command: whoami
  - name: print message to show host is back
    debug:
      msg: successfully rebooted
5. Run the playbook by using ansible-playbook exercise142.yaml.
6. Test that the reboot was issued successfully by using ansible ansible2 -a "systemctl get-default". :::
Managing Services with Ansible

Services can be managed in many ways. You can manage systemd services, but Ansible also allows for management of tasks using Linux cron and at. Apart from that, you can use Ansible to manage the desired systemd target that a managed system should be started in, and it can reboot running machines. Table 14-2 gives an overview of the most significant modules for managing services.
Table 14-2 Modules Related to Service Management
Throughout this book you have used the service module a lot. This module enables you to manage services regardless of the init system in use, so it works with System-V init, Upstart, and systemd alike. In many cases, you can use the service module for any service-related task.
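As a reminder, a typical service task looks like the following minimal sketch; the httpd service is just an example:

::: pre_1
---
- name: manage a service with the generic service module
  hosts: ansible2
  tasks:
  # works the same on System-V init, Upstart, and systemd hosts
  - name: ensure httpd is started and enabled
    service:
      name: httpd
      state: started
      enabled: yes
:::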
If systemd specifics need to be addressed, you must use the systemd module instead of the service module. Such systemd-specific features include daemon_reload and mask. The daemon_reload feature forces the systemd daemon to reread its configuration files, which is useful after applying changes (or after editing the service files directly, without using the Linux systemctl command). The mask feature marks a systemd service in such a way that it cannot be started, not even by accident. Listing 14-1 shows an example where the systemd module is used to manage services.
Listing 14-1 Using systemd Module Features
::: pre_1
---
- name: using systemd module to manage services
  hosts: ansible2
  tasks:
  - name: enable service httpd and ensure it is not masked
    systemd:
      name: httpd
      enabled: yes
      masked: no
      daemon_reload: yes
:::
Given the large amount of functionality that is available in systemd, the functions that are offered by the systemd module are a bit limited, and for many specific features, you must use generic modules such as file and command instead. An example is setting the default target, which is done by creating a symbolic link using the file module.
The cron module can be used to manage cron jobs. A Linux cron job is one that is periodically executed by the Linux crond daemon at a specific time. The cron module can manage jobs in different ways:
• Write the job directly to a user’s crontab
• Write the job to /etc/crontab or under the /etc/cron.d directory
• Pass the job to anacron so that it will be run once an hour, day, week, month, or year without specifically defining when exactly (see the sketch after this list)
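The following sketch combines the last two approaches; the cron_file name and the backup script path are hypothetical examples, not values from this chapter:

::: pre_1
---
- name: schedule a system-wide cron job
  hosts: ansible2
  tasks:
  # hypothetical: the file name and script path are examples only
  - name: run a backup script once a day
    cron:
      name: "daily backup"
      special_time: daily         # run once a day without defining when exactly
      user: root                  # required when cron_file is used
      cron_file: ansible_backup   # written below /etc/cron.d
      job: "/usr/local/bin/backup.sh"
:::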
If you are familiar with Linux cron, using the Ansible cron module is straightforward. Listing 14-2 shows an example that runs the fstrim command every day at 4:05 and at 19:05.
Listing 14-2 Running a cron Job
::: pre_1
---
- name: run a cron job
  hosts: ansible2
  tasks:
  - name: run a periodic job
    cron:
      name: "run fstrim"
      minute: "5"
      hour: "4,19"
      job: "fstrim"
:::
As a result of this playbook, a crontab file is created for user root. To create a crontab file for another user, you can use the user attribute. Notice that when you manage cron jobs using the cron module, a name attribute is specified. This attribute is required for Ansible to manage the cron jobs and has no meaning for Linux crontab itself. If, for instance, you later want to remove a cron job, you must use the name of the job as its identifier.
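A minimal sketch of the user attribute follows; the user linda and the logger message are hypothetical examples:

::: pre_1
---
- name: manage a cron job for another user
  hosts: ansible2
  tasks:
  # hypothetical: manage the job in user linda's crontab instead of root's
  - name: write a morning message as linda
    cron:
      name: "morning logger"
      minute: "0"
      hour: "8"
      user: linda
      job: "logger GOOD MORNING"
:::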
Listing 14-3 shows a sample playbook that removes the job that was created in Listing 14-2. Notice that it just specifies state: absent as well as the name of the job that was previously created; no other parameters are required.
Listing 14-3 Removing a cron Job Using the name Attribute
::: pre_1
---
- name: run a cron job
  hosts: ansible2
  tasks:
  - name: run a periodic job
    cron:
      name: "run fstrim"
      state: absent
:::
Whereas you use Linux cron to schedule tasks at a regular interval, you use Linux at to manage tasks that need to run once only. To interface with Linux at, the Ansible at module is provided. Table 14-3 gives an overview of the arguments it takes to specify how the task should be executed.
Table 14-3 at Module Arguments Overview
The most important point to understand when working with at is that it is used to define how far from now a task has to be executed. This is done using count and units. If, for example, you want to run a task five minutes from now, you specify the job with the arguments count: 5 and units: minutes. Also notice the use of the unique argument. If it is set to yes, the task is ignored if a similar job is already scheduled to run. Listing 14-4 shows an example.
Listing 14-4 Running Commands in the Future with at
::: pre_1
---
- name: run an at task
  hosts: ansible2
  tasks:
  - name: run command and write output to file
    at:
      command: "date > /tmp/my-at-file"
      count: 5
      units: minutes
      unique: yes
      state: present
:::
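To remove a job that was scheduled this way, you can set state: absent and match on the same command. Here is a minimal sketch, assuming the job from Listing 14-4 was created earlier:

::: pre_1
---
- name: remove an at task
  hosts: ansible2
  tasks:
  # removes jobs whose command matches the one scheduled before
  - name: remove the scheduled command
    at:
      command: "date > /tmp/my-at-file"
      state: absent
:::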
In Exercise 14-1 you practice your skills working with the cron module.
::: box Exercise 14-1 Managing cron Jobs
1. Use your editor to create the playbook exercise141-1.yaml and give it the following contents:
---
- name: run a cron job
  hosts: ansible2
  tasks:
  - name: run a periodic job
    cron:
      name: "run logger"
      minute: "0"
      hour: "5"
      job: "logger IT IS 5 AM"
2. Use ansible-playbook exercise141-1.yaml to run the job.
3. Use the command ansible ansible2 -a "crontab -l" to verify the cron job has been added. The output should look as follows:
ansible2 | CHANGED | rc=0 >>
#Ansible: run logger
0 5 * * * logger IT IS 5 AM
4. Create a new playbook with the name exercise141-2.yaml that runs a new cron job but uses the same name:
---
- name: run a cron job
  hosts: ansible2
  tasks:
  - name: run a periodic job
    cron:
      name: "run logger"
      minute: "0"
      hour: "6"
      job: "logger IT IS 6 AM"
5. Run this new playbook by using ansible-playbook exercise141-2.yaml. Notice that the job runs with a changed status.
6. Repeat the command ansible ansible2 -a "crontab -l". This shows you that the new cron job has overwritten the old job because it was using the same name. Here is something important to remember: all cron jobs should have a unique name!
7. Write the playbook exercise141-3.yaml to remove the cron job that you just created:
---
- name: run a cron job
  hosts: ansible2
  tasks:
  - name: run logger
    cron:
      name: "run logger"
      state: absent
8. Use ansible-playbook exercise141-3.yaml to run the last playbook. Next, use ansible ansible2 -a "crontab -l" to verify that the cron job was indeed removed. :::