Ansible – Task Control Lab

Goals
  • Implement Ansible conditionals using the when statement
  • Use Ansible with_items loops in conjunction with conditionals
  • Define handlers in playbooks and notify them for configuration changes
  • Tag Ansible tasks
  • Filter tasks based on tags when running playbooks
  • Handle errors in playbooks

1. Construct Flow Control

In this exercise, you construct conditionals and loops in Ansible playbooks.

1.1. Set Up Environment

  1. Log in to workstation as student and run the lab setup script:
    [student@workstation ~]$ lab task-control-flowcontrol setup
    • The script creates the dev-flowcontrol working directory and populates it with an Ansible configuration file and host inventory.
  2. Change to the /home/student/dev-flowcontrol directory:
    [student@workstation ~]$ cd dev-flowcontrol
    [student@workstation dev-flowcontrol]$

1.2. Create Task File

  1. Create a task file named configure_database.yml.
  2. Add tasks to install the extra packages, update /etc/my.cnf from a copy stored on a website, and start mariadb on the managed hosts.
    1. Make sure the include file uses the variables you defined in the playbook.yml file and inventory.
    2. Make sure the get_url module sets force=yes so that the my.cnf file is updated even if it already exists on the managed host. Also ensure that it sets the correct permissions and SELinux contexts on the /etc/my.cnf file.
        - yum:
            name: "{{ extra_packages }}"
      
        - get_url:
            url: "http://materials.example.com/task_control/my.cnf"
            dest: "{{ configure_database_path }}"
            owner: mysql
            group: mysql
            mode: 0644
            seuser: system_u
            setype: mysqld_etc_t
            force: yes
      
        - service:
            name: "{{ db_service }}"
            state: started
            enabled: true
  3. When you are finished, save the file and exit the editor.
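The effect of force=yes can be pictured with a small Python sketch (illustrative only, not the get_url implementation; fetch_config is a hypothetical helper): without force, an existing destination file is left alone; with force, it is always rewritten.

```python
import os
import tempfile

def fetch_config(content, dest, force=False):
    """Mimic get_url's force flag: rewrite dest only when forced
    or when the file does not exist yet (hypothetical helper)."""
    if force or not os.path.exists(dest):
        with open(dest, "w") as f:
            f.write(content)
        return True   # reported as "changed"
    return False      # reported as "ok", file left untouched

tmp = tempfile.mkdtemp()
dest = os.path.join(tmp, "my.cnf")
print(fetch_config("v1", dest))               # True: file created
print(fetch_config("v2", dest))               # False: exists, force off
print(fetch_config("v2", dest, force=True))   # True: overwritten
```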

1.3. Create Playbook

  1. In the same directory as the task file, create the playbook.yml playbook:
    1. Define a list variable, db_users, that consists of a list of two users, db_admin and db_user.
    2. Add a configure_database_path variable set to the /etc/my.cnf file.
    3. Create a task that uses a loop to create the users only if the managed host belongs to the databases host group.
      ---
      - hosts: all
        vars:
          db_package: mariadb-server
          db_service: mariadb
          db_users:
            - db_admin
            - db_user
          configure_database_path: /etc/my.cnf
      
        tasks:
        - name: Create the MariaDB users
          user:
            name: "{{ item }}"
          with_items: "{{ db_users }}"
          when: inventory_hostname in groups['databases']
  2. Add a task that uses the db_package variable to install the database software only if the variable has been defined:
      - name: Install the database server
        yum:
          name: "{{ db_package }}"
        when: db_package is defined
  3. Create a task to do basic database configuration:
    1. Ensure that the task runs only when configure_database_path is defined.
    2. Ensure that the task includes the configure_database.yml task file and defines a local array, extra_packages, which is used to specify additional packages needed for this configuration.
    3. Set that variable to include three packages: mariadb-bench, mariadb-libs, and mariadb-test.
        - name: Configure the database software
          include: configure_database.yml
          vars:
            extra_packages:
              - mariadb-bench
              - mariadb-libs
              - mariadb-test
          when: configure_database_path is defined
  4. When you are done, save the playbook and exit the editor.
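The loop-plus-conditional pattern in the first task can be sketched in plain Python (an illustrative model, not Ansible internals): the when clause is evaluated for each item the loop produces, so on hosts outside the databases group no users are created at all.

```python
db_users = ["db_admin", "db_user"]
groups = {"databases": ["servera.lab.example.com"]}

def run_user_task(inventory_hostname):
    """Sketch of a with_items loop guarded by a when condition:
    the condition is checked for each item before it is applied."""
    created = []
    for item in db_users:
        if inventory_hostname in groups["databases"]:  # the when clause
            created.append(item)                       # user: name={{ item }}
    return created

print(run_user_task("servera.lab.example.com"))
# ['db_admin', 'db_user']
print(run_user_task("workstation.lab.example.com"))
# [] -- every item is skipped on hosts outside the databases group
```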

1.4. Run Playbook

  1. Check the final playbook.yml file before running it:
    ---
    - hosts: all
      vars:
        db_package: mariadb-server
        db_service: mariadb
        db_users:
          - db_admin
          - db_user
        configure_database_path: /etc/my.cnf
    
      tasks:
      - name: Create the MariaDB users
        user:
          name: "{{ item }}"
        with_items: "{{ db_users }}"
        when: inventory_hostname in groups['databases']
    
      - name: Install the database server
        yum:
          name: "{{ db_package }}"
        when: db_package is defined
    
      - name: Configure the database software
        include: configure_database.yml
        vars:
          extra_packages:
            - mariadb-bench
            - mariadb-libs
            - mariadb-test
        when: configure_database_path is defined
  2. Run the playbook to install and configure the database on the managed hosts:
    [student@workstation dev-flowcontrol]$ ansible-playbook playbook.yml
    PLAY ************************************************************************
    
    TASK [setup] ****************************************************************
    ok: [servera.lab.example.com]
    
    ... Output omitted ...
    
    TASK [Includes the configuration] *******************************************
    included: /home/student/dev-flowcontrol/configure_database.yml
              for servera.lab.example.com
    • The output confirms that the task file was successfully included and executed.

1.5. Verify Results

In this section, you manually verify that the necessary packages were installed on servera, that the /etc/my.cnf file is in place with the correct permissions, and that the two users were created.

  1. Use an ad hoc command from workstation to servera to confirm that the packages were installed:
    [student@workstation dev-flowcontrol]$ ansible all -a 'yum list installed mariadb-bench mariadb-libs mariadb-test'
    servera.lab.example.com | SUCCESS | rc=0 >>
    Loaded plugins: langpacks, search-disabled-repos
    Installed Packages
    mariadb-bench.x86_64                  1:5.5.44-2.el7                   @rhel_dvd
    mariadb-libs.x86_64                   1:5.5.44-2.el7                   installed
    mariadb-test.x86_64                   1:5.5.44-2.el7                   @rhel_dvd
  2. Confirm that the my.cnf file was successfully copied under /etc/:
    [student@workstation dev-flowcontrol]$ ansible all -a 'grep Ansible /etc/my.cnf'
    servera.lab.example.com | SUCCESS | rc=0 >>
    # Ansible file
  3. Confirm that the two users were created:
    [student@workstation dev-flowcontrol]$ ansible all -a 'id db_user'
    servera.lab.example.com | SUCCESS | rc=0 >>
    uid=1003(db_user) gid=1003(db_user) groups=1003(db_user)
    [student@workstation dev-flowcontrol]$ ansible all -a 'id db_admin'
    servera.lab.example.com | SUCCESS | rc=0 >>
    uid=1002(db_admin) gid=1002(db_admin) groups=1002(db_admin)

1.6. Evaluate Your Progress

  1. Grade your work:
    [student@workstation ~]$ lab task-control-flowcontrol grade
  2. Correct any reported failures.
  3. Rerun the script until successful.

1.7. Clean Up

  1. Clean up the lab environment:
    [student@workstation ~]$ lab task-control-flowcontrol cleanup

2. Implement Handlers

In this exercise, you implement handlers in playbooks.

2.1. Set Up Environment

  1. Log in to workstation as student and run the lab setup script:
    [student@workstation ~]$ lab task-control-handlers setup
    • The script creates the dev-handlers project directory as well as the Ansible configuration file and the host inventory file.
  2. From workstation.lab.example.com, open a new terminal and change to the dev-handlers project directory:
    [student@workstation ~]$ cd dev-handlers
    [student@workstation dev-handlers]$

2.2. Create configure_db.yml Playbook

In this section, you use a text editor to create the configure_db.yml playbook file. This file installs a database server and creates users. When the database server is installed, the playbook restarts the service.

  1. Start the playbook with the initialization of the following variables:
    • db_packages, which defines the name of the packages to install for the database service
    • db_service, which defines the name of the database service
    • src_file for the URL of the configuration file to install
    • dst_file for the location of the installed configuration file on the managed hosts
      ---
      - hosts: databases
        vars:
          db_packages:
            - mariadb-server
            - MySQL-python
          db_service: mariadb
          src_file: "http://materials.example.com/task_control/my.cnf.template"
          dst_file: /etc/my.cnf
  2. Define a task that uses the yum module to install the required database packages as defined by the db_packages variable and notify the start_service handler:
      tasks:
        - name: Install {{ db_packages }} package
          yum:
            name: "{{ item }}"
            state: latest
          with_items: "{{ db_packages }}"
          notify:
            - start_service
  3. Add a task that uses the get_url module to download my.cnf.template to /etc/my.cnf on the managed host, and notify the restart_service and set_password handlers:
        - name: Download and install {{ dst_file }}
          get_url:
            url: "{{ src_file }}"
            dest: "{{ dst_file }}"
            owner: mysql
            group: mysql
            force: yes
          notify:
            - restart_service
            - set_password
  4. Define the start_service handler:
      handlers:
        - name: start_service
          service:
            name: "{{ db_service }}"
            state: started
    • This handler starts the mariadb service.
  5. Define the restart_service handler:
        - name: restart_service
          service:
            name: "{{ db_service }}"
            state: restarted
    • This handler restarts the mariadb service.
  6. Define the set_password handler:
        - name: set_password
          mysql_user:
            name: root
            password: redhat
    • This handler sets the administrative password using the mysql_user module to perform the command.
  7. Confirm that the completed playbook looks like this:
    ---
    - hosts: databases
      vars:
        db_packages:
          - mariadb-server
          - MySQL-python
        db_service: mariadb
        src_file: "http://materials.example.com/task_control/my.cnf.template"
        dst_file: /etc/my.cnf
    
      tasks:
        - name: Install {{ db_packages }} package
          yum:
            name: "{{ item }}"
            state: latest
          with_items: "{{ db_packages }}"
          notify:
            - start_service
        - name: Download and install {{ dst_file }}
          get_url:
            url: "{{ src_file }}"
            dest: "{{ dst_file }}"
            owner: mysql
            group: mysql
            force: yes
          notify:
            - restart_service
            - set_password
    
      handlers:
        - name: start_service
          service:
            name: "{{ db_service }}"
            state: started
    
        - name: restart_service
          service:
            name: "{{ db_service }}"
            state: restarted
    
        - name: set_password
          mysql_user:
            name: root
            password: redhat

2.3. Run Playbook

  1. Run the configure_db.yml playbook and watch the output to see the handlers being executed:
    [student@workstation dev-handlers]$ ansible-playbook configure_db.yml
    
    PLAY ************************************************************************
    
    ... Output omitted ...
    
    RUNNING HANDLER [start_service] ************************************************
    changed: [servera.lab.example.com]
    
    RUNNING HANDLER [restart_service] **********************************************
    changed: [servera.lab.example.com]
    
    RUNNING HANDLER [set_password] *************************************************
    changed: [servera.lab.example.com]
  2. Run the playbook again and note that the handlers are skipped:
    [student@workstation dev-handlers]$ ansible-playbook configure_db.yml
    
    PLAY ***********************************************************************
    
    ... Output omitted ...
    
    PLAY RECAP *****************************************************************
    servera.lab.example.com    : ok=3    changed=0    unreachable=0    failed=0
  3. Update the playbook to add a task after installing /etc/my.cnf that sets the MySQL admin password, duplicating what the set_password handler does:
        - name: Set the MySQL password
          mysql_user:
            name: root
            password: redhat
  4. Run the playbook again:
    [student@workstation dev-handlers]$ ansible-playbook configure_db.yml
    
    TASK [Set the MySQL password] **************************************************
    fatal: [servera.lab.example.com]: FAILED! => {"changed": false, "failed": true, "msg": "unable to connect to database, check login_user and login_password are correct or /root/.my.cnf has the credentials. Exception message: (1045, \"Access denied for user 'root'@'localhost' (using password: NO)\")"}
    
    NO MORE HOSTS LEFT *************************************************************
            to retry, use: --limit @configure_db.retry
    
    PLAY RECAP *********************************************************************
    servera.lab.example.com    : ok=3    changed=0    unreachable=0    failed=1
    • The task fails because the MySQL password has already been set.
    • This shows you why using a handler in this situation is better than a simple task.
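The difference between a handler and a plain task can be sketched in Python (a simplified model of handler semantics, not Ansible internals): notified handlers are collected, deduplicated, and run once at the end of the play, and only when a notifying task actually reported a change.

```python
def run_play(tasks):
    """Sketch of handler semantics: a handler runs once, at the end
    of the play, and only if at least one notifying task changed.
    Each task is (name, changed, [handlers to notify])."""
    notified = []
    for name, changed, notify in tasks:
        if changed:
            for handler in notify:
                if handler not in notified:  # duplicate notifications collapse
                    notified.append(handler)
    return notified

# First run: everything changes, so all three handlers fire once.
first = run_play([
    ("install packages", True, ["start_service"]),
    ("install my.cnf", True, ["restart_service", "set_password"]),
])
# Second run: nothing changes, so no handler fires -- unlike a plain
# task, set_password is not re-run against an already-set password.
second = run_play([
    ("install packages", False, ["start_service"]),
    ("install my.cnf", False, ["restart_service", "set_password"]),
])
print(first)   # ['start_service', 'restart_service', 'set_password']
print(second)  # []
```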

2.4. Evaluate Your Progress

  1. Grade your work:
    [student@workstation ~]$ lab task-control-handlers grade
  2. Correct any reported failures.
  3. Rerun the script until successful.

2.5. Clean Up

  1. Clean up the lab environment:
    [student@workstation ~]$ lab task-control-handlers cleanup

3. Implement Tags

In this exercise, you implement tags in a playbook and run the playbook.

3.1. Set Up Environment

  1. Log in to workstation as student and run the lab setup script:
    [student@workstation ~]$ lab task-control-tags setup
    • The script creates the dev-tags working directory and populates it with an Ansible configuration file and host inventory.
  2. Change to the dev-tags project directory:
    [student@workstation ~]$ cd dev-tags
    [student@workstation dev-tags]$

3.2. Create Task File

  1. Create the configure_mail.yml task file.
    • The task file contains instructions to install the required packages and retrieve the configuration files for the mail server.
  2. Create a task that uses the yum module to install the postfix package:
    1. Notify the start_postfix handler.
    2. Tag the task as server using the tags keyword.
      ---
      - name: Install postfix
        yum:
          name: postfix
          state: latest
        tags:
          - server
        notify:
          - start_postfix
  3. Add a task that installs the dovecot package using the yum module:
    1. Notify the start_dovecot handler.
    2. Tag the task as client.
      - name: Install dovecot
        yum:
          name: dovecot
          state: latest
        tags:
          - client
        notify:
          - start_dovecot
  4. Add a task that uses the get_url module to retrieve the Postfix configuration file:
    1. Notify the restart_postfix handler.
    2. Tag the task as server.
      - name: Download main.cf configuration
        get_url:
          url: http://materials.example.com/task_control/main.cf
          dest: /etc/postfix/main.cf
        tags:
          - server
        notify:
          - restart_postfix
  5. Confirm that the completed task file looks like this:
    ---
    - name: Install postfix
      yum:
        name: postfix
        state: latest
      tags:
        - server
      notify:
        - start_postfix
    
    - name: Install dovecot
      yum:
        name: dovecot
        state: latest
      tags:
        - client
      notify:
        - start_dovecot
    
    - name: Download main.cf configuration
      get_url:
        url: http://materials.example.com/task_control/main.cf
        dest: /etc/postfix/main.cf
      tags:
        - server
      notify:
        - restart_postfix

3.3. Create Playbook

  1. Create a playbook file named playbook.yml and define it for all hosts:
    ---
    - hosts: all
  2. Define a task that uses the include module to include the configure_mail.yml task file and add a condition to run the task only for the hosts in the mailservers group:
      tasks:
       - name: Include configure_mail.yml
         include:
           configure_mail.yml
         when: inventory_hostname in groups['mailservers']
  3. Define the start_postfix handler to start the mail server:
      handlers:
        - name: start_postfix
          service:
            name: postfix
            state: started
  4. Define the start_dovecot handler to start the mail client:
        - name: start_dovecot
          service:
            name: dovecot
            state: started
  5. Define the restart_postfix handler to restart the mail server:
        - name: restart_postfix
          service:
            name: postfix
            state: restarted
  6. Confirm that the completed playbook looks like this:
    ---
    - hosts: all
    
      tasks:
       - name: Include configure_mail.yml
         include:
           configure_mail.yml
         when: inventory_hostname in groups['mailservers']
    
      handlers:
        - name: start_postfix
          service:
            name: postfix
            state: started
    
        - name: start_dovecot
          service:
            name: dovecot
            state: started
    
        - name: restart_postfix
          service:
            name: postfix
            state: restarted

3.4. Run Playbook

  1. Run the playbook using the --tags option to apply only the tasks tagged as server:
    [student@workstation dev-tags]$ ansible-playbook playbook.yml --tags 'server'
    ... Output omitted ...
    RUNNING HANDLER [start_postfix] ************************************************
    changed: [servera.lab.example.com]
    ... Output omitted ...
    • Note that the start_postfix handler is the only one triggered.
  2. Run an ad hoc command to make sure that the postfix package was successfully installed:
    [student@workstation dev-tags]$ ansible mailservers -a 'yum list installed postfix'
    servera.lab.example.com | SUCCESS | rc=0 >>
    Loaded plugins: langpacks, search-disabled-repos
    Installed Packages
    postfix.x86_64                     2:2.10.1-6.el7                      @rhel_dvd
  3. Run the playbook again, but this time skip the tasks tagged with the server tag:
    [student@workstation dev-tags]$ ansible-playbook playbook.yml --skip-tags 'server'
    ... Output omitted ...
    TASK [Install dovecot] *********************************************************
    changed: [servera.lab.example.com]
    
    RUNNING HANDLER [start_dovecot] ************************************************
    changed: [servera.lab.example.com]
    ... Output omitted ...
    • The play installs the dovecot package because the task is tagged with the client tag, and it triggers the start_dovecot handler.
  4. Run an ad hoc command to ensure that the dovecot package was successfully installed:
    [student@workstation dev-tags]$ ansible mailservers -a 'yum list installed dovecot'
    servera.lab.example.com | SUCCESS | rc=0 >>
    Loaded plugins: langpacks, search-disabled-repos
    Installed Packages
    dovecot.x86_64                     1:2.2.10-5.el7                      @rhel_dvd
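The two playbook runs above can be modeled with a short Python sketch of how --tags and --skip-tags select tasks (an illustrative model of the filtering, not Ansible internals):

```python
def select_tasks(tasks, tags=None, skip_tags=None):
    """Sketch of --tags / --skip-tags: keep a task when it carries a
    requested tag, drop it when it carries a skipped one.
    Each task is (name, [tags])."""
    selected = []
    for name, task_tags in tasks:
        if tags and not (set(task_tags) & set(tags)):
            continue  # --tags given, and this task has none of them
        if skip_tags and (set(task_tags) & set(skip_tags)):
            continue  # task carries a skipped tag
        selected.append(name)
    return selected

mail_tasks = [
    ("Install postfix", ["server"]),
    ("Install dovecot", ["client"]),
    ("Download main.cf configuration", ["server"]),
]
print(select_tasks(mail_tasks, tags=["server"]))
# ['Install postfix', 'Download main.cf configuration']
print(select_tasks(mail_tasks, skip_tags=["server"]))
# ['Install dovecot']
```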

3.5. Evaluate Your Progress

  1. Grade your work:
    [student@workstation ~]$ lab task-control-tags grade
  2. Correct any reported failures.
  3. Rerun the script until successful.

3.6. Clean Up

  1. Clean up the lab environment:
    [student@workstation ~]$ lab task-control-tags cleanup

4. Handle Errors

In this exercise, you learn how to handle errors in Ansible playbooks using various features.

4.1. Set Up Environment

  1. Log in to workstation as student and run the lab setup script:
    [student@workstation ~]$ lab task-control-failures setup
    • The script creates the dev-failures working directory.
  2. Change to the dev-failures project directory.
    [student@workstation ~]$ cd dev-failures
    [student@workstation dev-failures]$

4.2. Ignore Failed Commands

In this section, you learn how to ignore failed commands during the execution of playbooks.

The lab script created an Ansible configuration file and an inventory file that contains the servera.lab.example.com server in the databases group.

  1. Review the Ansible configuration file.
  2. Create the playbook.yml playbook:
    1. Initialize the three variables that are used to install the required packages and start the server:
      • web_package with a value of http
      • db_package with a value of mariadb-server
      • db_service with a value of mariadb
        ---
        - hosts: databases
          vars:
            web_package: http
            db_package: mariadb-server
            db_service: mariadb
      • The http value is an intentional error in the package name (the correct package is httpd).
  3. Define two tasks that install the required packages:
      tasks:
        - name: Install {{ web_package }} package
          yum:
            name: "{{ web_package }}"
            state: latest
    
        - name: Install {{ db_package }} package
          yum:
            name: "{{ db_package }}"
            state: latest
  4. Run the playbook and watch the output of the play:
    [student@workstation dev-failures]$ ansible-playbook playbook.yml
    ... Output omitted ...
    TASK [Install http package] ****************************************************
    fatal: [servera.lab.example.com]: FAILED! => {"changed": false, "failed": true,
     "msg": "No Package matching 'http' found available, installed or updated",
     "rc": 0, "results": []}
    ... Output omitted ...
    • The first task failed because no package named http exists. Because the first task failed, the second task was skipped.
  5. Update the first task to ignore any errors by adding the ignore_errors keyword:
      tasks:
        - name: Install {{ web_package }} package
          yum:
            name: "{{ web_package }}"
            state: latest
          ignore_errors: yes
    
        - name: Install {{ db_package }} package
          yum:
            name: "{{ db_package }}"
            state: latest
  6. Run the playbook again and watch the output of the play:
    [student@workstation dev-failures]$ ansible-playbook playbook.yml
    ... Output omitted ...
    TASK [Install http package] ****************************************************
    fatal: [servera.lab.example.com]: FAILED! => {"changed": false, "failed": true,
     "msg": "No Package matching 'http' found available, installed or updated",
     "rc": 0, "results": []}
    ...ignoring
    
    TASK [Install mariadb-server package] ******************************************
    ok: [servera.lab.example.com]
    ... Output omitted ...
    • Even though the first task failed, Ansible executed the second task.
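The effect of ignore_errors can be pictured with a small Python sketch (an illustrative model, not the Ansible executor): a failed task normally removes the host from the rest of the play, unless that task's failure is explicitly ignored.

```python
def run_tasks(tasks, ignore_errors=()):
    """Sketch of ignore_errors: a failed task normally stops the play
    for that host, unless the failure is explicitly ignored.
    Each task is (name, succeeded)."""
    executed = []
    for name, ok in tasks:
        executed.append(name)
        if not ok and name not in ignore_errors:
            break  # host drops out of the remainder of the play
    return executed

tasks = [("Install http package", False),
         ("Install mariadb-server package", True)]
print(run_tasks(tasks))
# ['Install http package'] -- the second task never runs
print(run_tasks(tasks, ignore_errors={"Install http package"}))
# ['Install http package', 'Install mariadb-server package']
```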

4.3. Override Task Failures

In this section, you continue working with the playbook you created in the previous section. You insert a new task at the beginning of the playbook that executes a remote command and captures the output. The output of the command is used by the task that installs the mariadb-server package to override what Ansible considers to be a failure.

  1. Insert a task at the beginning of the playbook that executes a remote command and saves the output in the command_result variable and include the ignore_errors keyword so that the play continues even if the task fails:
        - name: Check {{ web_package }} installation status
          command: yum list installed "{{ web_package }}"
          register: command_result
          ignore_errors: yes
  2. Run the playbook and confirm that the first two tasks fail but are ignored:
    [student@workstation dev-failures]$ ansible-playbook playbook.yml
    ... Output omitted ...
    TASK [Check http installation status] ******************************************
    fatal: [servera.lab.example.com]: FAILED! => {"changed": true, "cmd":
     ["yum", "list", "installed", "http"], "delta": "0:00:00.269811", "end":
     "2016-05-18 07:10:18.446872", "failed": true, "rc": 1, "start": "2016-05-18
     07:10:18.177061", "stderr": "Error: No matching Packages to list", "stdout":
     "Loaded plugins: langpacks, search-disabled-repos", "stdout_lines": ["Loaded
     plugins: langpacks, search-disabled-repos"], "warnings": ["Consider using yum
     module rather than running yum"]}
    ...ignoring
    
    TASK [Install http package] ****************************************************
    fatal: [servera.lab.example.com]: FAILED! => {"changed": false, "failed": true,
     "msg": "No Package matching 'http' found available, installed or updated",
     "rc": 0, "results": []}
    ...ignoring
    ... Output omitted ...
  3. Add a condition to the task that installs the mariadb-server package so that the task runs only if the keyword Error is present in the output registered in the command_result variable:
        - name: Install {{ db_package }} package
          yum:
            name: "{{ db_package }}"
            state: latest
          when: "'Error' in command_result.stdout"
  4. Run an ad hoc command to remove the mariadb-server package from the databases managed host:
    [student@workstation dev-failures]$ ansible databases -a 'yum -y remove mariadb-server'
    servera.lab.example.com | SUCCESS | rc=0 >>
    ...  Output omitted ...
    Removed:
      mariadb-server.x86_64 1:5.5.44-2.el7
    ... Output omitted ...
  5. Run the playbook and note that the last task is skipped:
    [student@workstation dev-failures]$ ansible-playbook playbook.yml
    ... Output omitted ...
    TASK [Install mariadb-server package] ******************************************
    skipping: [servera.lab.example.com]
    ... Output omitted ...
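The skip makes sense once you look at where yum writes its message. A minimal Python sketch, using the registered fields from the output above: the when clause inspects stdout, but the Error string is on stderr, so the condition evaluates to false and the install task is skipped.

```python
# Registered result of 'yum list installed http' (fields taken from
# the sample output above, abbreviated for illustration):
command_result = {
    "rc": 1,
    "stdout": "Loaded plugins: langpacks, search-disabled-repos",
    "stderr": "Error: No matching Packages to list",
}

# The when clause only inspects stdout, and yum writes its error to
# stderr, so the condition is false and the install task is skipped.
run_install = "Error" in command_result["stdout"]
print(run_install)  # False
```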

4.4. Override changed State for Tasks

In this section, you update the task that installs the mariadb-server package by overriding the condition that triggers the changed state, using the return code saved in the command_result.rc variable.

  1. Update the last task by commenting out the when condition and adding a changed_when condition in order to override the changed state for the task:
        - name: Install {{ db_package }} package
          yum:
            name: "{{ db_package }}"
            state: latest
          # when: "'Error' in command_result.stdout"
          changed_when: "command_result.rc == 1"
    • The condition uses the return code contained in the registered variable.
  2. Execute the playbook twice:
    The first execution reinstalls the mariadb-server package. Its output is not shown here.
    ... Output omitted ...
    TASK [Install mariadb-server package] ******************************************
    changed: [servera.lab.example.com]
    
    PLAY RECAP *********************************************************************
    servera.lab.example.com    : ok=4    changed=1    unreachable=0    failed=0
    • This output is the result of the second execution, which shows the task as changed despite the fact that the mariadb-server package was already installed.
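What changed_when does can be sketched in a few lines of Python (an illustrative model; report_state is a hypothetical helper, not an Ansible API): the module's own changed flag is discarded and replaced by the result of the expression you supply.

```python
def report_state(task_changed, command_result, changed_when=None):
    """Sketch of changed_when: the module's own changed flag is
    replaced by the result of the supplied expression."""
    if changed_when is not None:
        return changed_when(command_result)
    return task_changed

command_result = {"rc": 1}  # 'yum list installed http' failed earlier

# yum reports ok (mariadb-server already installed), but the
# changed_when expression overrides that and reports changed:
print(report_state(False, command_result,
                   changed_when=lambda r: r["rc"] == 1))  # True
```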

4.5. Implement Blocks/Rescue/Always in Playbooks

  1. Update the playbook by nesting the first two tasks in a block clause and remove the lines that use the ignore_errors conditional:
        - block:
          - name: Check {{ web_package }} installation status
            command: yum list installed "{{ web_package }}"
            register: command_result
    
          - name: Install {{ web_package }} package
            yum:
              name: "{{ web_package }}"
              state: latest
  2. Nest the task that installs the mariadb-server package in a rescue clause and remove the conditional that overrides the changed result:
          rescue:
            - name: Install {{ db_package }} package
              yum:
                name: "{{ db_package }}"
                state: latest
    • This causes the task to execute even if the previous tasks fail.
  3. Finally, add an always clause that starts the database server upon installation using the service module:
          always:
            - name: Start {{ db_service }} service
              service:
                name: "{{ db_service }}"
                state: started
  4. Confirm that the tasks section looks like this:
      tasks:
        - block:
          - name: Check {{ web_package }} installation status
            command: yum list installed "{{ web_package }}"
            register: command_result
    
          - name: Install {{ web_package }} package
            yum:
              name: "{{ web_package }}"
              state: latest
    
          rescue:
            - name: Install {{ db_package }} package
              yum:
                name: "{{ db_package }}"
                state: latest
    
          always:
            - name: Start {{ db_service }} service
              service:
                name: "{{ db_service }}"
                state: started
  5. Remove the mariadb-server package from the databases managed host:
    [student@workstation dev-failures]$ ansible databases -a 'yum -y remove mariadb-server'
  6. Run the playbook, and watch as Ansible installs the mariadb-server package and starts the mariadb service even though the first two tasks failed:
    [student@workstation dev-failures]$ ansible-playbook playbook.yml
    ... Output omitted ...
    TASK [Install mariadb-server package] ******************************************
    changed: [servera.lab.example.com]
    
    TASK [Start mariadb service ] **************************************************
    changed: [servera.lab.example.com]
    ... Output omitted ...
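The block/rescue/always structure maps closely onto try/except/finally in ordinary programming languages. A minimal Python sketch of this play's flow (yum_install is a hypothetical stand-in for the yum module):

```python
def yum_install(package):
    """Hypothetical stand-in for the yum module."""
    if package == "http":
        raise RuntimeError("No Package matching 'http' found")
    return "installed " + package

log = []
# block / rescue / always behave like try / except / finally:
try:
    log.append(yum_install("http"))            # block: this task fails
except RuntimeError:
    log.append(yum_install("mariadb-server"))  # rescue: runs on failure
finally:
    log.append("started mariadb")              # always: runs either way

print(log)  # ['installed mariadb-server', 'started mariadb']
```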

4.6. Evaluate Your Progress

  1. Grade your work:
    [student@workstation ~]$ lab task-control-failures grade
  2. Correct any reported failures.
  3. Rerun the script until successful.

4.7. Clean Up

  1. Clean up the lab environment:
    [student@workstation ~]$ lab task-control-failures cleanup

5. Implement Task Control

In this exercise, you install the Apache web server and secure it using mod_ssl. You use various Ansible conditionals to deploy the environment.

5.1. Set Up Environment

  1. Log in to workstation as student and run the lab setup script:
    [student@workstation ~]$ lab task-control setup
    • This setup script ensures that the serverb managed host is reachable on the network and that the correct Ansible configuration file and inventory are installed on the control node.
  2. From workstation.lab.example.com, change to the lab-task-control project directory:
    [student@workstation ~]$ cd lab-task-control
    [student@workstation lab-task-control]$

5.2. Define Web Server Tasks

In this section, you create the install_packages.yml task file and define a task that installs the latest versions of the httpd and mod_ssl packages. To install the two packages, you use the web_package and ssl_package variables. (You define the variables later when you create the main playbook.) You use a loop for installing the packages and set a condition that the packages are installed only if (1) the server belongs to the webservers group and (2) the available memory on the system is greater than the amount of memory defined by the memory variable. This variable is set upon import of the task and uses an Ansible fact to determine the available memory on the managed host. Finally, you add a task that starts the service defined by the web_service variable.

  1. In the top-level directory for this exercise, create the install_packages.yml task file and the first task:
    ---
    - name: Installs the required packages
      yum:
        name: "{{ item }}"
      with_items:
        - "{{ web_package }}"
        - "{{ ssl_package }}"
  2. Add a when clause to install the packages:
    1. Ensure that installation occurs only if the managed host is in the webservers group and if the amount of memory on the managed host is greater than the amount the memory variable defines.
    2. Use the ansible_memory_mb.real.total fact for the amount of system memory.
    • Conditions listed under when are raw Jinja2 expressions, so variables are referenced without {{ }} braces.
        when:
          - inventory_hostname in groups["webservers"]
          - ansible_memory_mb.real.total > memory
  3. Add the task that starts the service defined by the web_service variable:
    - name: Starts the service
      service:
        name: "{{ web_service }}"
        state: started
  4. Confirm that the completed file looks like this:
    ---
    - name: Installs the required packages
      yum:
        name: "{{ item }}"
      with_items:
        - "{{ web_package }}"
        - "{{ ssl_package }}"
      when:
        - inventory_hostname in groups["webservers"]
        - ansible_memory_mb.real.total > memory
    
    - name: Starts the service
      service:
        name: "{{ web_service }}"
        state: started
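
    • The with_items keyword shown above is the classic loop syntax used throughout this course. On Ansible 2.5 and later, the same task can be written with the generic loop keyword; a minimal equivalent sketch, not required for this lab:

```yaml
- name: Installs the required packages
  yum:
    name: "{{ item }}"
  loop:
    - "{{ web_package }}"
    - "{{ ssl_package }}"
```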

5.3. Define Tasks to Configure Web Server

In this section, you create the configure_web.yml task file and add a task to it that checks whether or not the httpd package is installed. The output is registered in a variable. You then update the condition to consider the task as failed based on the return code of the command (the return code is 1 when the package is not installed).

Next, you create a block that executes only if the httpd package is installed (using the return code that was captured in the first task). The block needs to start with a task that retrieves the file that the https_uri variable defines (the variable is set in the main playbook) and copies it to serverb.lab.example.com in the /etc/httpd/conf.d/ directory.

Finally, you define the following tasks:

  • A task that creates the ssl directory, which stores SSL certificates, under /etc/httpd/conf.d/ on the managed host with a mode of 0755.
  • A task that creates the logs directory, which stores SSL logs, under /var/www/html/ on the managed host with a mode of 0755.
  • A task that uses the stat module to ensure that the /etc/httpd/conf.d/ssl.conf file exists, and captures the output in a variable.
  • A task that renames the /etc/httpd/conf.d/ssl.conf file as /etc/httpd/conf.d/ssl.conf.bak only if the file exists (based on the captured output from the previous task).
  • A task that retrieves the SSL certificates file that the ssl_uri variable defines (the variable is set in the main playbook), extracts it under the /etc/httpd/conf.d/ssl/ directory, and notifies the restart_services handler.
  • A task that creates the index.html file under the /var/www/html/ directory that uses Ansible facts and has the following content:
    serverb.lab.example.com (172.25.250.11) has been customized by Ansible

Follow these steps to create and populate the configure_web.yml task file:

  1. In the top-level directory of this exercise, create the configure_web.yml task file, beginning with a task that uses the shell module to determine whether or not the httpd package is installed:
    ---
    - shell:
        rpm -q httpd
      register: rpm_check
      failed_when: rpm_check.rc == 1
    • The failed_when variable uses the return code to override how Ansible determines that the task has failed.
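    • Note that failed_when accepts any Jinja2 expression, not only return-code tests. As an illustration (a hedged sketch, not part of the lab solution), the same check could key off the command's output instead:

```yaml
- shell: rpm -q httpd
  register: rpm_check
  failed_when: "'is not installed' in rpm_check.stdout"
```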
  2. Create a block that contains the tasks for configuring the files:
    1. Start with a task that uses the get_url module to retrieve the Apache SSL configuration file.
    2. Use the https_uri variable for the url.
    3. Use /etc/httpd/conf.d/ for the remote path on the managed host.
      - block:
        - get_url:
            url: "{{ https_uri }}"
            dest: /etc/httpd/conf.d/
      
  3. Create the /etc/httpd/conf.d/ssl remote directory with a mode of 0755:
      - file:
          path: /etc/httpd/conf.d/ssl
          state: directory
          mode: 0755
    
  4. Create the /var/www/html/logs remote directory with a mode of 0755:
      - file:
          path: /var/www/html/logs
          state: directory
          mode: 0755
    
  5. Confirm that the /etc/httpd/conf.d/ssl.conf file exists and capture the output in the ssl_file variable using the register statement:
      - stat:
          path: /etc/httpd/conf.d/ssl.conf
        register: ssl_file
  6. Create the task that renames the /etc/httpd/conf.d/ssl.conf file as /etc/httpd/conf.d/ssl.conf.bak after evaluating the content of the ssl_file variable:
      - shell:
          mv /etc/httpd/conf.d/ssl.conf /etc/httpd/conf.d/ssl.conf.bak
        when: ssl_file.stat.exists
  7. Create the task that uses the unarchive module to retrieve the remote SSL configuration files:
    1. Use the ssl_uri variable for the source and /etc/httpd/conf.d/ssl/ as the destination.
    2. Instruct the task to notify the restart_services handler when the file has been copied.
        - unarchive:
            src: "{{ ssl_uri }}"
            dest: /etc/httpd/conf.d/ssl/
            copy: no
          notify:
            - restart_services
  8. Add the task that creates the index.html file under /var/www/html/ on the managed host with the following content:
    serverb.lab.example.com (172.25.250.11) has been customized by Ansible
    1. Use the ansible_fqdn and ansible_default_ipv4.address facts to create the page:
        - copy:
            content: "{{ ansible_fqdn }} ({{ ansible_default_ipv4.address }}) has been customized by Ansible\n"
            dest: /var/www/html/index.html
      
  9. Finally, make sure the block only runs if the httpd package is installed by adding a when clause that parses the return code contained in the rpm_check registered variable:
      when:
        rpm_check.rc == 0
  10. Confirm that the completed file looks like this:
    ---
    - shell:
        rpm -q httpd
      register: rpm_check
      failed_when: rpm_check.rc == 1
    
    - block:
      - get_url:
          url: "{{ https_uri }}"
          dest: /etc/httpd/conf.d/
    
      - file:
          path: /etc/httpd/conf.d/ssl
          state: directory
          mode: 0755
    
      - file:
          path: /var/www/html/logs
          state: directory
          mode: 0755
    
      - stat:
          path: /etc/httpd/conf.d/ssl.conf
        register: ssl_file
    
      - shell:
          mv /etc/httpd/conf.d/ssl.conf /etc/httpd/conf.d/ssl.conf.bak
        when: ssl_file.stat.exists
    
      - unarchive:
          src: "{{ ssl_uri }}"
          dest: /etc/httpd/conf.d/ssl/
          copy: no
        notify:
          - restart_services
    
      - copy:
          content: "{{ ansible_fqdn }} ({{ ansible_default_ipv4.address }}) has been customized by Ansible\n"
          dest: /var/www/html/index.html
    
      when:
        rpm_check.rc == 0
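
Note that a when clause attached to a block is evaluated for every task inside that block. The pattern in isolation looks like this (a minimal sketch with a placeholder variable, shown only for illustration):

```yaml
- block:
    - debug:
        msg: "Runs only when run_block is true"
  when: run_block | default(false)
```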

5.4. Define Firewall Tasks

In this section, you create the configure_firewall.yml task file, starting with a task that installs the package that the fw_package variable defines (the variable is set in the main playbook). You create a task that starts the service specified by the fw_service variable and a task that adds firewall rules for the http and https services using a loop. The rules need to be applied immediately and persistently. Tag all tasks with the production tag.

  1. Create the configure_firewall.yml file and define the task that uses the yum module to install the latest version of the firewall service:
    ---
    - yum:
        name: "{{ fw_package }}"
        state: latest
      tags: production
  2. Add the task that starts the firewall service using the fw_service variable:
    - service:
        name: "{{ fw_service }}"
        state: started
      tags: production
  3. Add the task that uses the firewalld module to add the http and https service rules to the firewall and use a loop to make sure the rules are applied immediately and persistently:
    - firewalld:
        service: "{{ item }}"
        immediate: true
        permanent: true
        state: enabled
      with_items:
        - http
        - https
      tags: production
  4. Confirm that the completed file looks like this:
    ---
    - yum:
        name: "{{ fw_package }}"
        state: latest
      tags: production
    
    - service:
        name: "{{ fw_service }}"
        state: started
      tags: production
    
    - firewalld:
        service: "{{ item }}"
        immediate: true
        permanent: true
        state: enabled
      with_items:
        - http
        - https
      tags: production
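
Because every task in this file carries the production tag, you can later select or exclude them on the command line. For example (illustrative invocations; the lab itself applies the tag at the include level in the next section):

```
[student@workstation lab-task-control]$ ansible-playbook playbook.yml --list-tags
[student@workstation lab-task-control]$ ansible-playbook playbook.yml --tags production
[student@workstation lab-task-control]$ ansible-playbook playbook.yml --skip-tags production
```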

5.5. Define Main Playbook

In this exercise, you create the playbook.yml playbook and target the hosts in the webservers group. You define a block that imports the three task files you just created.

For the task that imports the install_packages.yml task file, you define the following variables:

  • memory with a value of 256
  • web_package with a value of httpd
  • ssl_package with a value of mod_ssl
  • web_service with a value of httpd

For the task that imports the configure_web.yml task file, you define the following variables:

  • https_uri with a value of http://materials.example.com/task_control/https.conf
  • ssl_uri with a value of http://materials.example.com/task_control/ssl.tar.gz

For the task that imports the configure_firewall.yml task file, you add a condition to only import the tasks tagged with the production tag and define the fw_package and fw_service variables with a value of firewalld.

In the rescue clause for the block, you define a task that installs the httpd package and notifies the restart_services handler to start the service upon installation. You add an always clause that uses the shell module to query the status of the httpd service using systemctl. Finally, you define the restart_services handler to restart both the httpd and firewalld services using a loop.

  1. Create the playbook.yml playbook and start by targeting the hosts in the webservers host group:
    ---
    - hosts: webservers
    
  2. Create a block for importing the three task files using the include statement:
    1. For the first include, use install_packages.yml as the name of the file to import.
    2. Define the four variables required by the file:
      • memory, with a value of 256
      • web_package, with a value of httpd
      • ssl_package, with a value of mod_ssl
      • web_service, with a value of httpd
          tasks:
            - block:
              - include: install_packages.yml
                vars:
                  memory: 256
                  web_package: httpd
                  ssl_package: mod_ssl
                  web_service: httpd
  3. For the second include, use configure_web.yml as the name of the file to import:
    1. Define the two variables required by the file:
      • https_uri, with a value of http://materials.example.com/task_control/https.conf
      • ssl_uri, with a value of http://materials.example.com/task_control/ssl.tar.gz
            - include: configure_web.yml
              vars:
                https_uri: http://materials.example.com/task_control/https.conf
                ssl_uri: http://materials.example.com/task_control/ssl.tar.gz
  4. For the third include, use configure_firewall.yml as the name of the file to import:
    1. Define the variables required by the file:
      • fw_package, with a value of firewalld
      • fw_service, with a value of firewalld
    2. Import only the tasks that are tagged with production.
            - include: configure_firewall.yml
              vars:
                fw_package: firewalld
                fw_service: firewalld
              tags: production
  5. Create the rescue clause for the block:
    1. Make sure that it installs the latest version of the httpd package.
    2. Have it notify the restart_services handler upon the package installation.
    3. Add a debug statement with Failed to import and run all the tasks; installing the web server manually as the message.
            rescue:
            - yum:
                name: httpd
                state: latest
              notify:
                - restart_services
      
            - debug:
                msg: "Failed to import and run all the tasks; installing the web server manually"
  6. Create an always clause that uses the shell module to query the status of the httpd service using systemctl:
          always:
          - shell:
              cmd: "systemctl status httpd"
    
  7. Define the restart_services handler that uses a loop to restart both the firewalld and httpd services:
      handlers:
        - name: restart_services
          service:
            name: "{{ item }}"
            state: restarted
          with_items:
            - httpd
            - firewalld
  8. Confirm that the completed playbook looks like this:
    ---
    - hosts: webservers
      tasks:
        - block:
          - include: install_packages.yml
            vars:
              memory: 256
              web_package: httpd
              ssl_package: mod_ssl
              web_service: httpd
          - include: configure_web.yml
            vars:
              https_uri: http://materials.example.com/task_control/https.conf
              ssl_uri: http://materials.example.com/task_control/ssl.tar.gz
          - include: configure_firewall.yml
            vars:
              fw_package: firewalld
              fw_service: firewalld
            tags: production
    
          rescue:
          - yum:
              name: httpd
              state: latest
            notify:
              - restart_services
    
          - debug:
              msg: "Failed to import and run all the tasks; installing the web server manually"
    
          always:
          - shell:
              cmd: "systemctl status httpd"
    
      handlers:
        - name: restart_services
          service:
            name: "{{ item }}"
            state: restarted
          with_items:
            - httpd
            - firewalld

5.6. Execute Playbook

In this section, you run the playbook.yml playbook to set up the environment. You ensure that the web server has been correctly configured by querying the home page of the web server using the curl command with the -k option to allow insecure connections.

  1. Run the playbook.yml playbook:
    [student@workstation lab-task-control]$ ansible-playbook playbook.yml
    PLAY ***************************************************************************
    
    TASK [setup] *******************************************************************
    ok: [serverb.lab.example.com]
    ...
    RUNNING HANDLER [restart_services] *********************************************
    changed: [serverb.lab.example.com] => (item=httpd)
    changed: [serverb.lab.example.com] => (item=firewalld)
    
    PLAY RECAP *********************************************************************
    serverb.lab.example.com    : ok=19   changed=13   unreachable=0    failed=0
  2. Review the output to confirm that the playbook does the following:
    • Imports and runs the tasks that install the web server packages only if there is enough memory on the managed host.
    • Imports and runs the tasks that configure SSL for the web server.
    • Imports and runs the tasks that create the firewall rule for the web server to be reachable.
  3. Confirm that the web page is available:
    [student@workstation lab-task-control]$ curl -k https://serverb.lab.example.com
    serverb.lab.example.com (172.25.250.11) has been customized by Ansible
    • The -k option allows you to bypass strict SSL certificate checking.

5.7. Evaluate Your Progress

  1. Grade your work:
    [student@workstation ~]$ lab task-control grade
  2. Correct any reported failures.
  3. Rerun the script until successful.

5.8. Clean Up

  1. Clean up the lab environment:
    [student@workstation ~]$ lab task-control cleanup

Ansible – Variables Lab

Variables and Inclusions Lab

Goals
  • Define variables in a playbook and create tasks that include defined variables
  • Gather facts from a host and create tasks that use the gathered facts
  • Define variables and tasks in separate files and use the files in playbooks

1. Manage Variables

In this exercise, you define and use variables in a playbook. You create a playbook that installs the Apache web server and opens the ports for the service to be reachable. The playbook queries the web server to ensure that it is up and running.

1.1. Set Up Environment

  1. Log in to workstation as student and run the lab setup script:
    [student@workstation ~]$ lab manage-variables-playbooks setup
    • The script creates the dev-vars-playbook working directory and populates it with an Ansible configuration file and host inventory.
  2. Change to the ~/dev-vars-playbook directory:
    [student@workstation ~]$ cd dev-vars-playbook
    [student@workstation dev-vars-playbook]$

1.2. Create Playbook

  1. First, create the playbook.yml playbook and define the following variables in the vars section:
    • web_pkg: Defines the name of the package to install for the web server
    • firewall_pkg: Defines the name of the firewall package
    • web_service: Defines the name of the web service to manage
    • firewall_service: Defines the name of the firewall service to manage
    • python_pkg: Defines a package to be installed for the uri module
    • rule: Defines the service to open
      ---
      - name: Install Apache and start the service
        hosts: webserver
        vars:
          web_pkg: httpd
          firewall_pkg: firewalld
          web_service: httpd
          firewall_service: firewalld
          python_pkg: python-httplib2
          rule: http
  2. Create the tasks block and the first task, which uses the yum module to install the required packages:
      tasks:
        - name: Install the required packages
          yum:
            name:
              - "{{ web_pkg }}"
              - "{{ firewall_pkg }}"
              - "{{ python_pkg }}"
            state: latest
  3. Create two tasks to start and enable the httpd and firewalld services:
        - name: Start and enable the {{ firewall_service }} service
          service:
            name: "{{ firewall_service }}"
            enabled: true
            state: started
    
        - name: Start and enable the {{ web_service }} service
          service:
            name: "{{ web_service }}"
            enabled: true
            state: started
  4. Add a task that creates content in /var/www/html/index.html:
        - name: Create web content to be served
          copy:
            content: "Example web content"
            dest: /var/www/html/index.html
  5. Add a task that uses the firewalld module to add a rule for the web service:
        - name: Open the port for {{ rule }}
          firewalld:
            service: "{{ rule }}"
            permanent: true
            immediate: true
            state: enabled
  6. Create a new play that queries the web service to ensure that everything is configured correctly:
    1. Make sure it runs on localhost.
    2. Use the uri module to check a URL.
    3. For this task, check for a status code of 200 to confirm that the server is running and configured properly.
      - name: Verify the Apache service
        hosts: localhost
        tasks:
          - name: Ensure the webserver is reachable
            uri:
              url: http://servera.lab.example.com
              status_code: 200
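    • The uri task fails the play when the response code differs from status_code. If you also want to inspect the page that came back, the module can capture the body; a hedged extension of the task above, not required by the lab:

```yaml
- name: Ensure the webserver is reachable
  uri:
    url: http://servera.lab.example.com
    status_code: 200
    return_content: yes
  register: web_result

- name: Display the returned body
  debug:
    var: web_result.content
```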

1.3. Check and Run Playbook

  1. Confirm that the playbook appears as follows and that all tasks are defined:
    ---
    - name: Install Apache and start the service
      hosts: webserver
      vars:
        web_pkg: httpd
        firewall_pkg: firewalld
        web_service: httpd
        firewall_service: firewalld
        python_pkg: python-httplib2
        rule: http
    
      tasks:
        - name: Install the required packages
          yum:
            name:
              - "{{ web_pkg }}"
              - "{{ firewall_pkg }}"
              - "{{ python_pkg }}"
            state: latest
    
        - name: Start and enable the {{ firewall_service }} service
          service:
            name: "{{ firewall_service }}"
            enabled: true
            state: started
    
        - name: Start and enable the {{ web_service }} service
          service:
            name: "{{ web_service }}"
            enabled: true
            state: started
    
        - name: Create web content to be served
          copy:
            content: "Example web content"
            dest: /var/www/html/index.html
    
        - name: Open the port for {{ rule }}
          firewalld:
            service: "{{ rule }}"
            permanent: true
            immediate: true
            state: enabled
    
    - name: Verify the Apache service
      hosts: localhost
      tasks:
        - name: Ensure the webserver is reachable
          uri:
            url: http://servera.lab.example.com
            status_code: 200
  2. Run the playbook and watch the output:
    [student@workstation dev-vars-playbook]$ ansible-playbook playbook.yml
    
    PLAY [Install Apache and start the service] ************************************
    
    TASK [setup] *******************************************************************
    ok: [servera.lab.example.com]
    
    TASK [Install the required packages] *******************************************
    changed: [servera.lab.example.com]
    
    TASK [Start and enable the firewalld service] **********************************
    changed: [servera.lab.example.com]
    
    TASK [Start and enable the httpd service] **************************************
    changed: [servera.lab.example.com]
    
    TASK [Create web content to be served] *****************************************
    changed: [servera.lab.example.com]
    
    TASK [Open the port for http] **************************************************
    changed: [servera.lab.example.com]
    
    PLAY [Verify the Apache service] ***********************************************
    
    TASK [setup] *******************************************************************
    ok: [localhost]
    
    TASK [Ensure the webserver is reachable] ***************************************
    ok: [localhost]
    
    PLAY RECAP *********************************************************************
    localhost                  : ok=2    changed=0    unreachable=0    failed=0
    servera.lab.example.com    : ok=6    changed=5    unreachable=0    failed=0
    • Note that Ansible starts by installing the packages, starting and enabling the services, and making sure the web server is reachable.

1.4. Evaluate Your Progress

  1. Grade your work:
    [student@workstation ~]$ lab manage-variables-playbooks grade
  2. Correct any reported failures.
  3. Rerun the script until successful.

1.5. Clean Up

  1. Undo the changes made to servera:
    [student@workstation ~]$ lab manage-variables-playbooks cleanup

2. Manage Facts

In this exercise, you gather Ansible facts from a managed host and use them in playbooks.

2.1. Set Up Environment

  1. Log in to workstation as student and run the lab setup script:
    [student@workstation ~]$ lab manage-variables-facts setup
    • The script creates the dev-vars-facts working directory and populates it with an Ansible configuration file and host inventory.
  2. Change to the ~/dev-vars-facts directory:
    [student@workstation ~]$ cd dev-vars-facts
    [student@workstation dev-vars-facts]$

2.2. Work With Facts

  1. Using the Ansible setup module, run an ad hoc command to retrieve the facts for all of the servers in the webserver group:
    [student@workstation dev-vars-facts]$ ansible webserver -m setup
    ... Output omitted ...
    servera.lab.example.com | SUCCESS => {
        "ansible_facts": {
            "ansible_all_ipv4_addresses": [
                "172.25.250.10"
            ],
            "ansible_all_ipv6_addresses": [
                "fe80::5054:ff:fe00:fa0a"
            ],
    ... Output omitted ...
    • The output displays all of the facts gathered for servera.lab.example.com in JSON format.
  2. Review the variables displayed.
  3. Filter the facts matching the ansible_user expression and append a wildcard to match all facts starting with ansible_user:
    [student@workstation dev-vars-facts]$ ansible webserver -m setup -a 'filter=ansible_user*'
    servera.lab.example.com | SUCCESS => {
        "ansible_facts": {
            "ansible_user_dir": "/root",
            "ansible_user_gecos": "root",
            "ansible_user_gid": 0,
            "ansible_user_id": "root",
            "ansible_user_shell": "/bin/bash",
            "ansible_user_uid": 0,
            "ansible_userspace_architecture": "x86_64",
            "ansible_userspace_bits": "64"
        },
        "changed": false
    }
  4. Create a fact file named custom.fact with the following content:
    [general]
    package = httpd
    service = httpd
    state = started
    • This defines the package to install and the service to start on servera.
  5. Create a setup_facts.yml playbook to create the /etc/ansible/facts.d remote directory and save the custom.fact file to it.
    ---
    - name: Install remote facts
      hosts: webserver
      vars:
        remote_dir: /etc/ansible/facts.d
        facts_file: custom.fact
      tasks:
        - name: Create the remote directory
          file:
            state: directory
            recurse: yes
            path: "{{ remote_dir }}"
        - name: Install the new facts
          copy:
            src: "{{ facts_file }}"
            dest: "{{ remote_dir }}"
  6. Using the setup module, run an ad hoc command to display only the ansible_local section, which contains user-defined facts:
    [student@workstation dev-vars-facts]$ ansible webserver -m setup -a 'filter=ansible_local'
    servera.lab.example.com | SUCCESS => {
        "ansible_facts": {},
        "changed": false
    }
    • There should not be any custom facts at this point.
  7. Run the setup_facts.yml playbook:
    [student@workstation dev-vars-facts]$ ansible-playbook setup_facts.yml
    
    PLAY [Install remote facts] ****************************************************
    
    TASK [setup] *******************************************************************
    ok: [servera.lab.example.com]
    
    TASK [Create the remote directory] *********************************************
    changed: [servera.lab.example.com]
    
    TASK [Install the new facts] ***************************************************
    changed: [servera.lab.example.com]
    
    PLAY RECAP *********************************************************************
    servera.lab.example.com    : ok=3    changed=2    unreachable=0    failed=0
  8. Verify that the new facts have been properly installed:
    [student@workstation dev-vars-facts]$ ansible webserver -m setup -a 'filter=ansible_local'
    servera.lab.example.com | SUCCESS => {
        "ansible_facts": {
            "ansible_local": {
                "custom": {
                    "general": {
                        "package": "httpd",
                        "service": "httpd",
                        "state": "started"
                    }
                }
            }
        },
        "changed": false
    }
    • Expect the custom facts to appear.

2.3. Use Facts to Configure servera

It is now possible to create the main playbook that uses both default and custom facts to configure servera. Over the next several steps, you add to the playbook file.

  1. Start the playbook.yml playbook file with the following:
    ---
    - name: Install Apache and start the service
      hosts: webserver
  2. Create the first task, which installs the httpd package, using the user fact for the name of the package:
      tasks:
        - name: Install the required package
          yum:
            name: "{{ ansible_local.custom.general.package }}"
            state: latest
  3. Create another task that uses the custom fact to start the httpd service:
        - name: Start the service
          service:
            name: "{{ ansible_local.custom.general.service }}"
            state: "{{ ansible_local.custom.general.state }}"
  4. Review the playbook and ensure that all of the tasks are defined:
    ---
    - name: Install Apache and start the service
      hosts: webserver
    
      tasks:
        - name: Install the required package
          yum:
            name: "{{ ansible_local.custom.general.package }}"
            state: latest
    
        - name: Start the service
          service:
            name: "{{ ansible_local.custom.general.service }}"
            state: "{{ ansible_local.custom.general.state }}"
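
If the custom facts file were ever missing from a host, referencing ansible_local.custom would raise an undefined-variable error. An optional, hedged way to guard the task (not required by the lab):

```yaml
- name: Install the required package
  yum:
    name: "{{ ansible_local.custom.general.package }}"
    state: latest
  when: ansible_local.custom is defined
```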

2.4. Run Playbook

  1. Before running the playbook, use an ad hoc command to verify that the httpd service is not currently running on servera:
    [student@workstation dev-vars-facts]$ ansible servera.lab.example.com -m command -a 'systemctl status httpd'
    servera.lab.example.com | FAILED | rc=3 >>
    ● httpd.service
       Loaded: not-found (Reason: No such file or directory)
       Active: inactive (dead)
    ... Output omitted ...
  2. Run the playbook and watch the output as Ansible starts by installing the package, then enabling the service:
    [student@workstation dev-vars-facts]$ ansible-playbook playbook.yml
    
    PLAY [Install Apache and start the service] ************************************
    
    TASK [setup] *******************************************************************
    ok: [servera.lab.example.com]
    
    TASK [Install the required package] ********************************************
    changed: [servera.lab.example.com]
    
    TASK [Start the service] *******************************************************
    changed: [servera.lab.example.com]
    
    PLAY RECAP *********************************************************************
    servera.lab.example.com    : ok=3    changed=2    unreachable=0    failed=0
  3. Use an ad hoc command to check if the httpd service is now running on servera:
    [student@workstation dev-vars-facts]$ ansible servera.lab.example.com -m command -a 'systemctl status httpd'
    servera.lab.example.com | SUCCESS | rc=0 >>
    ● httpd.service - The Apache HTTP Server
       Loaded: loaded (/usr/lib/systemd/system/httpd.service; disabled; vendor preset: disabled)
       Active: active (running) since Mon 2016-05-16 17:17:20 PDT; 12s ago
         Docs: man:httpd(8)
               man:apachectl(8)
     Main PID: 32658 (httpd)
       Status: "Total requests: 0; Current requests/sec: 0; Current traffic:   0 B/sec"
       CGroup: /system.slice/httpd.service
    ... Output omitted ...

2.5. Evaluate Your Progress

  1. Grade your work:
    [student@workstation ~]$ lab manage-variables-facts grade
  2. Correct any reported failures.
  3. Rerun the script until successful.

2.6. Clean Up

  1. Clean up the lab environment:
    [student@workstation ~]$ lab manage-variables-facts cleanup

3. Manage Inclusions

In this exercise, you manage inclusions in Ansible playbooks. You create a task file, variable file, and playbook. The variable file defines, in YAML format, a variable used by the playbook. The task file defines the required tasks and includes variables that are passed later on as arguments.

3.1. Set Up Environment

  1. Log in to workstation as student and run the lab setup script:
    [student@workstation ~]$ lab manage-variables-inclusions setup
    • The script creates the dev-vars-inclusions working directory.
  2. Change to the ~/dev-vars-inclusions directory:
    [student@workstation ~]$ cd dev-vars-inclusions
    [student@workstation dev-vars-inclusions]$

3.2. Create Task File

  1. Create and change to the tasks directory:
    [student@workstation dev-vars-inclusions]$ mkdir tasks && cd tasks
    [student@workstation tasks]$
  2. In the tasks directory, create the environment.yml task file:
    1. Define the two tasks that install and start the web server.
    2. Use the package variable for the package name, service for the service name, and svc_state for the service state.
      ---
        - name: Install the {{ package }} package
          yum:
            name: "{{ package }}"
            state: latest
        - name: Start the {{ service }} service
          service:
            name: "{{ service }}"
            state: "{{ svc_state }}"
  3. Change back to the main project directory:
    [student@workstation tasks]$ cd ..
    [student@workstation dev-vars-inclusions]$

3.3. Create Variable File

  1. Create and change to the vars directory:
    [student@workstation dev-vars-inclusions]$ mkdir vars
    [student@workstation dev-vars-inclusions]$ cd vars
    [student@workstation vars]$
  2. Create the variables.yml variable file with the following content:
    ---
    firewall_pkg: firewalld
    • The file defines the firewall_pkg variable in YAML format.
  3. Change back to the main project directory:
    [student@workstation vars]$ cd ..
    [student@workstation dev-vars-inclusions]$

3.4. Create Main Playbook

In this section, you create and edit the main playbook, playbook.yml, which imports the tasks and variables, and installs and configures the firewalld service.

  1. Add the webserver host group and define a rule variable with a value of http:
    ---
    - hosts: webserver
      vars:
        rule: http
  2. Define the first task with the include_vars module and the variables.yml variable file:
      tasks:
        - name: Include the variables from the YAML file
          include_vars: vars/variables.yml
    • The include_vars module imports extra variables that are used by other tasks in the playbook.
  3. Define a task that uses the include module to include the environment.yml task file:
    1. Because the task file uses three variables that are not defined elsewhere in the playbook, include a vars block.
    2. Set three variables in the vars section:
          - name: Include the environment file and set the variables
            include: tasks/environment.yml
            vars:
              package: httpd
              service: httpd
              svc_state: started
  4. Create a task that installs the firewalld package using the firewall_pkg variable:
        - name: Install the firewall
          yum:
            name: "{{ firewall_pkg }}"
            state: latest
  5. Create a task that starts the firewalld service:
        - name: Start the firewall
          service:
            name: firewalld
            state: started
            enabled: true
  6. Create a task that adds a firewall rule for the HTTP service using the rule variable:
        - name: Open the port for {{ rule }}
          firewalld:
            service: "{{ rule }}"
            immediate: true
            permanent: true
            state: enabled
  7. Add a task that creates the index.html file for the web server using the copy module:
    1. Create the file with the Ansible ansible_fqdn fact, which returns the fully qualified domain name.
    2. Include a time stamp in the file using an Ansible fact.
          - name: Create index.html
            copy:
              content: "{{ ansible_fqdn }} has been customized using Ansible on the {{ ansible_date_time.date }}\n"
              dest: /var/www/html/index.html
    3. Confirm that playbook.yml appears as follows:
      ---
      - hosts: webserver
        vars:
          rule: http
        tasks:
          - name: Include the variables from the YAML file
            include_vars: vars/variables.yml
      
          - name: Include the environment file and set the variables
            include: tasks/environment.yml
            vars:
              package: httpd
              service: httpd
              svc_state: started
      
          - name: Install the firewall
            yum:
              name: "{{ firewall_pkg }}"
              state: latest
      
          - name: Start the firewall
            service:
              name: firewalld
              state: started
              enabled: true
      
          - name: Open the port for {{ rule }}
            firewalld:
              service: "{{ rule }}"
              immediate: true
              permanent: true
              state: enabled
      
          - name: Create index.html
            copy:
              content: "{{ ansible_fqdn }} has been customized using Ansible on the {{ ansible_date_time.date }}\n"
              dest: /var/www/html/index.html

3.5. Run Playbook

  1. Run the playbook and watch the output:
    [student@workstation dev-vars-inclusions]$ ansible-playbook playbook.yml
    PLAY ***********************************************************************
    
    TASK [setup] ***************************************************************
    ok: [servera.lab.example.com]
    
    TASK [Include the variables from the YAML file] ****************************
    ok: [servera.lab.example.com]
    
    TASK [Include the environment file and set the variables] ******************
    included: /home/student/dev-vars-inclusions/tasks/environment.yml
              for servera.lab.example.com
    
    TASK [Install the httpd package] *******************************************
    changed: [servera.lab.example.com]
    
    TASK [Start the httpd service] *********************************************
    changed: [servera.lab.example.com]
    
    TASK [Install the firewall] ************************************************
    changed: [servera.lab.example.com]
    
    TASK [Start the firewall] **************************************************
    changed: [servera.lab.example.com]
    
    TASK [Open the port for http] **********************************************
    changed: [servera.lab.example.com]
    
    TASK [Create index.html] ***************************************************
    changed: [servera.lab.example.com]
    
    PLAY RECAP *****************************************************************
    servera.lab.example.com    : ok=9     changed=4    unreachable=0    failed=0
    • Note that Ansible includes the environment.yml task file and runs its tasks first, then continues with the tasks defined in the main playbook.
  2. Use curl to confirm that the web server is reachable from workstation:
    [student@workstation dev-vars-inclusions]$ curl http://servera.lab.example.com
    servera.lab.example.com has been customized using Ansible on the 2016-03-31
    • You see this output because the index.html file was created.

3.6. Evaluate Your Progress

  1. Grade your work:
    [student@workstation ~]$ lab manage-variables-inclusions grade
  2. Correct any reported failures.
  3. Rerun the script until successful.

3.7. Clean Up

  1. Clean up the lab environment:
    [student@workstation ~]$ lab manage-variables-inclusions cleanup

4. Manage Variables and Inclusions

In this exercise, you deploy a database server as well as a web server, using a combination of Ansible modules, custom facts, variables, and included tasks.

4.1. Set Up Environment

  1. From workstation.lab.example.com, open a new terminal and run the setup script:
    [student@workstation ~]$ lab manage-variables setup
    • The script creates the lab-managing-vars project directory and populates it with an Ansible configuration file and host inventory.
  2. Create all of the files related to this lab in, or below, the lab-managing-vars project directory:
    [student@workstation ~]$ cd lab-managing-vars
    [student@workstation lab-managing-vars]$

4.2. Define Custom Facts

In this section, you create a facts file in INI format called custom.fact with a section called packages that contains two facts: db_package with a value of mariadb-server, and web_package with a value of httpd. You also create a section called services with two facts: db_service with a value of mariadb, and web_service with a value of httpd. You then define a playbook, setup_facts.yml, that installs the facts on serverb.

  1. Create the custom.fact file with the following content:
    [packages]
    db_package = mariadb-server
    web_package = httpd
    
    [services]
    db_service = mariadb
    web_service = httpd
  2. Create the setup_facts.yml playbook to use the file and copy modules to install custom facts on the serverb.lab.example.com managed host:
    ---
    - name: Install remote facts
      hosts: lamp
      vars:
        remote_dir: /etc/ansible/facts.d
        facts_file: custom.fact
      tasks:
        - name: Create the remote directory
          file:
            state: directory
            recurse: yes
            path: "{{ remote_dir }}"
        - name: Install the new facts
          copy:
            src: "{{ facts_file }}"
            dest: "{{ remote_dir }}"

4.3. Install Facts

  1. Run the playbook to install the custom facts and verify that the facts are available as Ansible facts:
    [student@workstation lab-managing-vars]$ ansible-playbook setup_facts.yml
    PLAY [Install remote facts] ************************************************
    
    TASK [setup] ***************************************************************
    changed: [serverb.lab.example.com]
    
    TASK [Create the remote directory] *****************************************
    ok: [serverb.lab.example.com]
    
    TASK [Install the new facts] ***********************************************
    changed: [serverb.lab.example.com]
    
    PLAY RECAP *****************************************************************
    serverb.lab.example.com   : ok=3    changed=2    unreachable=0    failed=0
  2. Verify that the newly created facts can be retrieved:
    [student@workstation lab-managing-vars]$ ansible lamp -m setup -a 'filter=ansible_local*'
    serverb.lab.example.com | SUCCESS => {
        "ansible_facts": {
            "ansible_local": {
                "custom": {
                    "packages": {
                        "db_package": "mariadb-server",
                        "web_package": "httpd"
                    },
                    "services": {
                        "db_service": "mariadb",
                        "web_service": "httpd"
                    }
                }
            }
        },
        "changed": false
    }
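
For reference, the nested structure above maps directly onto the INI file: each [section] becomes a dictionary under ansible_local.custom, and each key = value pair becomes a fact. A minimal Python sketch of that mapping (illustrative only; Ansible's local fact loader performs this conversion internally):

```python
# Parse the same INI content that custom.fact contains and build the
# nested dictionary that appears under ansible_local.custom.
import configparser

ini = """
[packages]
db_package = mariadb-server
web_package = httpd

[services]
db_service = mariadb
web_service = httpd
"""

parser = configparser.ConfigParser()
parser.read_string(ini)
custom = {section: dict(parser[section]) for section in parser.sections()}
print(custom["packages"]["db_package"])   # mariadb-server
print(custom["services"]["web_service"])  # httpd
```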

4.4. Define Variables

In this section, you create a directory for variables, called vars, and define a YAML variable file, called main.yml, that defines a new variable called web_root, with a value of /var/www/html.

  1. Create the variables directory, vars, inside the project directory:
    [student@workstation lab-managing-vars]$ mkdir vars
  2. Create the vars/main.yml variables file with the following content:
    ---
    web_root: /var/www/html

4.5. Define Tasks

In this section, you create the tasks subdirectory, and then define a task file, called main.yml, that instructs Ansible to install the web server and database packages using facts gathered by Ansible from serverb.lab.example.com. You also have main.yml start the two services.

  1. Create the tasks subdirectory:
    [student@workstation lab-managing-vars]$ mkdir tasks
  2. Create the tasks/main.yml task file using the custom Ansible facts for the names of the services to manage:
    ---
      - name: Install and start the database and web servers
        yum:
          name:
            - "{{ ansible_local.custom.packages.db_package }}"
            - "{{ ansible_local.custom.packages.web_package }}"
          state: latest
    
      - name: Start the database service
        service:
          name: "{{ ansible_local.custom.services.db_service }}"
          state: started
          enabled: true
    
      - name: Start the web service
        service:
          name: "{{ ansible_local.custom.services.web_service }}"
          state: started
          enabled: true

4.6. Define Main Playbook

In this section, you create the main playbook, playbook.yml, which does the following, in order:

  • Target the lamp hosts group
  • Define a new variable, firewall, with a value of firewalld
  • Define a task that includes the main.yml variable file
  • Define a task that includes the tasks defined in the tasks file
  • Define a task for installing the latest version of the firewall package
  • Define a task for starting the firewall service
  • Define a task for opening TCP port 80 permanently
  • Define a task that uses the copy module to create the index.html page in the directory defined by the web_root variable with the following content:
    serverb.lab.example.com (172.25.250.11) has been customized by Ansible
    • Note that the host name and the IP address must come from Ansible facts.

Follow these steps to create playbook.yml:

  1. Create playbook.yml for the hosts in the lamp hosts group and define the firewall variable:
    ---
    - hosts: lamp
      vars:
        firewall: firewalld
  2. Add the tasks block and define the first task that includes the vars/main.yml variables file:
      tasks:
        - name: Include the variable file
          include_vars: vars/main.yml
  3. Create the task that imports the tasks/main.yml tasks file:
        - name: Include the tasks
          include: tasks/main.yml
  4. Create the tasks that install the firewall, start the service, and open port 80 (the immediate: true option applies the rule right away):
        - name: Install the firewall
          yum:
            name: "{{ firewall }}"
            state: latest
    
        - name: Start the firewall
          service:
            name: "{{ firewall }}"
            state: started
            enabled: true
    
        - name: Open the port for the web server
          firewalld:
            service: http
            state: enabled
            immediate: true
            permanent: true
  5. Create the task that uses the copy module to create a custom main page, index.html, using the web_root variable defined in the variables file for the home directory of the web server:
        - name: Create index.html
          copy:
            content: "{{ ansible_fqdn }}({{ ansible_default_ipv4.address }}) has been customized by Ansible\n"
            dest: "{{ web_root }}/index.html"
  6. Confirm that the tree appears as follows:
    [student@workstation lab-managing-vars]$ tree
    .
    ├── ansible.cfg
    ├── custom.fact
    ├── inventory
    ├── playbook.yml
    ├── setup_facts.yml
    ├── tasks
    │   └── main.yml
    └── vars
        └── main.yml
    
    2 directories, 7 files
  7. Confirm that the main playbook appears as follows:
    ---
    - hosts: lamp
      vars:
        firewall: firewalld
    
      tasks:
        - name: Include the variable file
          include_vars: vars/main.yml
    
        - name: Include the tasks
          include: tasks/main.yml
    
        - name: Install the firewall
          yum:
            name: "{{ firewall }}"
            state: latest
    
        - name: Start the firewall
          service:
            name: "{{ firewall }}"
            state: started
            enabled: true
    
        - name: Open the port for the web server
          firewalld:
            service: http
            state: enabled
            immediate: true
            permanent: true
    
        - name: Create index.html
          copy:
            content: "{{ ansible_fqdn }}({{ ansible_default_ipv4.address }}) has been customized by Ansible\n"
            dest: "{{ web_root }}/index.html"

4.7. Run Playbook and Test Deployment

  1. Run the playbook:
    [student@workstation lab-managing-vars]$ ansible-playbook playbook.yml
    PLAY ***********************************************************************
    
    ... Output omitted ...
    
    PLAY RECAP *****************************************************************
    serverb.lab.example.com    : ok=10    changed=5    unreachable=0    failed=0
  2. From workstation, use curl to ensure that the web server started successfully and is reachable:
    [student@workstation lab-managing-vars]$ curl http://serverb
    serverb.lab.example.com(172.25.250.11) has been customized by Ansible
    • This message indicates that the web server is installed, the firewall was updated with a new rule, and the included variable was successfully used.
  3. Use an ad hoc Ansible command to ensure that the mariadb service is running on serverb.lab.example.com:
    [student@workstation lab-managing-vars]$ ansible lamp -a 'systemctl status mariadb'
    serverb.lab.example.com | SUCCESS | rc=0 >>
    ● mariadb.service - MariaDB database server
       Loaded: loaded (/usr/lib/systemd/system/mariadb.service; disabled; vendor preset: disabled)
       Active: active (running) since Fri 2016-04-01 10:50:40 PDT; 7min ago
    ... Output omitted ...

4.8. Evaluate Your Progress

  1. Grade your work:
    [student@workstation ~]$ lab manage-variables grade
  2. Correct any reported failures.
  3. Rerun the script until successful.

4.9. Clean Up

  1. Clean up the lab environment:
    [student@workstation ~]$ lab manage-variables cleanup

Running Ansible

Ansible Usage Lab

In this lab, you learn how to use Ansible ad hoc commands and the ansible-playbook command line.

1. Use Ad Hoc Commands

In this exercise, you use Ansible ad hoc commands to do the following:

  • Structure a basic inventory of your managed hosts
  • Test their reachability
  • Install a web server
  • Start the web server

1.1. Access Environment

  1. Connect to the control node:
    # ssh your-sso-login@workstation-GUID.rhpds.opentlc.com
  2. Become the root user:
    # sudo -i

1.2. Create an Inventory File

  1. Edit the default inventory file, /etc/ansible/hosts:
    # vi /etc/ansible/hosts
  2. Add the following lines to assign servera to the web group and serverb to the sql group:
    [web]
    servera.example.com
    
    [sql]
    serverb.example.com
  3. Use an ad hoc command to test the inventory file:
    # ansible -m ping all
    servera.example.com | UNREACHABLE! => {
        "changed": false,
        "msg": "Failed to connect to the host via ssh: Warning: Permanently added 'servera.example.com' (ECDSA) to the list of known hosts.\r\nPermission denied (publickey,gssapi-keyex,gssapi-with-mic).\r\n",
        "unreachable": true
    }
    serverb.example.com | UNREACHABLE! => {
        "changed": false,
        "msg": "Failed to connect to the host via ssh: Warning: Permanently added 'serverb.example.com' (ECDSA) to the list of known hosts.\r\nPermission denied (publickey,gssapi-keyex,gssapi-with-mic).\r\n",
        "unreachable": true
    }
    • As you can see, the control node needs to be able to access the managed hosts using SSH, which is covered in the next section.
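
Incidentally, the inventory file you just created uses a simple INI layout: each [group] header introduces a list of bare host lines. A Python configparser sketch of that structure (illustration only; Ansible uses its own inventory parser):

```python
# Read an INI-style inventory; allow_no_value=True is needed because host
# lines are bare keys with no '= value' part.
import configparser

inv = configparser.ConfigParser(allow_no_value=True)
inv.read_string("""
[web]
servera.example.com

[sql]
serverb.example.com
""")

# Each group name maps to its list of hosts.
print({group: list(inv[group]) for group in inv.sections()})
```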

1.3. Propagate SSH Key to Managed Hosts

  1. From the root user home folder (/root), copy the root SSH key to servera and serverb. The root password for the servers is redhat.
    # ssh-copy-id -i .ssh/open servera.example.com
    /bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
    /bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
    root@servera.example.com's password:
    
    Number of key(s) added: 1
    
    Now try logging into the machine, with:   "ssh 'servera.example.com'"
    and check to make sure that only the key(s) you wanted were added.
    
    # ssh-copy-id -i .ssh/open serverb.example.com
    /bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
    /bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
    root@serverb.example.com's password:
    
    Number of key(s) added: 1
    
    Now try logging into the machine, with:   "ssh 'serverb.example.com'"
    and check to make sure that only the key(s) you wanted were added.
  2. Use an ad hoc command to test the inventory file again:
    # ansible -m ping all
    servera.example.com | SUCCESS => {
        "changed": false,
        "ping": "pong"
    }
    serverb.example.com | SUCCESS => {
        "changed": false,
        "ping": "pong"
    }
    • Expect the test to succeed this time.

1.4. Gather Facts

  1. Use an ad hoc command to get the list of facts available for servera:
    # ansible web -m setup
    servera.example.com | SUCCESS => {
        "ansible_facts": {
            "ansible_all_ipv4_addresses": [
                "192.168.1.21"
            ],
            "ansible_all_ipv6_addresses": [
                "fe80::2ec2:60ff:fe22:53a9"
            ],
            "ansible_architecture": "x86_64",
    
    [...omitted output...]
    
            "ansible_virtualization_role": "guest",
            "ansible_virtualization_type": "kvm",
            "module_setup": true
        },
        "changed": false
    }
    • This JSON-formatted output lists the facts that Ansible gathers from the host and can use in plays and templates.
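
Because the setup module's output is plain JSON, it can also be post-processed outside Ansible with standard tools. A minimal sketch using Python's json module on a trimmed, hypothetical payload:

```python
# Extract a single fact from a trimmed `ansible -m setup` payload
# (the values here are stand-ins, not live output).
import json

raw = '''
{
    "ansible_facts": {
        "ansible_architecture": "x86_64",
        "ansible_virtualization_type": "kvm"
    },
    "changed": false
}
'''
facts = json.loads(raw)["ansible_facts"]
print(facts["ansible_architecture"])  # x86_64
```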

1.5. Install a Package

  1. Install an Apache server on the web server (servera):
    # ansible web -b -m yum -a "name=httpd state=present"
    servera.example.com | SUCCESS => {
        "changed": true,
        "msg": "",
        "rc": 0,
        "results": [
            "Loaded plugins: search-disabled-repos\nResolving Dependencies\n--> Running transaction check\n---> Package httpd.x86_64 0:2.4.6-45.el7 will be installed\n--> Processing Dependency: httpd-tools = 2.4.6-45.el7 for package: httpd-2.4.6-45.el7.x86_64\n--> Processing Dependency: /etc/mime.types for package: httpd-2.4.6-45.el7.x86_64\n--> Processing Dependency: libaprutil-1.so.0()(64bit) for package: httpd-2.4.6-45.el7.x86_64\n--> Processing Dependency: libapr-1.so.0()(64bit) for package: httpd-2.4.6-45.el7.x86_64\n--> Running transaction check\n---> Package apr.x86_64 0:1.4.8-3.el7 will be installed\n---> Package apr-util.x86_64 0:1.5.2-6.el7 will be installed\n---> Package httpd-tools.x86_64 0:2.4.6-45.el7 will be installed\n---> Package mailcap.noarch 0:2.1.41-2.el7 will be installed\n--> Finished Dependency Resolution\n\nDependencies Resolved\n\n================================================================================\n Package         Arch       Version           Repository                   Size\n================================================================================\nInstalling:\n httpd           x86_64     2.4.6-45.el7      rhelosp-rhel-7.3-server     1.2 M\nInstalling for dependencies:\n apr             x86_64     1.4.8-3.el7       rhelosp-rhel-7.3-server     103 k\n apr-util        x86_64     1.5.2-6.el7       rhelosp-rhel-7.3-server      92 k\n httpd-tools     x86_64     2.4.6-45.el7      rhelosp-rhel-7.3-server      84 k\n mailcap         noarch     2.1.41-2.el7      rhelosp-rhel-7.3-server      31 k\n\nTransaction Summary\n================================================================================\nInstall  1 Package (+4 Dependent packages)\n\nTotal download size: 1.5 M\nInstalled size: 4.3 M\nDownloading packages:\n--------------------------------------------------------------------------------\nTotal                                              1.4 MB/s | 1.5 MB  00:01     \nRunning transaction check\nRunning transaction 
test\nTransaction test succeeded\nRunning transaction\n  Installing : apr-1.4.8-3.el7.x86_64                                       1/5 \n  Installing : apr-util-1.5.2-6.el7.x86_64                                  2/5 \n  Installing : httpd-tools-2.4.6-45.el7.x86_64                              3/5 \n  Installing : mailcap-2.1.41-2.el7.noarch                                  4/5 \n  Installing : httpd-2.4.6-45.el7.x86_64                                    5/5 \n  Verifying  : httpd-tools-2.4.6-45.el7.x86_64                              1/5 \n  Verifying  : apr-1.4.8-3.el7.x86_64                                       2/5 \n  Verifying  : mailcap-2.1.41-2.el7.noarch                                  3/5 \n  Verifying  : httpd-2.4.6-45.el7.x86_64                                    4/5 \n  Verifying  : apr-util-1.5.2-6.el7.x86_64                                  5/5 \n\nInstalled:\n  httpd.x86_64 0:2.4.6-45.el7                                                   \n\nDependency Installed:\n  apr.x86_64 0:1.4.8-3.el7                 apr-util.x86_64 0:1.5.2-6.el7       \n  httpd-tools.x86_64 0:2.4.6-45.el7        mailcap.noarch 0:2.1.41-2.el7       \n\nComplete!\n"
        ]
    }
    • The -b option escalates privileges on the managed host (become root), and the -m option selects the Ansible module to run.
  2. Manually confirm that the package was installed:
    # ssh servera.example.com "rpm -qa | grep httpd-[0-9]*"
    httpd-tools-2.4.6-45.el7.x86_64
    httpd-2.4.6-45.el7.x86_64

1.6. Start a Service

  1. Start the Apache server:
    # ansible web -b -m service -a "name=httpd state=started"
    servera.example.com | SUCCESS => {
        "changed": true,
        "name": "httpd",
        "state": "started",
        "status": {
            "ActiveEnterTimestampMonotonic": "0",
            "ActiveExitTimestampMonotonic": "0",
            "ActiveState": "inactive",
    [...omitted output...]
            "UnitFilePreset": "disabled",
            "UnitFileState": "disabled",
            "Wants": "system.slice",
            "WatchdogTimestampMonotonic": "0",
            "WatchdogUSec": "0"
        },
        "warnings": []
    }
  2. Manually check that the server was started:
    # ssh servera.example.com "systemctl status httpd"
    ● httpd.service - The Apache HTTP Server
       Loaded: loaded (/usr/lib/systemd/system/httpd.service; disabled; vendor preset: disabled)
       Active: active (running) since Mon 2017-02-27 17:20:16 EST; 6min ago
    [...omitted output...]

2. Use ansible-playbook Command Line

In this exercise, you use the Ansible ansible-playbook command line to do the following:

  • Structure a basic playbook
  • Install a web server
  • Start the web server

2.1. Create a Simple Playbook

  1. Create a site.yml file containing the playbook you reviewed in Lab 3:
    ---
    - hosts: web
      name: Install the web server and start it
      become: yes
      vars:
        httpd_packages:
          - httpd
          - mod_wsgi
        apache_test_message: This is a test message
        apache_max_keep_alive_requests: 115
    
      tasks:
        - name: Install the apache web server
          yum:
            name: "{{ item }}"
            state: present
          with_items: "{{ httpd_packages }}"
          notify: restart apache service
    
        - name: Generate apache's configuration file from jinja2 template
          template:
            src: templates/httpd.conf.j2
            dest: /etc/httpd/conf/httpd.conf
          notify: restart apache service
    
        - name: Generate a basic homepage from jinja2 template
          template:
            src: templates/index.html.j2
            dest: /var/www/html/index.html
    
        - name: Start the apache web server
          service:
            name: httpd
            state: started
            enabled: yes
    
      handlers:
        - name: restart apache service
          service:
            name: httpd
            state: restarted
            enabled: yes
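
Note how the first two tasks notify the restart apache service handler: a handler runs only if a notifying task reports "changed", and it runs once at the end of the play even if several tasks notified it. A conceptual Python model of that behavior (not Ansible source; names are taken from the playbook above):

```python
# Conceptual model of Ansible handler semantics: changed tasks queue a
# handler by name, and each queued handler runs exactly once per play.
notified = []

def run_task(name, changed, notify=None):
    if changed and notify and notify not in notified:
        notified.append(notify)   # queue the handler only once

run_task("Install the apache web server", changed=True,
         notify="restart apache service")
run_task("Generate apache's configuration file from jinja2 template",
         changed=True, notify="restart apache service")

# At the end of the play, the queued handlers run.
print(notified)  # ['restart apache service']
```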

2.2. Add Jinja2 Template Files

You can learn more about Jinja2 templates in the official Jinja2 documentation.

In this section, you add the Jinja2 template files needed for the configuration of the web server.

  1. Create a templates folder:
    $ mkdir templates
  2. Add the httpd.conf.j2 template to the folder:
    ServerRoot "/etc/httpd"
    Listen 80
    Include conf.modules.d/*.conf
    User apache
    Group apache
    ServerAdmin root@localhost
    <Directory />
        AllowOverride none
        Require all denied
    </Directory>
    DocumentRoot "/var/www/html"
    <Directory "/var/www">
        AllowOverride None
        Require all granted
    </Directory>
    <Directory "/var/www/html">
        Options Indexes FollowSymLinks
        AllowOverride None
        Require all granted
    </Directory>
    <IfModule dir_module>
        DirectoryIndex index.html
    </IfModule>
    <Files ".ht*">
        Require all denied
    </Files>
    ErrorLog "logs/error_log"
    MaxKeepAliveRequests {{ apache_max_keep_alive_requests }}
    LogLevel warn
    <IfModule log_config_module>
        LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
        LogFormat "%h %l %u %t \"%r\" %>s %b" common
        <IfModule logio_module>
          LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" %I %O" combinedio
        </IfModule>
        CustomLog "logs/access_log" combined
    </IfModule>
    <IfModule alias_module>
        ScriptAlias /cgi-bin/ "/var/www/cgi-bin/"
    </IfModule>
    <Directory "/var/www/cgi-bin">
        AllowOverride None
        Options None
        Require all granted
    </Directory>
    <IfModule mime_module>
        TypesConfig /etc/mime.types
        AddType application/x-compress .Z
        AddType application/x-gzip .gz .tgz
        AddType text/html .shtml
        AddOutputFilter INCLUDES .shtml
    </IfModule>
    AddDefaultCharset UTF-8
    <IfModule mime_magic_module>
        MIMEMagicFile conf/magic
    </IfModule>
    EnableSendfile on
    IncludeOptional conf.d/*.conf
  3. Add the index.html.j2 template to the folder:
    {{ apache_test_message }} {{ ansible_distribution }} {{ ansible_distribution_version }}  <br>
    Current Host: {{ ansible_hostname }} <br>
    Server list: <br>
    {% for host in groups['web'] %}
    {{ host }} <br>
    {% endfor %}
  4. Confirm that the folder structure looks like this:
    └── templates
        ├── httpd.conf.j2
        └── index.html.j2
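
The {% for %} loop in index.html.j2 emits one line per host in the web group. A plain-Python sketch of that expansion (not actual Jinja2 rendering; the variable values are stand-ins for the play's vars and Ansible facts):

```python
# Stand-in values for the template variables (hypothetical; the real values
# come from the play's vars section and from gathered facts).
apache_test_message = "This is a test message"
distro, version, hostname = "RedHat", "7.3", "servera"
groups = {"web": ["servera.example.com"]}

page = [f"{apache_test_message} {distro} {version}  <br>",
        f"Current Host: {hostname} <br>",
        "Server list: <br>"]
page += [f"{host} <br>" for host in groups["web"]]   # the {% for %} loop
print("\n".join(page))
```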

2.3. Run Playbook

  1. Run the Ansible playbook using the ansible-playbook command:
    # ansible-playbook site.yml
    
    PLAY [Install the web server and start it] *************************************
    
    TASK [setup] *******************************************************************
    ok: [servera.example.com]
    
    TASK [Install the apache web server] *******************************************
    changed: [servera.example.com] => (item=[u'httpd', u'mod_wsgi'])
    
    TASK [Generate apache's configuration file from jinja2 template] ***************
    changed: [servera.example.com]
    
    TASK [Generate a basic homepage from jinja2 template] **************************
    ok: [servera.example.com]
    
    TASK [Start the apache web server] *********************************************
    changed: [servera.example.com]
    
    RUNNING HANDLER [restart apache service] ***************************************
    changed: [servera.example.com]
    
    PLAY RECAP *********************************************************************
    servera.example.com        : ok=6    changed=4    unreachable=0    failed=0
  2. Confirm that the web server responds and serves the index.html file generated by Ansible:
    # curl servera.example.com
    This is a test message RedHat 7.3  <br>
    Current Host: servera <br>
    Server list: <br>
    servera.example.com <br>

Install Ansible

  1. Connect to the control node (workstation):
    # ssh your-sso-login@workstation-GUID.rhpds.opentlc.com
  2. Become the root user:
    # sudo -i
    • Attach the Extra Packages pool to make Ansible packages available:
      # subscription-manager attach --pool=`subscription-manager list --all --available --matches "*Extra Packages*" --pool-only`
  3. Install Ansible:
    # yum -y install ansible
  4. Check that Ansible is installed and usable:
    # ansible --version
    ansible 2.2.1.0
      config file = /etc/ansible/ansible.cfg
      configured module search path = Default w/o overrides