In our introductory article about Ansible, we provided a high-level overview of the Ansible architecture. In today’s article, we are going to install the Ansible Automation Engine on the control node, which is the server that runs the Ansible binaries. After that, we will configure the managed nodes, which are the hosts whose configuration will be changed based on the actions that we define through either the command line or playbooks. By the end of this article, we will be able to run our Ansible environment, understand how to manage inventory at a simple level (just adding a couple of nodes), and execute commands on several managed nodes from a central location.
Installing the Ansible Automation Engine on the control node
The first step is to install the Ansible Automation Engine in the control node. The process is simple, and we just need to execute the command below and wait for the completion.
sudo yum install ansible -y
The installation process will create a folder structure under /etc/ansible (Item 1), and we will have two essential files: ansible.cfg and hosts. The first is the Ansible configuration file, which contains parameters that control the execution of playbooks. The hosts file is the Inventory component that we discussed briefly in the previous article and will cover in more detail here.
In the roles folder, we can create roles: reusable sets of playbooks, files, handlers, and variables that can be shared using a simple file structure. Every subfolder within the roles folder is a separate role with its own folder structure.
We can also check the current Ansible version by running the following command:

ansible --version
Creating the Ansible account on control node and managed nodes

These steps must be performed on the control node and on all future managed nodes. We are going to create a dedicated account to be used by Ansible, using the following commands:

sudo useradd svc.ansible
sudo passwd svc.ansible
The next step is to edit the sudoers file, which allows regular users to perform some actions that are normally restricted to privileged users, and audits those actions. The best way to manage the file is with visudo, which validates the syntax and prevents multiple users from editing the file at the same time.
Run sudo visudo, go to the line root ALL=(ALL) ALL, and add a line below it to reflect our Ansible user (Item 1). To save your changes, hit Escape and then type :wq
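As a sketch of what that new entry looks like (assuming the svc.ansible account created earlier), the relevant lines in the sudoers file would be:

```shell
# /etc/sudoers (always edit via visudo) -- grant svc.ansible full sudo rights
root        ALL=(ALL)       ALL
svc.ansible ALL=(ALL)       ALL
```

If you want unattended automation, a common (if more permissive) variant is svc.ansible ALL=(ALL) NOPASSWD: ALL, which skips the password prompt when the account runs sudo.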
Note: As a shortcut in the vi editor, you can type yy to copy the entire line and then p to paste it; the result is a duplicate of the line. We wrote an article here at TechGenix to help speed up your use of vi, so check it out.
Configuring SSH in the control node
In this section, we are going to create a pair of keys that will be used by the control node to authenticate to the Linux servers to execute the playbooks/commands required by the Ansible administrator.
The first step is to validate the current user, which we do by executing whoami (Item 1). Next, we log on with the Ansible account by typing su - svc.ansible (Item 2) and entering its credentials (Item 3).
The next step is to create a pair of private and public keys to support our SSH connections from the control node to the managed nodes. By default, both keys will be placed in the /home/svc.ansible/.ssh folder (Item 5).
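A minimal sketch of the key generation, run as the svc.ansible user. The 4096-bit RSA key size and the empty passphrase are assumptions made here so the automation can run unattended; your screenshots or security policy may call for different choices:

```shell
# Generate an RSA key pair for SSH authentication.
# -N "" sets an empty passphrase (assumption: unattended automation);
# -f writes to the default location in the user's home directory.
ssh-keygen -t rsa -b 4096 -N "" -f "$HOME/.ssh/id_rsa"
```

ssh-keygen writes the private key to id_rsa and the public key to id_rsa.pub in the same folder; the public key is the one we will copy to the managed nodes in the next step.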
The next step is copying the public key generated on the control node to the managed nodes (including localhost). However, we must make sure that we can resolve the names of our servers. We have all three servers running in Azure, and we will be using /etc/hosts in the control node to add the names and IPs of our VMs.
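For illustration only (the hostnames and private IP addresses below are hypothetical placeholders, not values from this environment), the entries added to /etc/hosts on the control node might look like this:

```shell
# /etc/hosts on the control node (hypothetical names and addresses)
10.0.0.4    control-node
10.0.0.5    managednode1
10.0.0.6    managednode2
```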
The final step is to run ssh-copy-id <ManagedNode> (Item 1), confirm when prompted (Item 2), and type in the password (Item 3). That is enough to allow secure communication from the control node to the managed node without a password prompt, because the key pair handles the authentication.
We are going to cover the inventory component in detail in our next article. For now, we just need to add the servers that we have configured so far to the /etc/ansible/hosts file, as depicted in the image below.
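As a sketch, using hypothetical hostnames, the relevant portion of /etc/ansible/hosts could look like this. Hosts listed outside any group, or inside a bracketed group name, become targetable by ad-hoc commands and playbooks:

```shell
# /etc/ansible/hosts -- a simple INI-style inventory
# (hostnames are illustrative placeholders)
[linux]
control-node
managednode1
managednode2
```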
Using Ansible ad-hoc command
There are a few ways to interact with and execute commands and operations on the managed nodes using Ansible. In this article, we will use the ad-hoc approach, which allows the Ansible administrator to run commands from the command line against one or several managed nodes.
The ad-hoc command uses the Ansible binary file. The syntax is consistent, and we will be using the following format when issuing commands, as depicted in the image below.
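In general terms, the format boils down to the template below (a sketch with placeholders, not a runnable command): the host pattern is an inventory group or host name, -m names the module, and -a passes the module's arguments.

```shell
# General shape of an Ansible ad-hoc command
ansible <host-pattern> -m <module-name> -a "<module arguments>"
```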
The beauty of ad-hoc in Ansible is that we can execute a simple task on a group of servers from a single line, which saves tons of time at the end of the day.
Let’s use a simple example. We want to check all servers in our inventory and validate the content of the /etc/hosts file. Without Ansible, we have to log on to each server and check the content of the desired file, or connect remotely to the servers and then retrieve the content.
We will execute three different tasks in three lines. We will print the hostname of all servers, check the content of the /etc/hosts file, and last but not least, we will ping those servers to check if they are up and running.
ansible all -m shell -a "hostname"
ansible all -m shell -a "cat /etc/hosts"
ansible all -m ping
Up next: Adding more modules
In this article about Ansible, we covered the installation of the Ansible Automation Engine on a Linux server, created an Ansible service account, and enabled communication between the control node and the managed nodes.
The last piece was how to start using ad-hoc commands on top of the infrastructure we have deployed so far, checking files and executing basic commands from the command line. We touched on two modules that are key when using Ansible: the ping and shell modules.
As we move forward with Ansible, we will add more modules and explore Ansible playbooks to step up our game when managing multiple nodes and creating recipes that deploy predefined configurations to our managed nodes.
Featured image: Shutterstock / TechGenix photo illustration