Ansible core concepts
We will not rewrite the Ansible docs here, as they are pretty well written and useful as they are. However, there's a lot to take in, so we cover the key points on this page and how they apply to `ce-deploy`.
Ansible needs a few critical things to function:
- `ansible.cfg` - its main configuration file, which tells it:
  - paths to roles
  - paths to plugins
  - paths to modules
  - paths to inventory files
  - plugins to enable
  - other key behaviours (critically, in our case, to merge duplicate variables rather than the Ansible default of replacing them - this is how our variable inheritance works; see the sketch after this list)
- inventory information - what computers it should be acting on, where they are and what their defaults are
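To make the variable merging behaviour concrete, here is a minimal `ansible.cfg` sketch. The paths are illustrative rather than the shipped defaults, but `hash_behaviour` is the real Ansible setting that switches dictionary variables from replace to merge:

```ini
[defaults]
# Where Ansible should look for roles and inventory -
# illustrative paths, not the values shipped with ce-deploy.
roles_path = ./roles:./galaxy/roles
inventory  = ./hosts

# Merge dictionary variables defined at multiple levels instead of
# replacing them wholesale - this underpins our variable inheritance.
hash_behaviour = merge
```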
By default Ansible will look through a number of paths for configuration information (more info here on the Ansible website), but effectively we make use of the fact that it looks in the current directory before it starts hunting through the system. So we place our config file, inventory and so on in the directory we will execute Ansible from: the install directory of the product.
`ce-deploy`, if installed via `ce-provision`, ships with a sample `ansible.cfg` file. It gets copied into place by the `ce_deploy` role in `ce-provision`. However, as you can imagine, every set of infrastructures you deploy to will have different requirements, inventory and settings. Indeed, you may have noticed in the `ce_deploy` role of `ce-provision` this `when` clause under the code that copies the `ansible.cfg` files into place:

```yaml
when: not ce_deploy_has_config_repo
```
So that only happens when there is no "config repo" provided. But what does that mean?
If you look at the installation role in `ce-provision` you will spot this `ce_deploy` role behaviour. It creates a set of symbolic links from `{{ ce_deploy.local_dir }}/config` to `{{ ce_deploy.local_dir }}` (the variable containing the install directory). Effectively, `ce-provision` expects us to provide a config repository containing our unique Ansible settings, inventory and anything else special; it expects the contents of that repo to be consistent and for them to be placed in the `config` directory of wherever we're installing whichever product. Indeed, this is where it fetches the config repo.
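As a rough sketch of what that linking amounts to in Ansible terms - this is an illustrative task, not the actual code from the `ce_deploy` role:

```yaml
# Hypothetical sketch: link well-known files from the config repo checkout
# into the install directory so Ansible picks them up automatically.
- name: Link config repo files into the install directory.
  ansible.builtin.file:
    src: "{{ ce_deploy.local_dir }}/config/{{ item }}"
    dest: "{{ ce_deploy.local_dir }}/{{ item }}"
    state: link
  with_items:
    - ansible.cfg
    - hosts
```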
So to work through specific examples using defaults, and using hostnames from our internal "dummy" infrastructure we test on:

- it is installed to `/home/deploy/ce-deploy` on a deploy server, in this case `dummy-deploy1`, and we will execute Ansible from there - consequently the config repo, as specified, will get checked out to `/home/deploy/ce-deploy/config`
- the role will link `/home/deploy/ce-deploy/config/ansible.cfg` to `/home/deploy/ce-deploy/ansible.cfg`
- it will also link the `/home/deploy/ce-deploy/config/hosts` directory with our specific inventory information to `/home/deploy/ce-deploy/hosts`
- so when we execute Ansible with `ce-deploy` from the `/home/deploy/ce-deploy` directory Ansible will automatically use the `ansible.cfg` file it finds there, which will automatically point it at the correct inventory location and other settings
So that's what we mean by a "config repo" - a Git repository with a pre-determined structure that `ce-provision` can check out into the installation directory, so that configuration gets used when Ansible is invoked from that installation directory.
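To illustrate that pre-determined structure, a config repo might look something like this - `ansible.cfg` and the `hosts` directory are the pieces the role links into place, while the other entries are examples only:

```
config/
├── ansible.cfg
├── hosts/
│   ├── hosts.yml
│   └── group_vars/
└── galaxy-requirements.yml
```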
For completeness, the CE GitHub account contains some example config repositories as a quick start for new projects, including one for `ce-deploy`.
Galaxy

The simplest way to describe Galaxy is as a package manager for Ansible. You can use it to install third party roles, modules and plugins. Like some other package managers, who you run the installation as and where you run it dictates where any installed Galaxy roles go. The possibilities are:

- the local `roles` directory (e.g. if you run `ansible-galaxy` in the `ce-deploy` directory it will use `./roles`)
- the local user's `.ansible` directory (e.g. you have no Ansible context when you run the command, but if you're logged in as the `deploy` user then things will be installed in the `/home/deploy/.ansible` directory)
- the system Ansible directory, `/etc/ansible` (e.g. you run the command as `root` or with `sudo`)
For our purposes, and because we do not want to commit Galaxy code to our repositories, we always go for the second of the above options, running as the `deploy` user but outside of the `ce-deploy` directory. We want our chosen Galaxy roles and collections installed to `/home/deploy/.ansible` - this keeps elegant separation between stuff we download and stuff we wrote, which will be in `/home/deploy/ce-deploy`. If you manually install Galaxy roles somewhere to test, please do so as the `deploy` user and outside the `ce-deploy` directory, or from your Python venv if applicable.
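For example, a manual test install might look like this - the role name is purely illustrative:

```shell
# As the deploy user, from the home directory so we are outside ce-deploy
# and the role lands in /home/deploy/.ansible/roles.
cd ~
ansible-galaxy install geerlingguy.mysql
```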
`ce-provision` can handle installing Galaxy roles in `ce-deploy` for you in a few ways. Firstly, you can use Ansible's default `meta` behaviour for loading dependencies. This is what we do in our `ce_deploy` role; if you want to add a Galaxy role that will be useful to `ce-deploy` as a product, just add it to the requirements files, such as this one for Debian 11 (Bullseye):
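That file lives in the `ce-deploy` repository; as a hedged sketch of the standard `ansible-galaxy` requirements format it follows - the entries below are examples, not the actual shipped list:

```yaml
# Illustrative requirements file - install with:
#   ansible-galaxy install -r requirements.yml
roles:
  - name: geerlingguy.mysql
collections:
  - name: community.general
```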
If you're adding some Galaxy code that only you need, which will not form part of the central project, `ce-provision` supports loading in a requirements file that is kept elsewhere. You can see the task that does that here. So if you keep a requirements file in your own config repo, you can reference the path to that file to install it.
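For instance, a requirements file kept in your config repo ends up on the deploy server alongside the rest of your config, so it can be referenced by path - the variable and file names below are hypothetical, so check the `ce-provision` role for the real ones:

```yaml
# Hypothetical: point ce-provision at a requirements file in your config repo.
ce_deploy:
  galaxy_custom_requirements_file: "/home/deploy/ce-deploy/config/galaxy-requirements.yml"
```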
Inventory

Normally inventory is manually created; it can be stored in a file in the `hosts` directory and can be in either YAML or JSON format. We have an example in our example config repo.
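As a minimal illustration of a YAML inventory file - the hostnames and group name here are invented for the example:

```yaml
# hosts/hosts.yml - a hypothetical static inventory.
all:
  children:
    deploy_web:
      hosts:
        dummy-web1.example.com:
        dummy-web2.example.com:
```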
You can set default variables for inventory and inventory groups. You'll notice in that directory in our example config repo there is a `group_vars` subdirectory.
There is more on organising group variables here. As you can see from the docs, it is also possible to use `host_vars` to specify variables for a specific machine.
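By way of example, a group variables file for the hypothetical `deploy_web` group in the inventory sketch above might look like this:

```yaml
# hosts/group_vars/deploy_web.yml - hypothetical defaults applied to every
# host in the deploy_web group.
ansible_user: deploy
```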
You can also use inventory plugins. Our most commonly used one is the AWS plugin. Find out more about automated AWS inventory.
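A dynamic inventory source for the AWS plugin is just another YAML file in the inventory directory. As a hedged sketch of the `amazon.aws.aws_ec2` plugin's standard format - the region and grouping are illustrative:

```yaml
# hosts/aws_ec2.yml - the file name must end in aws_ec2.yml (or .yaml)
# for the plugin to recognise it.
plugin: amazon.aws.aws_ec2
regions:
  - eu-west-1
keyed_groups:
  # Build inventory groups from each instance's Name tag.
  - key: tags.Name
```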