---
layout: post
title: Lessons Learned From Ansible
date: 2017-01-31 18:00:00.000000000 -05:00
type: post
published: true
status: publish
categories:
- Ansible
- Lessons Learned
tags: [Ansible, lessons]
author: warren5236
---

Ansible Logo

I’m slowly converting all my servers to use Ansible for setup and deployment. I wanted to share some of the things I’ve learned over the past couple months as I’ve worked my way through this process.

Why Use Ansible?

Most projects start with a single server. You install your packages and you move on. As your site grows and you need more servers (more front end servers, dedicated development machines, dedicated testing and staging servers) you need to have a more uniform way of setting them up so you don’t suffer from the “works on my machine” problem. Ansible allows you to specify what should be installed in what order to make sure you have environments that match.

I’m a strong believer that everyone should be using a VM to do their development work. This allows you to make changes to your local environment without affecting anyone else who’s using the system. Vagrant has been a HUGE help in this regard because you don’t need to spell out the fifty-plus steps necessary to get a development VM up and running.

We’ve been using PuPHPet with Vagrant for a while now to create our development VMs. I liked the process so much that I’ve written some posts on how to extend it for purposes outside the normal scope of PuPHPet. The PuPHPet of today is nothing like the PuPHPet I started with. Everything is now controlled by a YAML file, and it’s much more difficult to make changes (that being said, I would still suggest it for most use cases). We got to the point where we had to develop custom scripts to set up things outside the scope of PuPHPet, and because of that we had to look into other solutions.

Through the years, I’ve listened to FLOSS Weekly podcasts discussing Ansible, Chef, and Puppet, so I had the kernel of most of these ideas in my head. I spent a couple weeks playing with each one, and I liked how Ansible was structured and how it didn’t need a central server to function, so I started using it, first for our development servers and then for our production servers.


Roles Are Requirements

Ansible allows you to break playbooks up into reusable chunks called roles. When I first started writing my Ansible playbooks, I used the term the way I would expect a role to be used: I would say this server gets the web server role, then create a full list of everything it needed to do and have the role install all of it. The playbooks ended up containing a lot of duplication, so I knew I was doing something wrong.

Then I realized roles are more nuanced than that and can actually be treated like requirements. One of my projects requires a server that hosts the same web site under three different domain names (sales, training, and test). By creating a role for the site and telling it what it needs to function (a virtual host, a MySQL database, several worker queues, etc.) it’s very easy to add them all to the playbook and have it create everything that’s needed:

 - { role: website, env: production, path: /production, domain: }
 - { role: website, env: test, path: /test, domain: }

Then, if the site needs something else added, a quick change to the role makes the change for every site.

The other plus to this is that we can use the same role definition to create our development VM.
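Concretely, the website role can declare its requirements in its own meta/main.yml. This is a hypothetical sketch — the actual dependent role names and variables weren’t shown in the post, so everything below the dependencies key is illustrative:

```yaml
# roles/website/meta/main.yml -- hypothetical sketch; the dependent
# role names (virtualhost, mysql_database, worker_queues) are
# illustrative, not from the original post
dependencies:
  - { role: virtualhost, domain: "{{domain}}", path: "{{path}}" }
  - { role: mysql_database, name: "site_{{env}}" }
  - { role: worker_queues, env: "{{env}}" }
```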


Put the Firewall First

After spending a large amount of time testing my Ansible setup on a local VM, I was ready to deploy it to a cloud server. The playbook ran to about 50% completion and then hit an error. I fixed the problem, but the server stopped responding to any requests. After a lot of troubleshooting, I found that the playbook had enabled the firewall, but the rule allowing my computer to administer the server came after the point of the error. To fix this I moved all the firewall rules into their own role, which became the very first thing that’s run.
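A firewall-first role might look something like the sketch below. The admin_ip variable and the exact rules are assumptions for illustration — the point is simply that the allow rules run before the firewall is enabled:

```yaml
# roles/firewall/tasks/main.yml -- a minimal sketch; admin_ip and the
# rule set are assumptions, not from the original post
- name: Allow SSH from the admin machine before anything else
  ufw: rule=allow port=22 src={{admin_ip}}

- name: Enable the firewall only after the allow rules exist
  ufw: state=enabled policy=deny
```

Listing this role first in the playbook means a failure anywhere later can no longer lock you out.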


Use Dependencies

Learning how to use dependencies saved me a lot of work. We’re using monit to manage some long-running background jobs. Each site gets its own set (as they have different working folders and configuration files). My original solution was to add the background jobs to the playbook directly, but it got complicated:

 - { role: website, env: production, path: /production, servername: }
 - { role: backgroundJob, env: production, script: s1.php, path: /production }
 - { role: backgroundJob, env: production, script: s2.php, path: /production }
 - { role: backgroundJob, env: production, script: s3.php, path: /production }
 - { role: backgroundJob, env: production, script: s4.php, path: /production }
 - { role: backgroundJob, env: production, script: s5.php, path: /production }
 - { role: backgroundJob, env: production, script: s6.php, path: /production }
 - { role: website, env: test, path: /test, servername: }
 - { role: backgroundJob, env: test, script: s1.php, path: /test }
 - { role: backgroundJob, env: test, script: s2.php, path: /test }
 - { role: backgroundJob, env: test, script: s3.php, path: /test }
 - { role: backgroundJob, env: test, script: s4.php, path: /test }
 - { role: backgroundJob, env: test, script: s5.php, path: /test }
 - { role: backgroundJob, env: test, script: s6.php, path: /test }

This presented two problems. The first is that the above is extremely hard for a human to parse, and it would be very easy to have a typo in that wall of text. The second is that every time we added a new script that needed to run in the background, it had to be added in multiple locations.

To solve this problem I created a role that wraps up all these items. I started by creating a new role called frontendserver with a folder named meta containing a main.yml that looked like the following:

 dependencies:
   - { role: website }
   - { role: backgroundJob, script: s1.php }
   - { role: backgroundJob, script: s2.php }
   - { role: backgroundJob, script: s3.php }
   - { role: backgroundJob, script: s4.php }
   - { role: backgroundJob, script: s5.php }
   - { role: backgroundJob, script: s6.php }

Then in my playbook I can simplify the entries down to the following:

 - { role: frontendserver, env: production, path: /production, servername: }
 - { role: frontendserver, env: test, path: /test, servername: }

If you use multiple copies of the same role as dependencies like this, you need to edit the meta/main.yml file for the dependent role (backgroundJob here) and add the following:

allow_duplicates: yes
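Putting it together, the dependent role’s meta file would look something like this sketch (the empty dependency list is an assumption — the role may well have its own dependencies):

```yaml
# roles/backgroundJob/meta/main.yml -- sketch; the empty dependencies
# list is an assumption
allow_duplicates: yes
dependencies: []
```

Without allow_duplicates, Ansible deduplicates repeated role invocations and only the first backgroundJob entry would actually run.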

Learn to Love with_items

When I started using Ansible (at the very beginning) I installed each package as its own task:

- name: Install apache2
  apt: pkg=apache2 state=latest
- name: Install zip
  apt: pkg=zip state=latest
- name: Install build-essential
  apt: pkg=build-essential state=latest

This does work, and it’s easier to see where you are in the process (running them all together can cause a delay if you’re installing multiple large packages), but it’s harder to maintain in the long run (and even in the short run).

Ansible allows you to use the with_items property to group them into a single task.

- name: Install Packages
  apt: pkg={{item}} state=latest
  with_items:
    - apache2
    - zip
    - build-essential

Then adding additional items just involves adding a new line.

- name: Install Packages
  apt: pkg={{item}} state=latest
  with_items:
    - apache2
    - zip
    - build-essential
    - git

Another helpful feature is that you can specify groups from your inventory to use. For example, all the webservers should have access to the MySQL servers.

- name: Allow Webservers Access
  ufw: rule=allow port=3306 src={{item}}
  with_items: "{{groups['webservers']}}"

j2 files

Ansible provides the ability to use templates so you can replace specific parts of a file with variables.

For example, this is a very slimmed down version of a config file for an Apache site:

<VirtualHost *:80>
    ServerAdmin webmaster
    DocumentRoot {{path}}
    ServerName {{servername}}

    ErrorLog  /var/log/apache2/{{servername}}-error_log
    CustomLog /var/log/apache2/{{servername}}-access_log common
</VirtualHost>

In the tasks file for the role we’ll tell Ansible where to put the file and it will automatically create the file with the correct values by replacing the variables (denoted by the {{ and }}) with values we pass from other places.

- name: Add Site Config
  action: template src=site.cnf.j2 dest=/etc/apache2/sites-available/{{servername}}.conf

Setup vs Deploy Script

For a while we were using Ansible to both set up the server and deploy the site (we have since changed to Deployer because we found it easier to use). If you do this, it’s important to separate the items you need to deploy the site into their own playbook so it’s faster. The process kept taking longer and longer the more items we had, and that made it annoying to wait before we could tell people bugs were fixed.
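A deploy-only playbook can stay tiny because it skips all the setup work. A minimal sketch — the repo_url variable, paths, and the monit restart are illustrative assumptions, not our actual deploy steps:

```yaml
# deploy.yml -- only the steps needed to ship new code; repo_url,
# path, and the monit restart are illustrative assumptions
- hosts: webservers
  tasks:
    - name: Pull the latest release
      git: repo={{repo_url}} dest={{path}} version=master

    - name: Restart the background workers
      service: name=monit state=restarted
```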

It Can be Slow

I’m very happy with Ansible, but as the number of tasks increases, so does the time it takes to run them. I understand the logic of it all, but it does get annoying when you’re testing a change. I’ve gotten to the point where I have a specific rule inside my playbook so I can test just one role at a time, but it still takes time.
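One way to re-run a single role in isolation is to tag the playbook entries (the tag name here is an assumption for illustration):

```yaml
# site.yml -- tag an entry so it can be targeted on its own
 - { role: website, env: test, path: /test, tags: ['website'] }
```

Then running ansible-playbook site.yml --tags website executes only the tagged tasks instead of the whole playbook.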

In Closing

This is just the list of interesting things I’ve found so far. I have plans to write up how to create your own Ansible setup in the near future (it’s in the planning phase now) so follow us on Twitter or Facebook to get an update when we start publishing them.