5 Reasons We Love Using Ansible for Continuous Delivery

We found the classic solutions, Chef and Puppet, to be unsatisfactory for our needs. While these tools are excellent for infrastructure deployments, they are too complex and not flexible enough to support our application deployments. Then we came across Ansible, and it was love at first sight. Here are five reasons why we will never look back:
1. Simplicity
To start using Ansible, all we needed was a laptop with Python installed and a playground server running nothing more than SSHD. Ansible is agentless, which means there’s no need to install anything on your servers in order to use it. Setting up Ansible is, therefore, a matter of minutes.
Armed with only a laptop and Ansible’s online documentation, we wrote our first playbook. A playbook, for those unfamiliar with Ansible, is basically a list of tasks that should run on one or more servers. It’s equivalent to a “Recipe” in Chef and a “Manifest” in Puppet.
Playbook semantics are so intuitive and easy to write that we no longer bother connecting to servers and running commands manually through the shell. In addition, Ansible is rich with built-in modules that help you do anything from trivial tasks such as installing a package to more complex tasks such as auto-provisioning Amazon EC2 instances. No matter the complexity of the tasks, it all boils down to running an Ansible module, which makes playbooks easy to write and easy to review later.
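For anyone who hasn’t written one yet, here is a minimal sketch of the kind of playbook we mean. The host group and package names are just placeholders:

```yaml
# site.yml -- a minimal playbook sketch (host group and package names are placeholders)
---
- hosts: webservers
  become: yes
  tasks:
    - name: Install nginx from the distribution repositories
      apt:
        name: nginx
        state: present

    - name: Make sure nginx is running and starts on boot
      service:
        name: nginx
        state: started
        enabled: yes

# Run it against the servers listed in your inventory file with:
#   ansible-playbook -i hosts site.yml
```

Each task simply names a built-in module (apt, service) and the state you want; Ansible takes care of the SSH connections and the execution for you.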
You know that stupid smile one gets when things that were supposed to be complicated just work right out of the box? That’s the expression you would have found on our DevOps guy’s face as he first experienced Ansible. DevOps bliss.
2. Declarative
Ansible’s philosophy is that playbooks (whether for server provisioning, server orchestration or application deployment) should be declarative. This means that writing a playbook does not require any knowledge of the current state of the server, only its desired state.
The principle that enables Ansible to be declarative and yet reliable is idempotence, a concept borrowed from mathematics. An idempotent operation is one that can be applied multiple times without changing the result beyond the initial application, such as multiplication by zero. Ansible modules are idempotent. For example, if one of the tasks is to create a directory on the server, then the directory will be created if and only if it does not already exist. This sounds pretty simple to implement in the case of filesystem operations, but it’s not as trivial when considering more complex modules. Nonetheless, Ansible guarantees idempotence in all of its core modules.
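To make that concrete, here is what such a directory task might look like; the path and ownership details are made up, but the behavior is the interesting part:

```yaml
# Idempotence sketch: the path and owner below are just examples.
- name: Ensure the application log directory exists
  file:
    path: /var/log/myapp
    owner: myapp
    group: myapp
    mode: '0755'
    state: directory
# First run:       changed=1  (the directory is created)
# Every run after: changed=0  (the desired state already holds, so nothing is touched)
```

Because every task is written this way, you can safely re-run an entire playbook at any time, and only the things that have actually drifted from the desired state will change.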
3. Reusable
One of the requirements we set for our deployment automation project was that any script we wrote could be used in two scenarios: when deploying a new version of an application and when provisioning new servers that should contain that application. One of the ways to achieve that with Ansible is by using tags. Tags allow you to annotate tasks or plays. When running Ansible, users can decide which categories of tasks will be executed by specifying which tags should be run or skipped.
For example, when deploying an application on a new server, several infrastructure requirements, such as MongoDB and Node.js, must first be installed. Only then can the application software stack be deployed. On the other hand, when updating this application, only the actual code needs to be deployed, as the infrastructure is already in place. In our playbooks, tasks that should run during server provisioning are tagged ‘infra’ and are skipped during application deployments.
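Here is a sketch of how that is structured; the package names, repository URL and variables are illustrative, not our actual playbook:

```yaml
# deploy.yml -- sketch of separating infra tasks from application deployment tasks with tags
---
- hosts: app_servers
  become: yes
  tasks:
    - name: Install MongoDB
      apt:
        name: mongodb-server
        state: present
      tags: infra

    - name: Install Node.js
      apt:
        name: nodejs
        state: present
      tags: infra

    - name: Deploy the application code
      git:
        repo: git@example.com:bigpanda/myapp.git   # illustrative repository URL
        dest: /opt/myapp
        version: "{{ app_version }}"

# Provisioning a brand new server (runs everything, including the 'infra' tasks):
#   ansible-playbook deploy.yml
# Deploying a new version to servers that are already provisioned:
#   ansible-playbook deploy.yml --skip-tags infra
```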
4. Extensible
Another requirement we set was to be able to track and review any application deployments that have run in the past or are currently running. At BigPanda, we strongly believe that the ability to trace and review infrastructure and code changes in production is a major tool for troubleshooting issues.
Correlating alerts from our monitoring systems with application deployments has significantly reduced our MTTR (mean time to resolution). This is one of the main reasons we have an API for application deployments in our product. The question, then, was not which tool to use to track deployments, but how to make Ansible send notifications about deployments to BigPanda’s API in the easiest way possible.
Fortunately, Ansible can easily be extended with custom modules. A module is simply a piece of code that reads JSON from stdin, does whatever you want it to do and writes the result to stdout, again as JSON. It’s as simple as that. Moreover, if Python is your language of choice (Ansible is polyglot when it comes to modules), you can use Ansible’s helper functions for most common tasks and concentrate only on your module’s core logic.
Writing the first release of the BigPanda module took us less than a day. We already use it in our deployment playbooks, and we hope that it will soon be incorporated into Ansible’s core modules (vote for us!).
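For the curious, this is roughly how the module is used from a playbook. The parameter names below are illustrative and may differ from the module’s final interface, so treat this as a sketch rather than documentation:

```yaml
# Sketch of notifying BigPanda about a deployment from within a playbook.
# Parameter names are illustrative -- check the module's documentation for the exact interface.
- name: Tell BigPanda the deployment has started
  bigpanda:
    component: myapp
    version: "{{ app_version }}"
    token: "{{ bigpanda_api_token }}"
    state: started

# ... the actual deployment tasks go here ...

- name: Tell BigPanda the deployment has finished
  bigpanda:
    component: myapp
    version: "{{ app_version }}"
    token: "{{ bigpanda_api_token }}"
    state: finished
```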
5. Community
An important consideration when adopting an open source tool is its community. At first, we were concerned about the size of Ansible’s community and the project’s overall maturity. Thankfully, we soon discovered that Ansible is a vibrant and reliable open source project, with more than 700 contributors and 10,000 commits.
Recently, AnsibleWorks, the company behind Ansible, launched Ansible Galaxy, a repository of ready-made roles (I won’t go into detail about roles here, but they are yet another powerful feature) shared by the community. Anyone can use and review these roles. Ansible Galaxy now contains over 450 roles, shared and used by more than 3,300 members.
Recap
We’re hooked on Ansible; we won’t deny it. But the fact is that by using Ansible, we were able to build a full-blown automation infrastructure, covering everything from ad hoc scripts (Heartbleed was patched by an Ansible playbook) to EC2 instance provisioning, within a couple of months.