Ansible – digging into details


There’s a ton of buzz around Ansible. It has been the fastest-growing configuration management tool for the last couple of years, and it’s not going away anytime soon. There are lots of things to like about it and a few things not to like. This will be a bit of a continuation of an earlier post.


Why the huge growth in Ansible

There’s been a ton of growth because the barrier to entry is the lowest. I’m not talking down about it by saying this; it’s just a simple fact. Ansible is by far the easiest configuration management tool to start using. There’s no agent to install and no master to set up; only SSH and Python are needed. Not only that, but writing playbooks is also probably the simplest way to configure servers. You can get up and going very easily, not just in a “hello world” way, but in a getting-productive-work-done-quickly way. It’s also terrific for smaller environments: it’s fast enough, and the codebase won’t get too carried away if you’re only managing tens to low hundreds of servers.
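To make the low barrier concrete, here’s a minimal sketch of the kind of playbook that gets real work done; the “webservers” group and the nginx package are illustrative, not from anything above:

    ---
    # site.yml: install and start nginx on a group of web servers.
    - hosts: webservers
      become: true
      tasks:
        - name: Install nginx
          ansible.builtin.package:
            name: nginx
            state: present

        - name: Ensure nginx is running and enabled at boot
          ansible.builtin.service:
            name: nginx
            state: started
            enabled: true

Run it with ansible-playbook -i inventory site.yml; nothing is needed on the targets beyond SSH access and Python.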

Where I use it

I’m a huge fan of using Ansible when I can run it programmatically. I run it either through its Python APIs or from a build server (Jenkins). I won’t use anything else to provision servers; it’s the first provisioner I reach for when I’m using Packer. Make sure to create logs or other artifacts around what Ansible is doing in these setups; doing so will help with any security audits in your future.
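My setup is Jenkins, but the same pattern sketches easily in any YAML-based pipeline; here it is as a hypothetical GitHub Actions job, using Ansible’s ANSIBLE_LOG_PATH setting to capture a full run log and keep it as an audit artifact:

    # provision.yml: run a playbook from a build server and retain the log.
    name: provision
    on: workflow_dispatch
    jobs:
      ansible:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          - name: Run the playbook with a persistent log
            env:
              ANSIBLE_LOG_PATH: ansible-run.log   # Ansible writes a full log here
            run: ansible-playbook -i inventory/production site.yml
          - name: Keep the log as an audit artifact
            if: always()
            uses: actions/upload-artifact@v4
            with:
              name: ansible-log
              path: ansible-run.log

The playbook and inventory paths are placeholders; the point is that every run leaves an artifact an auditor can trace.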

[Image: Ansible Packer Provisioner]

I like to set Ansible up within the projects I’m working on. What I mean is that I don’t keep one central Ansible repository for everything; I keep playbook/role code either in its own repository per project or within the project itself.

I’m sure you’re thinking this leads to a ton of code duplication, but I’ve found that it hasn’t. I use as much as I can from Ansible Galaxy, and I still keep a central repository for code that gets reused. Keep in mind, these are mostly deployment/provisioning scripts. I make sure they’re idempotent, but I don’t plan on running the playbooks against the hosts very often to keep state.
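Sharing roles through Ansible Galaxy instead of copying them between projects looks roughly like this requirements.yml; the role names, URL, and versions are illustrative:

    # requirements.yml: declare shared roles rather than duplicating them.
    roles:
      - name: geerlingguy.nginx          # a public Galaxy role
        version: "3.1.4"                 # pin a version (illustrative)
      - name: common-hardening           # hypothetical role from a central internal repo
        src: https://github.com/example-org/ansible-role-hardening
        scm: git
        version: main

Each project then pulls them in with ansible-galaxy install -r requirements.yml.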

Why do I still use other tools?

Keeping a central code base and having Sysops/Operations/DevOps run those playbooks all the time just feels clunky. I like to have a system that keeps state on a host for any persistent infrastructure I may have. As you can see, I use Ansible extensively; I just don’t use it from the CLI. Jenkins jobs and programmatic runs are far easier to audit and account for, since you can build audit logs into however you run it. Sure, you can audit your users with auditd and syslog, but I’ve found the paperwork is typically easier when automated processes are doing the work for you.

It also doesn’t scale well. I have no problem using it to create images, because it’s decently quick when it’s only running against a single server, but running across a large fleet is a problem. Pushing tasks over SSH, largely serially, does not scale well, so Ansible doesn’t scale well either.
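To be fair, Ansible has knobs for parallelism: forks in ansible.cfg (or -f on the command line) sets how many hosts are touched at once, and plays can tune their own rollout, as in this sketch; the knobs raise the ceiling, but the push-over-SSH model is still the bottleneck. The host group here is illustrative:

    # Play-level parallelism knobs (a sketch; "webservers" is illustrative).
    - hosts: webservers
      serial: "25%"        # roll through the group in batches of 25%
      strategy: free       # let each host run ahead instead of task-by-task lockstep
      tasks:
        - name: Example task
          ansible.builtin.ping: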

The codebase also gets a bit crazy when it grows too large. I’ve seen some awful messes in people’s code because they try to do everything in one repo. That’s why I keep my playbooks simple and only use them for provisioning immutable infrastructure, which has no CM tool or SSH access.
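As an illustration of that last point, the final tasks of an image-bake playbook might turn SSH off so the shipped image really is immutable; this is a hypothetical sketch, not a step from my actual playbooks:

    # Hypothetical end-of-bake tasks: leave SSH disabled in the final image.
    # Disabling (not stopping) sshd keeps the in-flight build connection alive.
    - name: Disable the SSH daemon in the baked image
      ansible.builtin.systemd:
        name: sshd
        enabled: false

    - name: Remove the provisioning user's authorized keys
      ansible.builtin.file:
        path: /home/provision/.ssh/authorized_keys   # hypothetical build user
        state: absent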

The fact is, I still have to have persistent systems in some places. For those I prefer Salt: it scales really well, and I prefer its Python base to the Ruby base of Puppet/Chef. On persistent systems you need to lock down SSH. If you don’t, there will always be someone who will log in, change something, and not put it into your codebase/version control. Having an agent is also a good idea if you’re going to go through an audit (SOC 2, FedRAMP, etc.), for exactly these reasons.
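For contrast, a minimal Salt state for a persistent host looks roughly like this; the file name and package are illustrative. The minion agent re-applies it on every highstate run, which is exactly the keeps-state-on-the-host behavior I want there:

    # /srv/salt/nginx.sls (illustrative): desired state the minion enforces.
    nginx:
      pkg.installed: []
      service.running:
        - enable: True
        - require:
          - pkg: nginx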

Final thoughts

I use Ansible exclusively in some environments and for some projects; I just don’t like to use it on persistent systems where controls on systems/operations teams are relaxed. If sufficient controls are in place to ensure Operations/Systems teams can’t SSH into anything, using only Ansible may not be a problem. If you only use it as a provisioner and/or from Jenkins or a build server, it’s very easy to use nothing but Ansible.
