Functional testing of Puppet modules with Docker

In this article I will describe our way of testing Puppet modules and how features of Docker (and LXC containers) are used in our testing framework.

At Spil Games we were early adopters of Puppet. In 2013 the decision was made to upgrade our Puppet infrastructure to version 3.*. Of course we decided to follow all the best practices and do it agile 🙂 . While the official documentation provides a more or less clear overview of the basic components (modules, hiera, node classification), we found there is no optimal (ready-to-use) way of testing the functionality of the modules. That’s why we came up with our own testing solution based on LXC containers. This solution, combined with Gerrit and Jenkins, gives us a very solid and fast framework for testing module functionality.

How we use Puppet

First I’d like to give a general overview of our Puppet infrastructure.

We run Puppet in different ways:

  • master-agent setup: production environment
  • local puppet apply: customizing (“baking”) OpenStack images, initial configuration of the puppetmaster

Our Puppet modules are used to configure hosts running on different platforms: real hardware, OpenStack and Vagrant.

How our Puppet is configured

Every host can be described in Puppet with three mandatory facter variables:

  • ‘role’: only a single role can be assigned to a host. Examples of roles: ‘gerrit’, ‘web_frontend’, ‘hadoop_namenode’
  • ‘platform’: supported platforms: ‘lxc’, ‘openstack’, ‘physical’, ‘virtualbox’
  • ‘spil_environment’: environments: ‘production’, ‘test’, ‘puppet_test’

The facts ‘role’ and ‘platform’ are used in site.pp to resolve and include the corresponding Puppet role class.
Modules are included in the site.pp node definition as follows:
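A simplified sketch of that node definition (the full working version is linked below):

    # site.pp -- resolve the role class from facts (simplified sketch)
    node default {
      include "roles::${::platform}::${::role}"
    }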

For example, the role ‘hadoop_namenode’ on the ‘openstack’ platform maps to the Puppet class:
roles::openstack::hadoop_namenode

Here you can see what a working version of site.pp looks like: https://github.com/lruslan/puppet_test/blob/master/manifests/site.pp

Role classes include all the necessary Puppet modules.
All role classes include the ‘base’ class. This class provides a baseline configuration and should be applied everywhere.
For example, the ‘base’ class includes low-level modules which provide the setup and configuration of syslog_ng, ssh, sysctl, security, etc.

We use hiera to keep configuration data separate from the module code:
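A hypothetical minimal hierarchy, just to illustrate the idea (our real hierarchy is more elaborate):

    # hiera.yaml -- hypothetical minimal hierarchy (Hiera 1 syntax)
    :backends:
      - yaml
    :yaml:
      :datadir: /etc/puppet/hieradata
    :hierarchy:
      - "%{::spil_environment}/%{::role}"
      - common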

How we verify our modules

From the very beginning we decided to use Gerrit as our code review system. Gerrit comes with lots of useful features which make our lives simpler; it deserves a blog post of its own.

Submitted code passes basic verification:
1) Puppet-lint verification
2) Syntax validation of:

  • ERB templates: use erb to parse each template file and detect syntax errors
  • YAML files: use the Ruby ‘yaml’ library to import each YAML file and detect syntax errors
  • Puppet manifests (*.pp): validate each manifest with the Puppet parser
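These checks boil down to one-liners along these lines (a sketch; the file names are illustrative):

    # ERB template syntax check
    erb -P -x -T '-' mytemplate.erb | ruby -c

    # YAML syntax check
    ruby -e "require 'yaml'; YAML.load_file('data.yaml')"

    # Puppet manifest validation
    puppet parser validate manifests/init.pp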

What about testing?

At the moment I know of four ways of testing modules:

  1. smoke tests (functional testing) http://docs.puppetlabs.com/guides/tests_smoke.html
  2. rspec-puppet – test the behaviour of modules using the compiled catalog (non-functional testing) http://rspec-puppet.com/tutorial/
  3. apply the module in “--noop” mode to detect compilation errors
  4. test by applying changes on a live (production) environment

How we test Puppet modules

We test changes in production. Not always, though: for critical changes we have the ability to test using custom Puppet environments, without merging the changed code to master. http://docs.puppetlabs.com/guides/environment.html

With the help of scripts, a testing branch is cloned to an individual Puppet environment across the puppet masters. A user can then log in to the production node and run puppet against that environment, roughly like this:
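    # run the agent once against the custom environment
    # (the environment name is illustrative)
    puppet agent --test --environment=my_test_branch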

It’s up to the submitter to verify the results and publish the review request to Gerrit.

But it’s not enough!

We have cross-dependencies between the modules: some (low-level) modules are included in more complex (high-level) modules.
Testing only the modified modules cannot detect regression bugs which may affect dependent modules.
For example, the ‘nginx’ and ‘mysql’ modules are used in the ‘gerrit’ module. Major changes in the behaviour or parameters of the ‘nginx’ module may also affect ‘gerrit’.
To catch such bugs we could either track module dependencies and run tests only on the dependent modules, or just run tests on all Puppet modules.

Rspec-puppet is a very useful tool to test the consistency of the modules. But this type of test cannot cover issues which may appear during a normal puppet run, as it only analyses the compiled catalog without actually applying any changes.
For example, we have an ‘nginx’ module which contains an ‘exec’ resource to run some tricky bash/python script. An rspec-puppet test can verify that the compiled catalog contains the resource ‘exec my_secret_bash_nginx_script’, but it will never tell you whether the execution of that script succeeds or fails.
The same goes for ‘package’ resources – sometimes RPM packages contain a lot of bash magic and assumptions.
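An rspec-puppet test for the ‘exec’ case above looks roughly like this (a sketch; the spec file path follows the usual convention and the resource name comes from the example):

    # spec/classes/nginx_spec.rb -- rspec-puppet sketch
    require 'spec_helper'

    describe 'nginx' do
      # passes as long as the catalog contains the resource --
      # it never actually runs the script
      it { should contain_exec('my_secret_bash_nginx_script') }
    end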

Smoke tests look like the most reliable way to test the functionality of Puppet modules.

Smoke tests require an extra ‘tests’ directory inside the module. A test for a class is just a manifest that declares the class. To use them, the user runs ‘puppet apply’ pointing it at the manifest file with the tests. Once applied to the system, they exercise most of the module’s functionality.
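A minimal smoke test following the Puppet Labs convention (the module name is illustrative):

    # modules/nginx/tests/init.pp -- the smoke test just declares the class
    include nginx

It is applied directly:

    puppet apply modules/nginx/tests/init.pp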

With smoke tests we had to address a few things:

  • Test manifests are located outside the module’s “manifests” directory. They cannot be loaded from site.pp in the same way the usual classes are loaded. To include such tests, we have to run puppet differently than we normally would.
  • To avoid collisions between modules, tests have to run in an isolated environment. This is especially important for complex modules (modules which include other modules).

Our way of functional testing

Using the idea of smoke tests, we introduced something very similar.
In our configuration, test mode is enabled by setting the facter variable spil_environment=’puppet_test’. The facter variable ‘module’ also has to be set, to specify which module should be tested.

These variables are used in the node definition in site.pp. In ‘test mode’ the node includes two classes:

  • “roles::${::platform}::base” – the collection of resources we create on all systems
  • “${::module}::test” – contains all the resources and parameters necessary to test the module’s functionality

The class ‘::modulename::test’ should be published inside the module’s manifests directory as ‘modulename/manifests/test.pp’, so the Puppet autoloader can find it.
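Put together, a simplified sketch (the linked site.pp shows the real version; the nginx test class is hypothetical):

    # site.pp -- the test-mode branch (simplified)
    if $::spil_environment == 'puppet_test' {
      include "roles::${::platform}::base"
      include "${::module}::test"
    }

    # modules/nginx/manifests/test.pp -- hypothetical test class
    class nginx::test {
      include ::nginx
      # ...plus whatever extra resources and parameters are needed
      # to exercise the module's functionality
    }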

Currently, to test a module we run “puppet apply” with the module name specified and the parameter “--detailed-exitcodes”. After the puppet run we analyse the exit code to detect errors and generate reports.
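A single-module run then looks roughly like this (a sketch; the exact invocation lives in our wrapper scripts):

    # facts are injected through FACTER_* environment variables
    FACTER_spil_environment=puppet_test FACTER_module=nginx \
        puppet apply --detailed-exitcodes /root/puppet/manifests/site.pp
    # --detailed-exitcodes: 0 = no changes, 2 = changes applied,
    # 4 = failures, 6 = changes applied plus failures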
Later we plan to extend the tests with http://serverspec.org/ to add checks which analyse the resulting state of the system.

How we run our tests and why Docker is so useful

To be useful, a test environment should meet the following criteria:

  1. Modules should not conflict – each is tested inside an isolated virtual environment
  2. Ideally, each module is tested on a clean (freshly provisioned) host
  3. The time to run the tests should be as short as possible: under 5-7 minutes
  4. Everything should be automated

We tried full virtualisation (VirtualBox), which proved to be resource-inefficient and slow.

LXC containers together with Docker provide a light and efficient virtualisation solution.
A virtual environment in LXC terms is called a ‘container’.

Normally, starting a container takes just a few seconds (typically 2-5).

Using Docker we can quickly launch LXC containers (one container per Puppet module), apply the tests in the virtual environment, analyse the results and make a decision.

Docker provides a number of useful features which we use in our testing framework:

  • snapshots of running containers: such snapshots can be used to start new containers
  • a Python library to manage containers: start, stop, snapshot, remove, etc.
  • containers can bind-mount external directories from the host system: this is how we share the local clone of our Puppet repository between the containers

We can spawn a number of containers in parallel, which obviously speeds up testing.

In our setup all server roles share the base class (baseline configuration).
All the heavy lifting is done during the base configuration – most of the time is spent waiting for the installation of system packages.
High-level Puppet modules are usually not heavy – normally they install a few packages, modify configuration files and restart services.

Having the ability to snapshot running containers, we can apply the base configuration (apply the base Puppet class) and then take a snapshot of the configured system.
We can later use such a base snapshot to spawn containers and use them to test the rest of the modules.
This optimisation speeds up testing and even saves on disk operations.

Detailed workflow

1) clone the Puppet repository

2) import the initial docker image ‘spil/slc:6.5’:
in our case a minimal installation of Scientific Linux

3) create the custom image ‘spil/slc-puppet:6.5’:
use a Dockerfile to build an image with the SSHD service configured, RSA keys published and puppet installed.
A container started from this image allows logging in as root and running puppet.
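A minimal sketch of such a Dockerfile (package and file names are illustrative; the real one lives in the linked project):

    # Dockerfile -- custom image on top of the minimal Scientific Linux image
    FROM spil/slc:6.5
    RUN yum install -y openssh-server puppet
    # generate an SSH host key and publish the public key for root login
    RUN ssh-keygen -q -t rsa -f /etc/ssh/ssh_host_rsa_key -N ''
    ADD authorized_keys /root/.ssh/authorized_keys
    EXPOSE 22
    CMD ["/usr/sbin/sshd", "-D"]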

4) start the container ‘puppet-base’ and apply the Puppet ‘base’ role:
use the image ‘spil/slc-puppet:6.5’
bind the volume /root/puppet (the source is the external checkout of the Puppet repository created in step #1)
wait until the container has started (IP address assigned, SSHD started)
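On the command line this is roughly (the host-side checkout path is illustrative):

    docker run -d --name puppet-base \
        -v /srv/checkout/puppet:/root/puppet \
        spil/slc-puppet:6.5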

5) snapshot the ‘puppet-base’ container to create an image with the name ‘spil/slc-puppet-base:6.5’
This image has the base setup we apply on all servers.
To test specific Puppet modules we can apply them on top of this base image, which eliminates the installation of the base components for every single test environment.
The base configuration does not change often, so this image can be reused between runs, which significantly decreases the time needed to run the tests.
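The snapshot itself is a single command:

    docker commit puppet-base spil/slc-puppet-base:6.5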

6) test the modules:
for every Puppet module, launch a container with the name ‘puppet_$modulename’ from the base image ‘spil/slc-puppet-base:6.5’ and run the tests

Automation

All the described steps can easily be scripted, and the https://github.com/dotcloud/docker-py library provides an easy way to manage docker containers.
We use a Jenkins job to run an orchestration script which drives all the testing.

You can see an almost identical copy of the script here: https://github.com/lruslan/puppet_test/blob/master/docker/puppet_test.py
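The core of it boils down to a handful of docker-py calls, roughly like this (a simplified sketch using the docker-py API of that era; the container name, image and host path are illustrative, and error handling is omitted):

    # sketch: spawn a test container from the base image via docker-py
    import docker

    client = docker.Client(base_url='unix://var/run/docker.sock')

    # create a container for one module, sharing the repository checkout
    container = client.create_container('spil/slc-puppet-base:6.5',
                                        volumes=['/root/puppet'],
                                        name='puppet_nginx')
    client.start(container,
                 binds={'/srv/checkout/puppet':
                            {'bind': '/root/puppet', 'ro': False}})

    # ... run 'puppet apply' inside the container (e.g. over SSH),
    # collect the exit code, then clean up
    client.stop(container)
    client.remove_container(container)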

This orchestration script has the following features:

  • ‘full’ mode: detect the puppet modules, build the base docker image using the base module and run tests for the rest of the modules
  • ‘quick’ mode: reuse the previously created base image and run the tests for the modules
  • parallel mode: it’s possible to specify the number of workers (so multiple tests run in parallel)
  • Jenkins integration: detect which Puppet modules have changed since the last Jenkins build
  • results processing: publish an HTML report with the results and the ability to see the details (stdout/stderr) of every test
  • ability to set a timer and stop containers if a test takes longer than expected

To publish the status of the tests we generate HTML and use the Jenkins “HTML Publisher Plugin”.

The result can be seen on the build status page:
[screenshot: puppet_test build status page]

With the features mentioned above (containers, the ability to run tests in parallel and the tricks with docker snapshots), a full testing cycle over ALL our modules takes only 7-10 minutes.

Do you want to try it on your own?

To give a more detailed overview, I’ve created a GitHub project with a stripped-down, lightweight version of the isolated Puppet environment containing all the necessary scripts, and also published some very simple modules which can be used for the demo:

https://github.com/lruslan/puppet_test

The project also contains the scripts I use to set up our docker environment.

More details can be found in the project README.

Please give your feedback if you find this topic useful, or let me know if you want to hear more about how we use Puppet (encrypting data in hiera, or the special features that make Gerrit such an awesome tool).


Ruslan Lutsenko

Infrastructure Engineer at SpilGames
