Last week, I attended my first Puppet NYC meetup, which was hosted at Gilt Groupe. As a fairly recent user of puppet, it was great to meet some folks from the NYC community who are using it on a daily basis. Here are my notes from the two great presentations, in hopes that they're useful to other people.


Foreman

Presenter: Eric E. Moore (Brandorr Group LLC)

Foreman is an application that runs atop puppet to let you interact with and see reports of your puppet nodes.  In addition, Foreman can act as a provisioning system for imaging machines with kickstart, signing puppet certificates, and much more.  Foreman scales -- it has been known to power a 4,000-node operation.

The main page of the Foreman UI is a list of all of your hosts, and it links off to a number of reports.  There is a dashboard/overview page which displays a pie chart of your Active/Error/Out of Sync instances as well as the run-distribution (and timings) of previous runs.

Foreman imports information about your hosts from the puppet storeconfigs, and there's detailed output from the last puppet run (on a per-node basis).  Foreman also gives you a UI to access the lists of facts per server, and you can do things like search for all nodes matching a particular fact (e.g. see a list of all machines with processor XYZ).  This data is available via API, as well (more later).

Foreman lets you control which environment a node is in (if you're using environments on the puppet server), but it also lets you set variables that are sent to the puppet client on each run (somewhat like facter).  You can set these variables at four levels: global, domain, host group, and host.  Internally, foreman lets you group machines by domain or into host groups for setting variables like these (the talk at the puppet meetup was that this is an alternative to extlookup).

Foreman is designed for total provisioning.  It supports provisioning via, and configuring, the following (among others):

  • Kickstart, Jumpstart, and Preseed
  • PuppetCA
  • TFTP
  • DNS
  • DHCP
  • Virtual machines (VMWare)
  • Eric and Brian mentioned that they are planning on contributing ec2 provisioning to the project.

Foreman has full role-based access controls, meaning you can give your users access to particular views, reports, operations, or subsets of nodes.  In addition, it provides an audit log of what has changed (including graphs of the number of changes, failures, etc.).  It provides a mechanism to initiate puppet runs from the dashboard, and also has a "destroy" button to clean out the storeconfigs for a particular node.

An interesting feature of Foreman is the REST API, which follows full REST / HTTP semantics for CRUD operations.  Eric mentioned using the API for provisioning nodes as well as for running searches over the nodes in the system.  It was mentioned that authentication for the REST API was less than ideal -- the suggestion was to put some sort of proxy in front of Foreman.
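As a rough sketch of what such API calls look like (the hostname and query syntax below are assumptions for illustration; consult your Foreman version's API documentation for the real endpoints):

```shell
# Hypothetical queries against Foreman's REST interface.  The hostname
# and search syntax are placeholders.

# List all hosts as JSON:
curl -H 'Accept: application/json' http://foreman.example.com/hosts

# Search for hosts matching a particular fact value:
curl -H 'Accept: application/json' \
     'http://foreman.example.com/hosts?search=processorcount=8'
```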

Foreman vs. Puppet dashboard: The conversation seemed to suggest that Foreman's features were a super-set of those of Puppet Dashboard, with a few exceptions.  For example, Puppet Dashboard has support for viewing diffs of your configurations.

Change Management with Puppet

Presenter: Garrett Honeycutt (Puppet Labs) Slides

Ideally, you want all the environments to be exactly the same -- Dev == QA == Staging == PROD -- so that you can catch issues early.  With that said, there's typically some sort of approval criteria to move changes from one environment to the other (code review, QA procedures, etc.).  Given all of these different environments, each environment often has different teams, and sometimes there are conflicting goals.  For example, dev wants to ship features quickly, whereas ops wants production to be stable.

It's important to document the different environments you have and what the policies are.  For example, who owns what, what the order of precedence is, what the SLAs are per environment, etc.  Garrett has seen a flow like the following work well when doing puppet development: Puppet Test Area -> Dev -> QA -> Prod.  In addition, it's important to document the gating factors between environments -- who can approve migrations between environments and how they are approved.

Suggested SVN/git layout looks like this:

    trunk/
    branches/
    tags/
Breaking it down:

  • Branches are short-lived and topical ("feature" branches). For example, branches/123 is a branch to work on ticket 123. All work should be done on a branch and then merged to trunk after review.
  • Tags are immutable, and Garrett has found that BIND-style timestamps work best. 2011041300 would be the first tag for April 13th, 2011 (the last two digits are an incrementing counter).
  • Trunk contains all of the best known working code, but it is not yet thoroughly tested.

Development flow:

Consider these environments: Puppet Test Area -> Dev -> QA -> Prod

  1. When a ticket/change request comes in, create a branch off of trunk. Eventually, the change will get merged back to trunk.
  2. There should be an environment that is always running off of trunk so that you can verify that it works well-enough to create a tag. Tags should always be made off of trunk.
  3. Once a tag has been created, that tag should be deployed to each environment in turn.
  4. Testing is done per environment, and if any verifications fail (for example, there's an issue in QA and it's not a candidate for production), then create a new ticket to fix the issue and go back to #1.
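The steps above might look like this as an svn session (the repository URL, ticket number, tag name, and checkout path are all made up for illustration):

```shell
# 1. Branch off trunk for ticket 123:
svn copy http://svn.example.com/puppet/trunk \
         http://svn.example.com/puppet/branches/123 \
         -m "branch for ticket 123"

# ... develop and review on the branch, then merge back to trunk ...

# 2. Tag trunk once the trunk-tracking environment looks good:
svn copy http://svn.example.com/puppet/trunk \
         http://svn.example.com/puppet/tags/2011041300 \
         -m "tag 2011041300"

# 3. Point each environment's checkout at the new tag in turn:
svn switch http://svn.example.com/puppet/tags/2011041300 /etc/puppet/env/dev
```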

Important: Avoid taking shortcuts; they will become more and more expensive.

Tag generation can be automated, and the selection of tags for environments can be automated as well (svn switch module).
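For illustration, a small shell helper along these lines could compute the next BIND-style tag name (the function name and the assumption that tag names live as entries under a tags/ directory are mine, not from the talk):

```shell
# Hypothetical helper: compute the next YYYYMMDDNN tag name, where NN is
# an incrementing per-day counter starting at 00.
next_tag() {
  tagdir=$1
  today=$(date +%Y%m%d)
  # Highest existing tag for today's date, if any (lexical sort works
  # because the names are fixed-width digits).
  last=$(ls "$tagdir" 2>/dev/null | grep "^$today" | sort | tail -1)
  if [ -z "$last" ]; then
    printf '%s00\n' "$today"
  else
    n=${last#"$today"}                      # strip the date, keep the counter
    printf '%s%02d\n' "$today" $((10#$n + 1))
  fi
}

next_tag tags   # e.g. 2011041300 for the first tag made on April 13th, 2011
```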

Branch development:

It needs to be easy to stand up test instances from branches.  You can either use puppet apply, or have a puppet master that knows about each branch and provides a separate environment for it.  The person reviewing the code can then spin up an instance to verify the module.
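One way to wire up the master-side option is a per-branch environment stanza in puppet.conf (a sketch under assumptions: the section name and paths are made up, and environment configuration syntax varies across puppet versions):

```ini
# Hypothetical puppet.conf on the master: one environment per branch,
# so reviewers can point an agent at the branch under review.
[branch_123]
    modulepath = /etc/puppet/environments/branch_123/modules
    manifest   = /etc/puppet/environments/branch_123/manifests/site.pp
```

A reviewer could then run something like `puppet agent --test --environment branch_123` against that master.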

Release manager:

Garrett has very successfully used the role of "release manager" in the past to facilitate branch management. The RM is responsible for merging all branches back to trunk once they are stable. This person should also monitor commits to each branch so that they can offer constructive criticism early in the process (possibly pair-programming), particularly if there are people new to puppet. The RM can be a rotating position (e.g. Garrett has had it switch weekly in the past).

Multiple teams exchanging code:

  • Use multiple module paths.
  • Communication is super-important. Garrett advocated using svn commit emails so that teams can see what other teams are doing.
  • Private github accounts are supposedly very useful, too.

Test driven development:

  • Use puppet apply to do manifest testing.
  • Recommends having a ${module}/test directory for test files. The test file should show how to use the module you're developing, so that it can be exercised with puppet apply. The test should be written before the module. Note that this is not a unit test: you're not verifying what puppet has done, only that the manifest applies cleanly.
  • End-to-end testing should be done via monitoring. Machines add themselves to nagios and you can verify that services get started correctly that way. Suggestion is that all environments have monitoring for this end-end testing.
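As an illustration of such a test file (the ntp module and class here are hypothetical, not from the talk):

```puppet
# modules/ntp/test/init.pp -- smoke test for a hypothetical ntp module.
# It shows how the module is meant to be used and verifies only that
# puppet apply exits cleanly; it is not a unit test.
include ntp
```

You'd run it with something like `puppet apply --modulepath=modules modules/ntp/test/init.pp`.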

We briefly discussed a use-case of a company that runs hundreds of machines. This organization runs puppet via cron rather than as the puppet agent daemon in order to disable updates during busy hours. During those hours, puppet runs in noop mode so that reports are still generated for Puppet Dashboard. When they are ready to roll out changes during the change window, they first run puppet with noop to preview what would happen and get a green light before actually applying the changes.
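A crontab sketch of that scheme (the window hours are made up for illustration; --noop, --onetime, and --no-daemonize are standard puppet agent flags):

```
# Hypothetical /etc/cron.d entries: noop runs during busy hours so
# Puppet Dashboard still gets reports, real runs only off-hours.

# Busy hours: report what would change, but change nothing.
0 9-17 * * *       root  puppet agent --onetime --no-daemonize --noop

# Change window: apply changes for real.
0 0-8,18-23 * * *  root  puppet agent --onetime --no-daemonize
```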


The April puppet meetup was great, and I definitely plan on attending future meetups to learn what people are doing.  I'd like to thank everyone that made the meetup possible and a great success!