Way back at the beginning of Part II, you set up testing and production environments. Now that you’ve learned more about Puppet, Puppet servers, and deployment methodologies, it’s time to talk about how to use environments effectively.
Environments provide isolation for modules and data. They provide useful ways to organize and separate modules and configuration data for different requirements. There are numerous reasons that people divide systems into different environments: testing changes before they reach production, isolating one-off fixes, and segregating code and data for different teams, among others.
In this chapter, we’ll discuss how to use environments for all of these purposes.
Even if you don’t think you need separate environments, I encourage you to read through this chapter so that you can consider what environments have to offer you. Even the smallest Puppet sites use environments, and there are good reasons for that.
An environment is not a namespace—it’s a hard wall where a module or data either exists in the environment or not. Two Puppet nodes requesting data from two environments might as well be talking to two different Puppet servers. You can have the same data or the same modules in multiple environments, but neither is shared by default.
To make it really clear: a module or data source that isn't in the modulepath of an environment doesn't exist for that environment. This means that two modules with the same name can exist without collision in two different environments.

This isolation enables many ways of working across teams, business units, and diverse requirements. We'll show you how to utilize all of these features here.
Unlike previous versions of Puppet, environments are enabled “out of the box” with Puppet 4. No configuration changes are necessary.
Puppet 4 expects to find a directory for each environment in the location specified by the $environmentpath configuration setting, which defaults to /etc/puppetlabs/code/environments. Each directory within this path is an environment. Within each environment directory should exist the environment's configuration files, manifests, modules, and data.
The default environment for Puppet nodes is production. With a default configuration file, Puppet 4 expects your modules to be installed at /etc/puppetlabs/code/environments/production/modules. The file that configures data lookup for the production environment should be at /etc/puppetlabs/code/environments/production/hiera.yaml.
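The expected layout can be sketched with a few shell commands. This builds the tree under a temporary directory so it is safe to run anywhere; only the directory names mirror the real /etc/puppetlabs/code layout:

```shell
# Sketch of the default production environment layout. Built under a
# temporary directory here so the example is self-contained.
code=$(mktemp -d)   # stand-in for /etc/puppetlabs/code
mkdir -p "$code/environments/production/manifests" \
         "$code/environments/production/modules"
touch "$code/environments/production/hiera.yaml"
ls "$code/environments/production"
```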
There is one additional place that Puppet will look for modules: the directories specified by $basemodulepath, which defaults to /etc/puppetlabs/code/modules. Any modules you place inside this directory will be shared among all environments.
By default, a client informs the Puppet server which environment to use based on the configuration file setting, command-line input, or the default of production. This makes it easy for you to test module or data changes using a test environment by running puppet agent with a different environment on the command line.
A node terminus or ENC can assign a different environment to the client than the one it requested.
For this reason, there are no per-environment settings for ENCs, as the ENC must be queried before the environment is chosen.
In this section, we’ll cover ways to configure code and data at the environment level. Puppet 4.9 and higher provide more flexibility, consistency, and speed than any previous version of Puppet.
Within each environment directory, create an environment.conf file. This has the same configuration format as other Puppet configuration files, but does not support sections. The following settings are allowed in this file:

config_version: A script to supply a configuration version for reports.

manifest: The manifest file or directory for the environment.

modulepath: The list of directories to search for modules, defaulting to modules:$basemodulepath. The module path will be searched until the first location is found that contains a directory with that module name. Modules that exist in multiple directories are never merged.

environment_timeout: How long the server caches the environment's data.

Environment configuration files are parsed after the main configuration, so Puppet configuration options can be interpolated within the environment configuration.
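The first-match module lookup can be sketched in plain shell. The directory layout and module names below are hypothetical, built in a temporary directory:

```shell
# Illustration of first-match module lookup across a module path.
# Directory and module names are hypothetical examples.
tmp=$(mktemp -d)
mkdir -p "$tmp/env/modules/apache" "$tmp/base/modules/apache" "$tmp/base/modules/ntp"
modulepath="$tmp/env/modules:$tmp/base/modules"

find_module() {
  local name=$1 dir
  local IFS=':'
  for dir in $modulepath; do
    if [ -d "$dir/$name" ]; then
      printf '%s\n' "$dir/$name"   # first match wins; later copies are ignored
      return 0
    fi
  done
  return 1
}

find_module apache   # resolves to the environment copy, never merged with base
find_module ntp      # falls through to the base module path
```

Note that the second copy of apache is simply ignored, which is why a stale duplicate in $basemodulepath can shadow surprises into catalog builds.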
Here is an example environment configuration file. This file shows the default values, as if the configuration file did not exist. Any relative paths within the environment configuration file are interpreted relative to the environment directory:
# Defaults to $environmentpath/$environment/manifests
manifest = manifests

# Defaults to $environmentpath/$environment/modules:$basemodulepath
modulepath = modules:$basemodulepath

# Cache until API call to clear cache is received
environment_timeout = unlimited

# A script to supply a configuration version
#config_version = (a script)
You can review a specific environment’s settings using the following command:
$ puppet config print --environment environment
It is entirely possible (albeit unintuitive) for you to use a different environment path for puppet apply versus puppet agent when using a server. You do so by changing environmentpath in the [agent] section of the Puppet configuration file. You can check the agent's settings like so:
$ puppet config print --section agent --environment environment
The manifest path can point to a specific file or a directory. If the target is a directory, every .pp manifest file in the directory will be read in alphabetical order; if it is a file, only that file will be read. It defaults to the manifests/ directory within the environment directory.
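The alphabetical read order is easy to demonstrate: a shell glob expands in the same sorted order Puppet uses. File names here are hypothetical:

```shell
# Sketch: a directory manifest target reads every .pp file in
# alphabetical order. File names are hypothetical examples.
envdir=$(mktemp -d)
mkdir "$envdir/manifests"
touch "$envdir/manifests/site.pp" "$envdir/manifests/defaults.pp" \
      "$envdir/manifests/nodes.pp"
# Pathname expansion sorts, matching the order the manifests are read:
for f in "$envdir/manifests"/*.pp; do basename "$f"; done
```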
In older versions of Puppet, it was common or necessary to assign classes to nodes in a manifests/site.pp file. While still supported, it is no longer a required practice. Assign classes to nodes using Hiera data or with a node terminus (ENC).
All code should reside in modules. You should assign classes using Hiera or ENC data. Independent manifests should contain only resource defaults for the environment.
The global configuration setting disable_per_environment_manifest overrides the environment setting for manifest. If it's set to true, then the target of the global configuration option default_manifest is used regardless of the client environment. This can be used to centralize resource defaults across all environments.
It is possible to disable Puppet Server's caching of the environment data for development and testing. Changes become visible immediately in successive catalog compilations, but disabling the cache has a negative performance impact that makes it unsuitable for high-performance environments. An inconsistent catalog may be returned if new code is deployed while a catalog is being built.
Set the environment timeout to unlimited and use a trigger to refresh the Puppet environment cache when code is deployed.

Use a value of 0 (no caching) for environments undergoing active development with a single client, such as a temporary bugfix or dynamic test environment. Stable environments should use a value of unlimited.
A value of 0 will lead to inconsistent cache states between different server threads. This will cause agent callbacks (file resources, etc.) to fail in confusing ways. More details in “Invalidating the Environment Cache”.

Puppet 4.3 introduced a tiered, pluggable approach to data access called Puppet Lookup. As discussed in “Accepting Input”, parameter values are automatically sourced from the following locations in order:
$confdir/hiera.yaml
$environmentpath/$environment/hiera.yaml

Each of these files can specify its own unique hierarchy, merge, and lookup options. Each of them can select one or more sources of data, including custom backends. In this section you'll find detailed configuration guidance for environment-specific data sources.
In older versions of Puppet there was only a single Hiera configuration file. In order to use environment-specific data sources, it was necessary to interpolate the $environment variable into the data path, like so:
# old Hiera v3 format
:yaml:
  :datadir: /etc/puppetlabs/code/environments/%{::environment}/hieradata
:json:
  :datadir: /etc/puppetlabs/code/environments/%{::environment}/hieradata
A frequently asked question was how to define a separate Hiera hierarchy per environment, which wasn't possible with a single global configuration file. From Puppet 4.4 onward, you can supply a hiera.yaml file in each environment, allowing a custom Hiera hierarchy for each environment.
Avoid interpolating ${::environment} in the global Hiera hierarchy. The global Hiera configuration should not source data from an environment directory. Search for and remove any environment directory paths in the global Hiera configuration file, to avoid redundant lookups in the same data sources. This is a simple hierarchy appropriate for the global Hiera configuration:
hierarchy:
  - name: "Global Data"
    datadir: /etc/puppetlabs/code/hieradata
    path: "common.yaml"
    data_hash: yaml_data
As mentioned in “Configuring Hiera”, data from the environment is queried after the global data. Environment-specific data sources can be especially useful when environments are used by distinct teams with their own data management practices.
Each environment contains its own hiera.yaml file, which allows each environment to have a unique lookup hierarchy. Puppet 4.3 through 4.8 used a Hiera v4 configuration file in the environment. Puppet 4.9 and higher uses the Hiera version 5 file format for all data sources: global, environment, and module. This file format was described in “Configuring Hiera”.
Create a hiera.yaml file within the environment. The following example shows a minimal use case:
---
version: 5
defaults:
  # for any hierarchy level without these keys
  datadir: data        # directory name inside the environment
  data_hash: yaml_data
hierarchy:
  - name: "Hostname YAML"
    path: "hostname/%{trusted.hostname}.yaml"
  - name: "Common Environment Data"
    path: "common.yaml"
Create the datadir directory data/ within the environment, and populate environment-specific Hiera data files there.
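A layout matching the hierarchy shown earlier can be sketched like this; the environment directory is a temp dir here, and the hostname and data values are hypothetical:

```shell
# Sketch: populate the environment's data/ directory. The hostname
# "web01" and the user values are hypothetical examples.
envdir=$(mktemp -d)   # stand-in for the environment directory
mkdir -p "$envdir/data/hostname"
cat > "$envdir/data/common.yaml" <<'EOF'
---
user::name: 'default-user'
EOF
cat > "$envdir/data/hostname/web01.yaml" <<'EOF'
---
user::name: 'web01-admin'
EOF
```

A lookup for user::name on node web01 would hit the hostname-level file first; every other node would fall through to common.yaml.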
As discussed in “Configuring Hiera”, it is possible to define a custom backend for Hiera for use in the environment data hierarchy. This is done in exactly the same way shown in “Backend configuration”.
Previous versions of Puppet used the data_binding_terminus setting to add a global data source for lookup. This was deprecated in Puppet 4 and removed in Puppet 5, in favor of the more flexible Hiera hierarchies.

You can configure Hiera to query a Puppet function at any level in the lookup hierarchy. Specify the backend with lookup_key for the key, and the function name as the value:
- name: 'Environment-specific user account'
  lookup_key: environment::my_local_user
Puppet 4.9 introduced the new Hiera v5 hierarchy and a new custom backend architecture. This is evolving quickly. Track the latest changes at “Hiera: How custom backends work”.
Create a function in the environment:: namespace, as described in “Defining Functions”, except that all paths are relative to the environment directory instead of the module directory.
Replace this simple Ruby function example with your own. The local_user() function must return a hash whose keys contain the entire class name, exactly the same as how keys are written in Hiera.
Puppet::Functions.create_function(:'environment::local_user') do
  def local_user()
    # the following is just example code to be replaced
    # Return a hash with parameter name to value mapping for the user class
    return {
      'user::id'   => 'Default for parameter $id',
      'user::name' => 'Default for parameter $name',
    }
  end
end
No changes to the Puppet manifests are required to enable environment Hiera data. Once you have created the hiera.yaml file, lookups not satisfied by the global data source will automatically query the environment data. Removing the Hiera configuration file will disable lookups of environment data.
In this section, we’re going to document some well-known patterns for development, and how to structure the environments to support those patterns.
The most common development strategy is to develop changes to a module on a feature branch. Merge that module’s changes to master for testing. Then tag a new branch or merge to an existing release branch to push the changes into production.
It is easy to support this development workflow with a fixed set of environments. The environments would be structured and used as shown in Table 29-1.
Environment | Branch name | Description
---|---|---
dev | dev | Develop and test changes here.
test | master | Merge to master and test on production-like nodes.
production | master | Push master to production nodes and/or Puppet servers.
While the branch names and the steps involved differ slightly from team to team, this is a well-known strategy that many development teams use, and every code management tool is capable of implementing. In a small environment, or with a small DevOps team, this might be sufficient.
A small variation, outlined in Table 29-2, can support many more engineers, including feature and hotfix branches working on the same module at the same time. A new environment is created for each feature branch. After testing is complete, the change is merged to test for testing against a production-like environment. The feature branch environment is destroyed when the change is merged to master and pushed to production.
Environment | Branch name | Description
---|---|---
feature_name | feature_name | A new branch for a single feature or bugfix.
test | test | Merge to test for review, Q/A, automated tests, etc.
production | master | Merge to master and push to production. Destroy feature branch.
You can utilize any method that supports your workflow. Tables 29-1 and 29-2 show two basic and common structures.
Whichever strategy you use, with a small set of environments that track a single common master branch, you can use Puppet with “out of the box” defaults.
It is very common to have a test environment where changes to modules are tried out. That's exactly what we created in Chapter 10. This might serve your needs for testing new modules if you're part of a small team.
When you're fixing a production problem, it is often necessary to test changes in isolation, away from other modules that are in a different stage of development. In this scenario, it's a good practice to use a one-off test environment.
Simply go into the $codedir/environments/ directory and create a new environment named for the problem. I tend to use the ticket number of the problem I’m working on, but any valid environment name, such as your login name or even annoying_bug/, will work.
There are two strategies for doing this. One is to check out everything from the production environment into this environment's directory. Another is to set modulepath to include the production environment's modules directory ($environmentpath/production/modules/), and then check out only the problematic module into this environment's modules/ directory. Here is an environment.conf file that implements that strategy:
environment_timeout = 0
manifest = $environmentpath/production/manifests
modulepath = ./modules:$environmentpath/production/modules:$basemodulepath
Check out only the modules related to this bug in the testing environment’s module directory. When looking for modules, the server will find the module you are working on first. All other modules will be found in the production module directory. This allows you to work on a quick fix without pulling in dozens or hundreds of unrelated modules.
When the problem is solved and merged back to the production code base, rm -r the one-off environment and go grab a cup of your favorite poison.
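The setup steps above can be sketched as a small script. The code directory is a temp dir here, and the ticket-numbered environment name is hypothetical:

```shell
# Sketch: scaffold a one-off test environment named after a ticket.
# "ticket4711" and the temp codedir are hypothetical stand-ins.
codedir=$(mktemp -d)   # stand-in for /etc/puppetlabs/code
envdir="$codedir/environments/ticket4711"
mkdir -p "$envdir/modules"
cat > "$envdir/environment.conf" <<'EOF'
environment_timeout = 0
manifest = $environmentpath/production/manifests
modulepath = ./modules:$environmentpath/production/modules:$basemodulepath
EOF
```

After this, check out only the module under test into $envdir/modules/ and run the agent with --environment ticket4711.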
Environments allow diverse teams to segregate their code and data according to their own standards. This allows for the freedom of self-direction, while also letting teams share common code and organizational data.
To give each team freedom with its own modules and data, start with the following configuration and adjust as necessary for your teams:

- Place shared modules in the basemodulepath, usually /etc/puppetlabs/code/modules/.
- Place each team's own modules in its environments/team/modules directory.
- Set hiera as the environment data provider in /etc/puppetlabs/code/environments/<team>/environment.conf.

The per-environment Hiera hierarchy should contain only data specific to this team's nodes. There's no performance impact: even if Puppet doesn't cache the environment data, this data is used for every catalog build, so these files always reside in the filesystem cache.
Environments provide a useful separation that allows independent development of new features and bug fixes. However, it does enable a diverse structure of many modules in multiple states of development. Tracking dependencies would be a nightmare without a tool to manage this. r10k provides a way to track and manage dependencies that snaps right into a branch-oriented workflow.
Create a cache directory for r10k and make it writable by the vagrant user before installing the r10k gem:
$ sudo mkdir /var/cache/r10k
$ sudo chown vagrant /var/cache/r10k
$ sudo gem install r10k
Fetching: r10k-2.2.0.gem (100%)
Successfully installed r10k-2.2.0
Parsing documentation for r10k-2.2.0
Installing ri documentation for r10k-2.2.0
1 gem installed
Within each environment directory, create a file named Puppetfile. Following is an example that should be pretty easy to understand:
# Puppet forge
forge "http://forge.puppetlabs.com"

moduledir = 'modules'   # relative path to environment (default)

# Puppet Forge modules
mod "puppetlabs/inifile", "1.4.1"    # get a specific version
mod "puppetlabs/stdlib"              # get latest, don't update thereafter
mod "jorhett/mcollective", :latest   # update to latest version every time

# track master from GitHub
mod 'puppet-systemstd',
  :git => 'https://github.com/jorhett/puppet-systemstd.git'

# Get a specific release from GitHub
mod 'puppetlabs-strings',
  :git => 'https://github.com/puppetlabs/puppetlabs-strings.git',
  #:branch => 'yard-dev'              # an alternate branch
  # Define which version to install using one of the following
  :ref => '0.2.0'                     # a specific version
  #:tag => '0.1.1'                    # or specific tag
  #:commit => '346832a5f88a0ec43d'    # or specific commit
As you can see, this format lets you control the source and the version on a module-by-module basis. Development environments might track the master branch to get the latest module updates, whereas production environments can lock in to release versions. This allows each environment individual levels of specification and control.
You can validate the Puppetfile syntax using the r10k puppetfile check command. This only works when you are in the environment directory containing the Puppetfile and modules/ subdirectory:
[vagrant@client ~]$ cd /etc/puppetlabs/code/environments/test
[vagrant@client test]$ r10k puppetfile check
Syntax OK
r10k can remove any module not specified in the Puppetfile. This can be especially useful after a major update has been merged, where some modules are no longer necessary:
[vagrant@client ~]$ cd /etc/puppetlabs/code/environments/test
[vagrant@client test]$ r10k puppetfile purge -v
INFO -> Removing unmanaged path /etc/puppetlabs/code/environments/test/modules/now-obsolete
The purge command will silently wipe out any module that you forgot to add to the Puppetfile. Always use -v or --verbose when performing a purge.

To prevent r10k from wiping out a module you are building, you can add an entry to the Puppetfile that tells r10k to ignore it:
# still hacking on this
mod 'my-test', :local => true
Create a new repository to contain a map of environments in use. This is called a control repository.
Add the following files from your production environment: environment.conf, the Puppetfile, and the manifests/ directory.
If you want to import from your existing setup, the process could look something like this:
[vagrant@client ~]$ cd /etc/puppetlabs/code/environments/production
[vagrant@client production]$ git init
[vagrant@client production]$ git checkout -b production
Switched to a new branch 'production'
[vagrant@client production]$ git add environment.conf
[vagrant@client production]$ git add Puppetfile
[vagrant@client production]$ git add manifests
[vagrant@client production]$ git commit -m "production environment"
[production (root-commit) 7204191] production environment
 3 files changed, 71 insertions(+)
 create mode 100644 environment.conf
[vagrant@client production]$ git remote add origin https://github.com/example/control-repo.git
[vagrant@client production]$ git push --set-upstream origin production
Counting objects: 3, done.
Compressing objects: 100% (2/2), done.
Writing objects: 100% (3/3), 657 bytes | 0 bytes/s, done.
Total 3 (delta 0), reused 0 (delta 0)
To https://github.com/jorhett/control-repo-test.git
 * [new branch] production -> production
Branch production set up to track remote branch production from origin.
If you want to start from an existing example, clone the Puppet Labs control repository template.
You’ll need to create an r10k configuration file at /etc/puppetlabs/r10k/r10k.yaml. You can install one from the learning repository we’ve been using with this command:
$ mkdir /etc/puppetlabs/r10k
$ cp /vagrant/etc-puppet/r10k.yaml /etc/puppetlabs/r10k/
Edit the file to list your control repository as a source. The file should look like this:
# The location to use for storing cached Git repos
cachedir: '/var/cache/r10k'

# A list of repositories to create
sources:
  # Clone the Git repository and instantiate an environment per branch
  example:
    basedir: '/etc/puppetlabs/code/environments'
    remote: '[email protected]:jorhett/learning-mcollective'
    prefix: false

# An optional command to be run after deployment
#postrun: ['/path/to/command','--opt1','arg1','arg2']
You can list multiple sources for teams with their own control repositories. Enable the prefix option if the same branch name exists in multiple sources. As almost every control repo has a production branch, this will almost always be necessary:
sources:
  storefront:
    basedir: '/etc/puppetlabs/code/environments'
    remote: '[email protected]:example/storefront-control'
    prefix: true
  warehouse:
    basedir: '/etc/puppetlabs/code/environments'
    remote: '[email protected]:example/warehouse-control'
    prefix: true
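The directory names r10k creates under basedir can be sketched as follows. The source and branch names are hypothetical; the underscore join reflects r10k's prefix behavior of prepending the source name:

```shell
# Sketch: how an environment directory name is derived from a source,
# a branch, and the prefix setting. Names are hypothetical examples.
env_dir() {
  local source=$1 branch=$2 prefix=$3
  if [ "$prefix" = "true" ]; then
    printf '%s_%s\n' "$source" "$branch"   # prefixed: no collision
  else
    printf '%s\n' "$branch"                # unprefixed: branch name only
  fi
}
env_dir storefront production true    # storefront_production
env_dir warehouse production true     # warehouse_production
env_dir example production false      # production
```

Without prefixing, both sources above would try to deploy a directory named production into the same basedir, which is exactly the collision prefix: true avoids.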
Create a branch of this repository for each environment in the $codedir/environments directory, as shown here:
[vagrant@client ~]$ cd /etc/puppetlabs/code/environments
[vagrant@client environments]$ git clone production test
[vagrant@client environments]$ cd test
[vagrant@client test]$ git checkout -b test
If the environment directory already exists inside the specified basedir, the clone command will fail. You may want to rename the existing environment directory, then move the files to the newly created repository:
[vagrant@client environments]$ mv test test.backup
[vagrant@client environments]$ git clone production test
[vagrant@client environments]$ cd test
[vagrant@client test]$ git checkout -b test
[vagrant@client test]$ mv ../test.backup/* ./
Make any changes that are specific to the test environment, then commit the repository back to the control repo with the following commands:
[vagrant@client test]$ git add changed files
[vagrant@client test]$ git commit -m "adding test environment"
[vagrant@client test]$ git push --set-upstream origin test
Follow this process any time you wish to create a new environment. No changes to the r10k configuration are necessary, as each branch is automatically checked out into an environment path of the same name.
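The branch-to-environment mapping can be sketched in a few lines; the branch names are hypothetical, and a temp directory stands in for the real environments path:

```shell
# Sketch: each control-repo branch becomes an environment directory of
# the same name. Branch names are hypothetical examples.
base=$(mktemp -d)   # stand-in for /etc/puppetlabs/code/environments
for branch in production test feature_login; do
  mkdir -p "$base/$branch"   # r10k creates one directory per branch
done
ls "$base"
```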
Now that you have configured everything, use the display command to see what r10k would do for you:
[vagrant@puppetmaster environments]$ r10k deploy display -v
example (/etc/puppetlabs/code/environments)
- test
  modules:
    - inifile
    - stdlib
    - mcollective
    - systemstd
    - strings
- production
  modules:
    - inifile
    - stdlib
    - mcollective
    - systemstd
    - strings
That looks exactly like what we have configured. Now you can either move to another server that doesn’t have this environment data, or you can move it aside:
[vagrant@client ~]$ cd /etc/puppetlabs/code
[vagrant@client code]$ mv environments environments.backup
Now redeploy your environments using r10k, as shown here:
[vagrant@client ~]$ r10k deploy environment -p -v
Deploying environment /etc/puppetlabs/code/environments/test
Deploying module /etc/puppetlabs/code/environments/test/modules/inifile
Deploying module /etc/puppetlabs/code/environments/test/modules/stdlib
Deploying module /etc/puppetlabs/code/environments/test/modules/mcollective
Deploying module /etc/puppetlabs/code/environments/test/modules/systemstd
Deploying module /etc/puppetlabs/code/environments/test/modules/strings
Deploying environment /etc/puppetlabs/code/environments/production
Deploying module /etc/puppetlabs/code/environments/production/modules/inifile
Deploying module /etc/puppetlabs/code/environments/production/modules/stdlib
Deploying module /etc/puppetlabs/code/environments/production/modules/mcollective
Deploying module /etc/puppetlabs/code/environments/production/modules/systemstd
Deploying module /etc/puppetlabs/code/environments/production/modules/strings
It’s that easy to deploy your environments on a Puppet server or on a Vagrant instance for testing purposes. Anyone with access to the control repo can re-create the known state of every environment in a minute. You can use this approach for building new Puppet servers, staging infrastructure, or development setups on your laptop. Seriously, it doesn’t get easier than this.
When a module is added or upgraded, the changes can be tested in one environment and then merged from the development branch to production.
To check out or update the files in an environment (environment.conf, Puppetfile, modules, etc.), use this command:
[vagrant@puppetmaster ~]$ r10k deploy environment test -v
To update the modules specified by an environment’s Puppetfile, use this command:
[vagrant@puppetmaster ~]$ r10k deploy environment test --puppetfile -v
You can update a specific module in all environments, or just one environment:
[vagrant@puppetmaster ~]$ r10k deploy module stdlib -v
[vagrant@puppetmaster ~]$ r10k deploy module -e test mcollective -v
Note the -v verbose option in each of these examples. The purge command will silently remove modules that aren't listed in the Puppetfile. Always use -v or --verbose when performing destructive operations.

When you are comfortable using r10k, you may wish to utilize r10k commands within Git post-commit hooks, in Jenkins jobs, or through MCollective to automatically push out changes to environments.
When using environment-specific Hiera data, you’ll want to deploy the Hiera data along with the modules. In that situation, add the hiera.yaml and hieradata/ directory to the control repo. This is a common strategy when the same people edit the code and the data.
When the data in every environment is similar, an alternative strategy is to place all Hiera data within its own repo, and utilize r10k to install the data separately from the code. This is useful when you have different permissions or deployment processes for data changes and code changes.
To enable automatic data checkout, add the Hiera repo as a source to the r10k configuration file /etc/puppetlabs/r10k/r10k.yaml:
sources:
  control:
    basedir: '/etc/puppetlabs/code/environments'
    remote: '[email protected]:example/control'
    prefix: false
  hiera:
    basedir: '/etc/puppetlabs/code/hieradata'
    remote: '[email protected]:example/hieradata'
    prefix: false
Then add data_provider = hiera to environment.conf, or add the environment directory to the global data provider by using the $::environment variable in /etc/puppetlabs/code/hiera.yaml:
:yaml:
  :datadir: /etc/puppetlabs/code/environments/%{::environment}/hieradata
As mentioned in “Configuring an Environment”, it is best practice to configure stable environments with environment_timeout set to unlimited. This provides the highest performance for remote clients.
After pushing a change to the environment, you will need to ask the server to invalidate the environment cache. A trigger like this can be easily created in Jenkins, Travis CI, or any other code deployment tool.
Create a client key for this purpose. Do not use a node key, as this key will be assigned special privileges. Give the key a unique name that makes the purpose obvious:
[jenkins@client ~]$ puppet config set server puppet.example.com
[jenkins@client ~]$ puppet agent --certname code-deployment --test --noop
Info: Creating a new SSL key for code-deployment
Info: csr_attributes file loading from /etc/puppetlabs/puppet/csr_attributes.yaml
Info: Creating a new SSL certificate request for code-deployment
Info: Certificate Request fingerprint (SHA256): 96:27:9B:FE:EB:48:1B:7B:28:BC:CD:FB:01:C8:37:B8:5B:29:02:59:D7:31:F8:80:8A:53
Exiting; no certificate found and waitforcert is disabled
Authorize the key on the Puppet server with puppet cert sign if autosign is not enabled. Then rerun the agent with the following command:
[jenkins@client ~]$ puppet config set server puppet.example.com
[jenkins@client ~]$ puppet agent --certname code-deployment --no-daemonize --noop
At this point, you have a key and certificate signed by the Puppet CA. Next, add rules to the /etc/puppetlabs/puppetserver/conf.d/auth.conf file to permit this certificate to delete the environment cache. The following section should be added prior to the default deny rule:
# Allow code deployment certificate to invalidate the environment cache
{
    match-request: {
        path: "/puppet-admin-api/v1/environment-cache"
        type: path
        method: delete
    }
    allow: [ code-deployment ]
    sort-order: 200
    name: "environment-cache"
},
{
    # Deny everything else. This ACL is not strictly
Restart the Puppet Server service in order to activate the change.
[vagrant@puppetserver ~]$ sudo systemctl restart puppetserver
At this point, you should be able to invalidate the cache by supplying the code deployment key and certificate with your request. Here's an example using the curl command:
[jenkins@client ~]$ ln -s .puppetlabs/etc/puppet/ssl ssl
[jenkins@client ~]$ curl -i --cert ssl/certs/code-deployment.pem \
    --key ssl/private_keys/code-deployment.pem --cacert ssl/certs/ca.pem \
    -X DELETE https://puppet.example.com:8140/puppet-admin-api/v1/environment-cache
HTTP/1.1 204 No Content
Build this request into your code deployment mechanism, and you'll get the updated manifests immediately after the code push is complete.
204 No Content is the expected response. An error will return a 5xx code.

As discussed earlier, Puppet modules can add plugins to be utilized by a Puppet server, such as resource types, functions, providers, and report handlers.
After deploying changes to module plugins, inform Puppet Server of the changes so that it can restart the JRuby instances. This should be added to the deployment process immediately after invalidating the environment cache.
As part of the same deployment automation, utilize the client key created in the previous section. Permit this certificate to restart the JRuby instances in the /etc/puppetlabs/puppetserver/conf.d/auth.conf file. The following section should be added prior to the default deny rule:
# Allow code deployment certificate to restart the JRuby instances
{
    match-request: {
        path: "/puppet-admin-api/v1/jruby-pool"
        type: path
        method: delete
    }
    allow: [ code-deployment ]
    sort-order: 200
    name: "jruby-pool"
},
{
    # Deny everything else. This ACL is not strictly
Restart the Puppet Server service in order to activate the change:
[vagrant@puppetserver ~]$ sudo systemctl restart puppetserver
At this point, you should be able to restart the JRuby pool by supplying the code deployment key and certificate with your request. Here’s an example using the curl command:
[jenkins@client ~]$ ln -s .puppetlabs/etc/puppet/ssl ssl
[jenkins@client ~]$ curl -i --cert ssl/certs/code-deployment.pem \
    --key ssl/private_keys/code-deployment.pem --cacert ssl/certs/ca.pem \
    -X DELETE https://puppet.example.com:8140/puppet-admin-api/v1/jruby-pool
HTTP/1.1 204 No Content
Build this request into your code deployment mechanism to ensure that changes to functions, report handlers, and other plugins are visible immediately.
In this chapter, you've learned how to configure environments, isolate modules and data per environment, support team and branch-based development workflows, deploy environments with r10k, and invalidate the server's environment cache after deployment.
Don’t limit yourself to just the strategies mentioned here. I’ve covered a few common patterns to provide you with a basic understanding of the choices available, and how to use them. Keep reading, learning, and trying new things.