In this chapter, we will configure and enable the orchestration tool that shipped as part of Puppet 4: the Marionette Collective, or MCollective. You’ll learn how to use MCollective to control the Puppet agent on your nodes.
If you used puppet kick in the past, you are likely aware that Puppet Labs has deprecated it and removed support for it in Puppet 4. The MCollective Puppet agent replaces puppet kick in both the community and Puppet Enterprise product lines, and provides significantly more features and functionality.
Puppet Labs provides Yum and Apt repositories containing packages for the MCollective server, clients, and some officially supported plugins. This community repository supplements the OS vendor repositories for the more popular Linux distributions.
Install that repository as follows:
$ sudo yum install http://yum.puppetlabs.com/puppetlabs-release-el-7.noarch.rpm
Installed:
  puppetlabs-release.noarch 0:7-11
Install the MCollective Puppet module from the Puppet Forge like so:
$ cd /etc/puppetlabs/code/environments/production
production$ puppet module install jorhett-mcollective --modulepath=modules/
Notice: Preparing to install into environments/production/modules ...
Notice: Downloading from https://forgeapi.puppetlabs.com ...
Notice: Installing -- do not interrupt ...
/etc/puppetlabs/code/environments/production/modules
└── jorhett-mcollective (v1.2.1)
  └── puppetlabs-stdlib (v4.3.0)
As shown here, the Puppet module installer will pull in the puppetlabs/stdlib module if you don’t have it already.
Setting up MCollective is quick and easy. For this installation, you will create four unique strings for authentication: a server password, a client password, a pre-shared salt, and a keystore password.
The server credentials installed on nodes will allow them to subscribe to command channels, but not to send commands on them. If you use the same credentials for clients and servers, anyone with access to a server’s configuration file will have command control of the entire collective. Keep these credentials separate.
You won’t type these strings at a prompt—they’ll be stored in a configuration file. So we’ll generate long and complex random passwords. Run the following command four times:
$ openssl rand -base64 32
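If you'd prefer to generate all four at once, a small shell loop works. The labels here are purely illustrative notes to yourself; they are not configuration keys:

```shell
# Generate four labeled random strings in one pass (labels are
# illustrative -- they simply remind you which password is which).
for name in server_password client_password psk_key keystore_password; do
  printf '%s: %s\n' "$name" "$(openssl rand -base64 32)"
done
```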
Copy these random strings into your Sticky app or text editor, or write them down somewhere temporarily. We’ll use them in the next few sections to configure the service.
The TLS security plugins can encrypt the transport and provide complete cryptographic authentication. However, the simplicity of the preshared key model is useful to help get you up and running quickly and provides a reasonable level of security for a small installation.
We already did this earlier in the book, but take a moment and verify that you’ve enabled class assignment from Hiera in your environment’s manifest (${environmentpath}/${environment}/manifests/site.pp):
# Look up classes defined in Hiera
lookup('classes', Array[String], 'unique').include
Put the following Hiera configuration data in the common.yaml file. Note that you’ll be using the random passwords you generated earlier:
# every node installs the server
classes:
  - mcollective::server

# The Puppet Server will host the middleware
mcollective::hosts:
  - 'puppet.example.com'
mcollective::collectives:
  - 'mcollective'
mcollective::connector: 'activemq'
mcollective::connector_ssl: true
mcollective::connector_ssl_type: 'anonymous'

# Access passwords
mcollective::server_password: 'Server Password'
mcollective::psk_key: 'Pre-shared Salt'

mcollective::facts::cronjob::run_every: '10'
mcollective::server::package_ensure: 'latest'
mcollective::plugin::agents:
  puppet:
    version: 'latest'

mcollective::client::unix_group: vagrant
mcollective::client::package_ensure: 'latest'
mcollective::plugin::clients:
  puppet:
    version: 'latest'
Every node will install and enable the mcollective::server class. The remaining values identify the type of connection.
I had considered using another Vagrant instance to provide the middleware instance for MCollective, but we’re already burning lots of memory, and honestly the middleware doesn’t require much memory or CPU until you have hundreds of nodes.
Therefore, I recommend that you install the middleware on the puppetserver VM. Its resource needs are very minimal.
At this point, we’ll need to adjust the firewall on the middleware node. MCollective clients and servers connect to the middleware on TCP port 61614. Let’s allow incoming TCP connections to this port on the Puppet server node:
$ sudo firewall-cmd --permanent --zone=public --add-port=61614/tcp
success
$ sudo firewall-cmd --reload
success
Now adjust the Hiera data file for this node to enable the extra features. Create a per-host YAML file for the Puppet server named hostname/puppetserver.yaml:
# hostname/puppetserver.yaml
classes:
  - mcollective::middleware
  - mcollective::client

# Middleware configuration
mcollective::client_password: 'Client Password'
mcollective::middleware::keystore_password: 'Keystore Password'
mcollective::middleware::truststore_password: 'Keystore Password'
This class assignment enables installation of ActiveMQ middleware on the Puppet server by adding the mcollective::middleware class. It also installs the MCollective client software with the mcollective::client class.
Finally, run Puppet to configure this node:
[vagrant@puppetserver ~]$ sudo puppet agent --test
This should configure your middleware node without any problems if the data was entered correctly.
Go to each server in your network and run puppet agent to configure MCollective. In the virtualized test setup, this would be the client and dashboard nodes. Their configuration will be simpler than what you observed on the middleware node:
[vagrant@dashserver ~]$ sudo puppet agent --verbose
Info: Caching catalog for dashserver.example.com
Info: Applying configuration version '1441698110'
Notice: /Mcollective/Package[rubygem-stomp]/ensure: created
Info: Computing checksum on file /etc/puppetlabs/mcollective/server.cfg
Notice: /Mcollective::Server/File[/etc/puppetlabs/mcollective/server.cfg]/content:
  content changed '{md5}73e68cfd79153a49de6f' to '{md5}bb46f5c1345d62b8a62bb'
Notice: /Mcollective::Server/File[/etc/puppetlabs/mcollective/server.cfg]/owner:
  owner changed 'vagrant' to 'root'
Notice: /Mcollective::Server/File[/etc/puppetlabs/mcollective/server.cfg]/group:
  group changed 'vagrant' to 'root'
Notice: /Mcollective::Server/File[/etc/puppetlabs/mcollective/server.cfg]/mode:
  mode changed '0644' to '0400'
Info: /Mcollective::Server/File[/etc/puppetlabs/mcollective/server.cfg]:
  Scheduling refresh of Service[mcollective]
Notice: /Mcollective::Server/Mcollective::Plugin::Agent[puppet]/
  Package[mcollective-puppet-agent]/ensure: created
Info: /Mcollective::Server/Mcollective::Plugin::Agent[puppet]/
  Package[mcollective-puppet-agent]: Scheduling refresh of Service[mcollective]
Notice: /Mcollective::Server/Service[mcollective]/ensure:
  ensure changed 'stopped' to 'running'
Notice: /Mcollective::Facts::Cronjob/Cron[mcollective-facts]/ensure: created
Notice: Applied catalog in 17.02 seconds
At this time, the MCollective service should be up and running, and attempting to connect to the middleware. You should see the node connected to the middleware node on port 61614:
$ netstat -an | grep 61614
tcp    0    0 192.168.200.10:51026    192.168.200.6:61614    ESTABLISHED
If you are using IPv6, the response may look like this:
$ netstat -an -A inet6 | grep 61614
tcp    0    0 2001:DB8:6A:C0::200:10:45743    2001:DB8:6A:C0::200:6:61614    ESTABLISHED
To force IPv4 connections, set the mcollective::hosts array values to DNS hostnames that only provide an IPv4 address.
If you don’t see an established connection, ensure that you’ve made the firewall change documented in the previous section.
At this point, all of your nodes should be online and connected to the middleware. You can verify that each of them is reachable using the low-level ping command:
[vagrant@puppetserver ~]$ mco ping
dashserver.example.com    time=182.17 ms
client.example.com        time=221.34 ms
puppetmaster.example.com  time=221.93 ms

---- ping statistics ----
3 replies max: 221.93 min: 182.17 avg: 208.48
If you get back a list of each server connected to your middleware and its response time, then congratulations! You have successfully created a collective using Puppet.
If this does not work, there are only three things that can be wrong. They are listed here in the order of likelihood:

1. The firewall is blocking port 61614 on the puppetserver VM. Follow the firewall configuration steps in “Enabling the Middleware” to resolve this issue.
2. Hiera data for a node (or the puppetserver) is overriding the server_password value to something different. It is easiest and best to only define that value in the common.yaml file so that all nodes share the same value.
3. The client password used by the mco command doesn’t match the middleware. Ensure that the same value for client_password is used on both the client and the middleware host. You can also place this in the common file to ensure consistency.

In the virtualized environment we’ve created for learning, this will “just work.” In a mixed-vendor environment, you may have more problems, but you can identify and resolve all of them by reading the logfiles. If necessary, change the log level to debug in common.yaml, as shown here:
mcollective::client::loglevel: 'debug'
mcollective::server::loglevel: 'debug'
You’ll find that debug logs contain details of the inner workings of each layer of MCollective.
You only need to install the client software on systems from which you will be sending requests. In a production environment, this may be your management hosts, or a bastion host, or it could be your laptop or desktop systems in the office. In the virtualized test environment, we have enabled the MCollective client on the puppetserver node.
If you would like to enable the client commands on the client node, then create a per-host Hiera data file for it, such as /etc/puppetlabs/code/hieradata/hostname/client.yaml:
# Client configuration
classes:
  - mcollective::client

mcollective::client_password: 'Client Password'
These two settings are all you need to enable the MCollective clients on the node you selected. Run Puppet agent in test mode to see the changes made:
$ sudo puppet agent --test
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Loading facts
Info: Caching catalog for client.example.com
Info: Applying configuration version '1441700470'
Info: Computing checksum on file /etc/puppetlabs/mcollective/client.cfg
Notice: /Mcollective::Client/File[/etc/puppetlabs/mcollective/client.cfg]/content:
  content changed '{md5}af1fa871fed944e3ea' to '{md5}2846de8aa829715f394c49f04'
Notice: /Mcollective::Client/File[/etc/puppetlabs/mcollective/client.cfg]/owner:
  owner changed 'vagrant' to 'root'
Notice: /Mcollective::Client/File[/etc/puppetlabs/mcollective/client.cfg]/mode:
  mode changed '0644' to '0440'
Notice: /Mcollective::Client/Mcollective::Plugin::Client[puppet]/
  Package[mcollective-puppet-client]/ensure: created
Notice: Applied catalog in 7.41 seconds
Tune which group has read access to this file using the following Hiera configuration option:
mcollective::client::unix_group: 'vagrant'
Obviously you won’t use this group on production systems. I recommend that you choose a limited group of people whom you trust to execute commands on every system.
The important thing we want to install is the MCollective Puppet agent. This is automatically installed through the following Hiera values (which we already applied):
mcollective::plugin::agents:
  puppet:
    version: latest
    dependencies:
      - Package['puppet-agent']

mcollective::plugin::clients:
  puppet:
    version: latest
These configuration values in Hiera will ensure that the Puppet agent plugin is installed on all servers, and the Puppet client plugin is installed on all client machines.
You can install other MCollective agents by adding them to the mcollective::plugin::agents hash. For each plugin, you can specify the agent version and any dependencies that must be installed first. Many plugins won’t work until the software they provide extensions for is installed. The preceding example declares that the Puppet agent plugin requires the puppet-agent package to be installed before the plugin.
You can install MCollective client plugins by adding them to the mcollective::plugin::clients hash. For each plugin, you can specify the client version and any dependencies that must be installed first.
The facts that Puppet gathers from the node can be made available to MCollective. Facts are a hash of key/value strings with details about the node, as covered in “Finding Facts”.
Facts provide a powerful information source, and are useful to filter the list of nodes that should act upon a request.
You populate the /etc/puppetlabs/mcollective/facts.yaml file by giving the mcollective::facts::cronjob::run_every parameter a value. This enables a cron job schedule that creates the facts.yaml file with all Facter- and Puppet-provided facts:
mcollective::facts::cronjob::run_every: '10'
The value should be a quoted String value containing the number of minutes between updates to the file. The Hiera data you added to the hieradata/common.yaml file a few pages back set this to 10 for all nodes.
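Under the hood, this is an ordinary cron entry managed by the module. A sketch of the kind of entry it installs (the script name and path are illustrative; the actual values come from the jorhett-mcollective module):

```
# Runs every 10 minutes, rewriting /etc/puppetlabs/mcollective/facts.yaml
*/10 * * * * /usr/local/bin/refresh-mcollective-metadata
```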
Take a look at the generated file, and you’ll find all the facts stored in YAML format for MCollective to use. You can also use an inventory request and read through the output to see all facts available on a node:
$ mco inventory client.example.com | awk '/Facts:/,/^$/'
   Facts:
      architecture => x86_64
      augeasversion => 1.0.0
      bios_release_date => 01/01/2007
      bios_vendor => Seabios
      ...etc...
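The file itself is a flat hash of fact names to string values. You can see this with a small sample written to /tmp so the commands work anywhere (the fact values here are made up for illustration):

```shell
# Create a tiny stand-in for /etc/puppetlabs/mcollective/facts.yaml.
cat > /tmp/facts.yaml <<'EOF'
architecture: x86_64
operatingsystem: CentOS
operatingsystemrelease: "7.1"
EOF

# Extract one fact's value, much as a fact filter would.
awk -F': ' '$1 == "operatingsystem" { print $2 }' /tmp/facts.yaml
```

This prints CentOS for the sample data.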
You can query for how many nodes share the same value for facts. For example, every node shown here (from a different test lab) has the hostname fact, but only three nodes have the kernel fact:
$ mco facts kernel
Report for fact: kernel
        Linux     found 2 times
        FreeBSD   found 1 times
Finished processing 3 / 3 hosts in 61.45 ms
$ mco facts hostname
Report for fact: hostname
        fireagate   found 1 times
        geode       found 1 times
        tanzanite   found 1 times
        sunstone    found 1 times
Finished processing 5 / 5 hosts in 68.38 ms
Tanzanite is a Windows host that doesn’t report the kernel fact.
Now that MCollective is installed, the fun begins. In this section, you will use MCollective to inventory, query, and control the Puppet agents on your nodes.
You will be amazed at the level of control and immediacy that MCollective gives you over nodes. MCollective enables new ways of using Puppet that simply aren’t possible from agent, cron-run, or even command-line usage of Puppet.
One of the commands built into the MCollective client is an inventory query. This command allows you to see how a given server is configured: what collectives it is part of, what facts it has, what Puppet classes are applied to it, and the server’s running statistics.
Run this command against one of your nodes and examine the output:
$ mco inventory client.example.com
Inventory for client.example.com:

   Server Statistics:
                      Version: 2.8.2
                   Start Time: 2015-09-08 10:44:09 +0000
                  Config File: /etc/puppetlabs/mcollective/server.cfg
                  Collectives: mcollective
              Main Collective: mcollective
                   Process ID: 20896
               Total Messages: 4
      Messages Passed Filters: 4
            Messages Filtered: 0
             Expired Messages: 0
                 Replies Sent: 3
         Total Processor Time: 1.43 seconds
                  System Time: 0.61 seconds

   Agents:
      discovery  puppet  rpcutil

   Data Plugins:
      agent  collective  fact  fstat  puppet  resource

   Configuration Management Classes:
      mcollective  mcollective::facts::cronjob  mcollective::params
      mcollective::server  puppet4  puppet4::agent  puppet4::params
      puppet4::user  settings

   Facts:
      augeas => {"version"=>"1.4.0"}
      disks => {"size"=>"20 GiB", "size_bytes"=>"21474836480", "vendor"=>"ATA"}
      dmi => {"bios"=>{"vendor"=>"innotek GmbH", "version"=>"VirtualBox"}
      …snip…
You can create reports from the inventory service as well. Write a short Ruby script to output the values, such as this one:
$ cat inventory.mc
# Format: hostname: architecture, operating system, OS release.
inventory do
  format "%20s: %8s %10s %-20s"
  fields {
    [ identity,
      facts["os.architecture"],
      facts["operatingsystem"],
      facts["operatingsystemrelease"] ]
  }
end
Now call the inventory command with the --script option and the name of the Ruby script, as shown here:
$ mco inventory --script inventory.mc
     geode:   x86_64     CentOS 6.4
  sunstone:    amd64     Ubuntu 13.10
heliotrope:   x86_64     CentOS 6.5
 tanzanite:   x86_64    Windows 7 Ultimate SP1
 fireagate:    amd64    FreeBSD 9.2-RELEASE
This can be very useful for creating reports of your managed nodes.
I took this output from a different test lab, as the virtual machines we created in this book are sadly uniform in nature.
You can now query the Puppet agent status of any node using the MCollective client. First, let’s get a list of nodes that have the MCollective Puppet agent installed. The example shown here uses the find command to identify nodes that have the agent:
$ mco find --with-agent puppet
client.example.com
dashserver.example.com
puppetserver.example.com
Now ask the Puppet agents what they are doing:
$ mco puppet count
Total Puppet nodes: 3

          Nodes currently enabled: 3
         Nodes currently disabled: 0

Nodes currently doing puppet runs: 0
          Nodes currently stopped: 3

       Nodes with daemons started: 3
    Nodes without daemons started: 0
       Daemons started but idling: 3
Finally, let’s get a graphical summary of all nodes:
$ mco puppet summary
Summary statistics for 3 nodes:

                  Total resources: ▇▁▁▁▁▁▁▁▄▁▁▁▁▁▁▁▁  min: 25.0   max: 39.0
            Out Of Sync resources: ▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁  min: 0.0    max: 0.0
                 Failed resources: ▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁  min: 0.0    max: 0.0
                Changed resources: ▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁  min: 0.0    max: 0.0
  Config Retrieval time (seconds): ▇▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁  min: 0.9    max: 1.2
         Total run-time (seconds): ▇▁▁▁▁▄▁▁▁▁▁▁▁▁▁▁▁  min: 1.3    max: 9.4
    Time since last run (seconds): ▇▁▁▁▁▁▁▁▁▁▁▇▁▁▁▁▇  min: 244.0  max: 1.6k
You’ll notice that Puppet runs very quickly on these nodes, as they have few resources involved. The two modules you enabled have only a few dozen resources. A production deployment will usually have much longer runtimes and thousands of resources applied.
During maintenance you may want to disable the Puppet agent on certain nodes. When you disable the agent, you can add a message letting others know why you did so:
$ mco puppet disable --with-identity client.example.com message="Disk failed"

 * [ ==================================================> ] 1 / 1

Summary of Enabled:
   disabled = 1

Finished processing 1 / 1 hosts in 85.28 ms
If someone tries to run Puppet on the node, they’ll get a message back telling them why Puppet was disabled on the node:
$ mco puppet runonce --with-identity client.example.com

 * [ ==================================================> ] 1 / 1

client.example.com                       Request Aborted
   Puppet is disabled: 'Disk failed'
   Summary: Puppet is disabled: 'Disk failed'

Finished processing 1 / 1 hosts in 84.22 ms
Re-enabling the Puppet agent is just as easy:
$ mco puppet enable --with-identity client.example.com

 * [ ==================================================> ] 1 / 1

Summary of Enabled:
   enabled = 1

Finished processing 1 / 1 hosts in 84.36 ms
You can easily apply these commands to enable or disable the Puppet agent on multiple nodes matching filter criteria, as discussed in “Limiting Targets with Filters”.
The MCollective Puppet agent provides powerful control over the Puppet agent. The simplest invocation is naturally to tell Puppet agent to evaluate the catalog immediately on one node:
$ mco puppet runonce --with-identity client.example.com

 * [ ==================================================> ] 1 / 1

Finished processing 1 / 1 hosts in 193.99 ms

$ mco puppet status --with-identity client.example.com

 * [ ==================================================> ] 1 / 1

   client.example.com: Currently idling; last completed run 02 seconds ago

Summary of Applying:
   false = 1
Summary of Daemon Running:
   running = 1
Summary of Enabled:
   enabled = 1
Summary of Idling:
   true = 1
Summary of Status:
   idling = 1

Finished processing 1 / 1 hosts in 86.43 ms
If you examine the help text for mco puppet, you’ll find the same options for controlling Puppet as you have for puppet agent or puppet apply:
$ mco puppet --help
…snip…
Application Options
        --force                  Bypass splay options when running
        --server SERVER          Connect to a specific server or port
        --tags, --tag TAG        Restrict the run to specific tags
        --noop                   Do a noop run
        --no-noop                Do a run with noop disabled
        --environment ENV        Place the node in a specific environment for this run
        --splay                  Splay the run by up to splaylimit seconds
        --no-splay               Do a run with splay disabled
        --splaylimit SECONDS     Maximum splay time for this run if splay is set
        --ignoreschedules        Disable schedule processing
        --rerun SECONDS          When performing runall do so repeatedly with a
                                 minimum run time of SECONDS
What if you had an emergency patch for Puppet to fix a security problem with the /etc/sudoers file? If you simply updated Puppet data, the change would be applied gradually over 30 minutes, as you can see from the results of this command:
$ mco puppet status --wf operatingsystem=CentOS

 * [ ==================================================> ] 3 / 3

   client.example.com: Currently idling; last completed run 2 minutes ago
   puppetserver.example.com: Currently idling; last completed run 16 minutes ago
   dashserver.example.com: Currently idling; last completed run 25 minutes ago
To make this emergency change get applied ASAP on all CentOS nodes, you could use the following MCollective command:
$ mco puppet runonce --tags=sudo --with-fact operatingsystem=CentOS

 * [ ==================================================> ] 3 / 3

Finished processing 3 / 3 hosts in 988.26 ms
So, how’s 0.98 seconds for fast?
What risks do you run commanding every node to run Puppet at the same time? In server-based environments, the Puppet servers could be overloaded by simultaneous Puppet agent catalog requests.
You may need to limit the number of hosts that provide a service from evaluating their policies at the same time, to prevent too many of them being out of service simultaneously.
Here is an example where we slow-roll Puppet convergence, processing only two nodes at a time:
$ mco puppet runall 2
2014-02-10 23:14:00: Running all nodes with a concurrency of 2
2014-02-10 23:14:00: Discovering enabled Puppet nodes to manage
2014-02-10 23:14:03: Found 39 enabled nodes
2014-02-10 23:14:06: geode schedule status: Signaled the running Puppet
2014-02-10 23:14:06: sunstone schedule status: Signaled the running Puppet
2014-02-10 23:14:06: 37 out of 39 hosts left to run in this iteration
2014-02-10 23:14:09: Currently 2 nodes applying the catalog; waiting for less
2014-02-10 23:14:17: heliotrope schedule status: Signaled the running Puppet
2014-02-10 23:14:18: 36 out of 39 hosts left to run in this iteration
…etc…
Run Puppet on all web servers, up to five at a time:
$ mco puppet runall 5 --with-identity /^webd/
Note that runall is like batch except that instead of waiting for a sleep time, it waits for one of the Puppet daemons to complete its run before it starts another. If you didn’t mind some potential overlap, you could always use the batch options instead:
$ mco puppet --batch 10 --batch-sleep 60 --tags ntp
Filters are used by the discovery plugin to limit which servers are sent a request. Filters can be applied to any MCollective command.
The syntax for filters is documented in the online help, as shown here:
$ mco help
Host Filters
        -W, --with FILTER                Combined classes and facts filter
        -S, --select FILTER              Compound filter combining facts and classes
        -F, --wf, --with-fact fact=val   Match hosts with a certain fact
        -C, --wc, --with-class CLASS     Match hosts with a certain Puppet class
        -A, --wa, --with-agent AGENT     Match hosts with a certain agent
        -I, --wi, --with-identity IDENT  Match hosts with a certain configured identity
There are long and short versions of every filter option. We’re going to use the long versions throughout the documentation because they are easier to read on the page, and easier to remember.
Here are some examples of using host filters. Each one outputs a list of hosts that match the criteria. These are good to run before executing a command, to ensure that you are matching the list of hosts you expect to match. In our first example, we’ll find all hosts whose identity (FQDN) matches a regular expression:
$ mco find --with-identity /serv/
dashserver.example.com
puppetserver.example.com
List all hosts that apply the Puppet class mcollective::client:
$ mco find --with-class mcollective::client
puppetserver.example.com
Show all hosts whose facts report that they are using the Ubuntu operatingsystem:
$ mco find --with-fact operatingsystem=Ubuntu
Whoops, no results. There are no Ubuntu hosts in this test environment. Show all hosts that have the puppet agent installed on them:
$ mco find --with-agent puppet
client.example.com
dashserver.example.com
puppetserver.example.com
There are two types of combination filters. The first type combines Puppet classes and Facter facts. Following is an example where we find all CentOS hosts with the Puppet class nameserver applied to them:
$ mco find --with "/nameserver/ operatingsystem=CentOS"
The second type is called a select filter and is the most powerful filter available. This filter allows you to create searches against facts and Puppet classes with complex Boolean logic. This is the only filter where you can use the operands and and or. You can likewise negate terms using not or !.
For example, find all CentOS hosts that are not in the test environment:
$ mco find --select "operatingsystem=CentOS and not environment=test"
The final example matches virtualized hosts with either the httpd or nginx Puppet class applied to them. This combination search is only possible with the select filter type:
$ mco find --select "( /httpd/ or /nginx/ ) and is_virtual=true"
A select filter will always use the mc discovery plugin, even if a different plugin is requested or configured.
MCollective supports multiple discovery plugins, including lookup from MongoDB, MySQL, and other big data solutions. Not all filters are supported by every discovery method. Consult the documentation for a discovery method to determine which filters are available.
In addition to mc’s dynamic discovery filters, you can specify which nodes to make requests of using a file with one identity (FQDN) per line:
$ mco puppet runonce --disc-method flatfile --disc-option /tmp/list-of-hosts.txt
You can also pipe the list of nodes to the command:
$ cat list-of-hosts.txt | mco puppet runonce --disc-method stdin
With the flatfile or stdin discovery methods, only the identity filter can be used.
These can be very useful when you have a list of nodes generated from a database, or collected from another query, which need to be processed in order. In most other situations, manually building a list of targets is a waste of time. You will find it much easier to use MCollective filters to target nodes dynamically.
By default, every node that matches the filter will respond to the request immediately. You may want to limit how many servers receive the request, or how many process it concurrently.
Following are command options to control how many servers receive the request in a batch, and how much time to wait between each batch.
The --one argument requests a response from a single (effectively random) node:
$ mco puppet status --one
dashserver.example.com: Currently idling; last completed run 12 minutes ago
The --limit argument can specify either a fixed number of servers or a percentage of the servers matching a filter:
$ mco puppet status --limit 2
client.example.com: Currently idling; last completed run 20 minutes ago
puppetserver.example.com: Currently idling; last completed run 4 minutes ago
Here’s an example asking one-third of the nodes with the webserver Puppet class applied to them to run a command to return their FQDN:

$ mco shell run "hostname --fqdn" --limit 33% --with-class webserver
It’s also possible to process systems in batches. Specify both a batch size and a time period before initiating the next batch. In the example shown here, we run the Puppet agent on five German servers every 30 seconds:
$ mco puppet runonce --batch 5 --batch-sleep 30 --with-fact country=de
In this example, we upgrade the sudo package in batches of 10 nodes spaced two minutes apart:
$ mco package upgrade sudo --batch 10 --batch-sleep 120
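The pacing math is easy to sanity-check before you commit. Assuming the 39-node inventory from the runall example earlier (the node count is illustrative, not part of this command):

```shell
# With --batch 10 --batch-sleep 120 on 39 nodes, requests go out in
# ceil(39/10) = 4 batches, so the last batch starts (4 - 1) * 120 = 360
# seconds after the first.
nodes=39; batch=10; pause=120
batches=$(( (nodes + batch - 1) / batch ))
echo "$batches batches; last batch starts $(( (batches - 1) * pause ))s after the first"
```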
The MCollective Puppet agent enables you to interact instantly with a node’s resources using Puppet’s Resource Abstraction Layer (RAL). You express a declarative state to be ensured on the node, with the same resource names and attributes as you would use in a Puppet manifest.
For example, if you wanted to stop the httpd service on a node, you could do the following:
$ mco puppet resource service httpd ensure=stopped --with-identity /dashserver/

 * [ ==================================================> ] 1 / 1

   dashserver.example.com
      Changed: true
       Result: ensure changed 'running' to 'stopped'

Summary of Changed:
   Changed = 1

Finished processing 1 / 1 hosts in 630.99 ms
You could also fix the root alias on every host at once:
$ mco puppet resource mailalias root recipient=[email protected]
You should obviously limit actions in all the ways specified in “Limiting Targets with Filters”. For example, you probably only want to stop Apache on hosts where it is not being managed by Puppet:
$ mco puppet resource service httpd ensure=stopped --wc !apache
By default, no resources can be controlled from MCollective. The feature is enabled in the MCollective configuration, but it has an empty whitelist by default.
These are the default configuration options:
mcollective::server::resource_type_whitelist: 'none'
mcollective::server::resource_type_blacklist: null
To allow resource control, define the preceding configuration values in your Hiera data. Add a list of resources to be controlled to the whitelist, as follows:
mcollective::server::resource_type_whitelist: 'package,service'
You can also define a blacklist of resources that should be immune to MCollective tampering:
mcollective::server::resource_type_blacklist: 'user,group,exec'
MCollective does not allow you to mix whitelists and blacklists. One of the preceding values must be undefined, or null.
A resource declared in the Puppet catalog should not be controlled from MCollective, to prevent MCollective from making a change that conflicts with the Puppet policy. Alternate values specified for a resource in the Puppet catalog are most likely to be overwritten the next time the Puppet agent converges the node. In the worst case, well… sorry about the foot.
To allow MCollective to alter resources defined in the node’s Puppet catalog, enable the allow_managed_resources configuration option:
mcollective::server::allow_managed_resources: true
If you are (rightly) scared of breaking a resource that Puppet controls, and the damage to your foot that this tool is capable of, the best protection would be the following change:
mcollective::server::allow_managed_resources: false
Right before this book went to press, Puppet Labs released Puppet Enterprise 2015.3, which contains the first implementation of Puppet Application Orchestration. Application Orchestration handles the deployment and management of applications that span multiple nodes in a Puppet environment.
Without Puppet Application Orchestration, an organization must write Puppet modules that export shared data to PuppetDB, and then use MCollective to kick off Puppet runs on dependent nodes to use the data. This works quite well, and can be finely tuned to operate seamlessly, but requires the organization to develop this automation itself.
Puppet Application Orchestration extends the Puppet configuration language to allow declaration of environment-wide application resources that span multiple nodes. The application instance declares which nodes provide the service, and the order in which they should be evaluated. The puppet job command is used to kick off the multinode convergence of an application.
You can find more information at “Application Orchestration” on the Puppet Labs site. An example workflow can also be found on the Puppet docs site, at “Application Orchestration Workflow”.
Without making use of Application Orchestration, you could use the puppet job tool to run Puppet on all nodes in an environment. It does not provide the powerful filters available in MCollective, but can be useful regardless:
$ puppet job run --concurrency 1
New job id created: 7
Started puppet run on client.example.com ...
Finished puppet run on client.example.com - Success!
   Applied configuration version 1451776604
   Resource events: 0 failed 3 changed 27 unchanged 0 skipped 0 noop
Started puppet run on puppetserver.example.com ...
This book does not cover Application Orchestration, as it is exclusive to Puppet Enterprise at this time, and likely to evolve very quickly. Later updates to this book will include more details if this functionality is released for Puppet Open Source.
In my opinion, Puppet Application Orchestration is a welcome and useful addition to Puppet, but does not replace all of the orchestration features provided by MCollective. I suspect most sites will use both together to leverage the strengths of each.
MCollective is capable of far more than just managing Puppet agents. It provides an orchestration framework that complements your Puppet installation. Puppet takes many steps to ensure that thousands of details are correct on each node. MCollective makes small things happen instantly or according to schedule on hundreds or thousands of nodes at exactly the same time.
Don’t limit MCollective in your mind to only puppets dancing on your strings. You can build MCollective agents and plugins that act on events or triggers specific to your needs. Consider a fishing model where the marionette holds the strings cautiously, waiting for the strings to go taut. I’ve built autohealing components that listen passively to server inputs. When the event trigger occurs, the MCollective agent sends a message. Other agents respond to that message by taking action to correct the problem without any human involvement.
Every team I’ve talked to who has implemented MCollective has found it useful beyond their original expectations. Administrators find themselves wondering how they ever got along without it before.
The MCollective Puppet module has dozens of parameters available to customize the MCollective deployment. The module can create and manage a large distributed network of brokers. After doing the simplified installation used in this book, take some time to read the documentation for the module.
You can learn more about how to use, scale, and tune MCollective with Learning MCollective, shown in Figure 30-1.
You might even recognize the author.
1 This is being tracked in Feature Request MCOP-268.