r10k provides a general purpose toolset for deploying Puppet environments and modules. It implements the Puppetfile format and provides a native implementation of Puppet dynamic environments.
— https://github.com/adrienthebo/r10k
To translate that into English: r10k takes all the work out of managing a collection of Puppet modules and their dependencies on GitHub. If you’d like to deploy the Learning MCollective test environment (exactly as I used it when writing this book) in a fresh new environment, this is the fastest way to do it.
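For context, the r10k.yaml file maps Git repositories to deployment directories. A minimal sketch of the format (these values are illustrative, not the actual contents of the file you’ll download below) looks like this:

```
:cachedir: '/var/cache/r10k'
:sources:
  :learning:
    remote: 'https://github.com/jorhett/learning-mcollective'
    basedir: '/etc/puppet/environments'
```

Each branch of the source repository becomes a deployable Puppet environment.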
If you don’t have r10k installed yet, do that first. Install it directly from the gem:
$ sudo gem install r10k
Successfully installed colored-1.2
Successfully installed cri-2.5.0
Successfully installed systemu-2.5.2
Successfully installed log4r-1.1.10
Successfully installed multi_json-1.8.4
Successfully installed json_pure-1.8.1
Successfully installed multipart-post-1.2.0
Successfully installed faraday-0.8.9
Successfully installed faraday_middleware-0.9.1
Successfully installed faraday_middleware-multi_json-0.0.5
Successfully installed r10k-1.2.1
11 gems installed
If you are using Ruby 1.8 then you may see errors like this when you run r10k:
Faraday: you may want to install system_timer for reliable timeouts
If so, install the gem specified.
$ sudo gem install system_timer
Building native extensions. This could take a while...
Successfully installed system_timer-1.2.4
1 gem installed
Now that r10k is installed, you can proceed with using it to install the MCollective module. The following commands will set up all of the modules used in this book:
$ wget https://raw.githubusercontent.com/jorhett/learning-mcollective/master/r10k.yaml
$ r10k deploy -c r10k.yaml environment learning_mcollective -p
This will install files into a learning_mcollective environment; it will not affect your production environment. You’re going to need to make some changes to these files before they will work. Edit the files in /etc/puppet/environments/learning_mcollective/hieradata/ to set your own passwords. Generate random values with

openssl rand -base64 32

and put these values in the _password fields in the files.
If you are using a version of Puppet below 3.5.0 you will need changes like the following in the puppet.conf file to use dynamic environments.
[main]
modulepath = $confdir/environments/$environment/modules:$confdir/modules

[master]
hiera_config = $confdir/environments/$environment/hiera/hiera.yaml
manifest = $confdir/environments/$environment/manifests/site.pp
More documentation about (pre v3.5) dynamic environments can be found at http://docs.puppetlabs.com/guides/environment.html. Documentation for v3.5’s new directory environments can be found at http://docs.puppetlabs.com/puppet/latest/reference/environments.html.
After you have made these changes, you can test out the module using this puppet command:
$ puppet agent --test --environment learning_mcollective
Puppet Labs also provides an MCollective module on the Puppet Forge at https://github.com/puppetlabs/puppetlabs-mcollective. We didn’t cover this module in the book for the following reasons:
The module provided in this book allows a simple setup to work immediately, and then lets you add more and more capability as you read each chapter of the book.

The Puppet Labs module does things a little differently, and you should take a look. Now that you are proficient with MCollective, here is a baseline configuration that we found worked properly and isn’t documented as clearly in the module itself.
class { '::mcollective::common::setting':
  connector                 => 'activemq',
  middleware_hosts          => ['activemq.example.net'],
  middleware_user           => 'server',
  middleware_password       => 'IamAServerLaLaLa',
  middleware_admin_user     => 'admin',
  middleware_admin_password => 'IAmAClientHoHoHo',
  securityprovider          => 'psk',
  psk                       => 'DearGnuChangeMe',
}

node 'activemq.example.net' {
  class { '::mcollective': middleware => true }
}
node 'server.example.net' {
  class { '::mcollective': }
}
node 'client.example.net' {
  class { '::mcollective': client => true }
}
You are smart enough to change the passwords listed above, aren’t you? Remember that openssl rand -base64 32
is your friend.
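If you want to script it, here is a minimal sketch that generates both middleware passwords in one pass (the variable names are mine, chosen to match the _password fields above):

```shell
# Generate two random passwords: base64 of 32 random bytes = 44 characters each
server_pw="$(openssl rand -base64 32)"
client_pw="$(openssl rand -base64 32)"
echo "middleware_password:       $server_pw"
echo "middleware_admin_password: $client_pw"
```

Paste the generated values into the corresponding fields of your Hiera data or class declaration.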
Here is a complete ActiveMQ server configuration based on the instructions provided in Chapter 2.
/etc/activemq/activemq.xml.
<beans
  xmlns="http://www.springframework.org/schema/beans"
  xmlns:amq="http://activemq.apache.org/schema/core"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://www.springframework.org/schema/beans
    http://www.springframework.org/schema/beans/spring-beans-2.0.xsd
    http://activemq.apache.org/schema/core
    http://activemq.apache.org/schema/core/activemq-core.xsd
    http://activemq.apache.org/camel/schema/spring
    http://activemq.apache.org/camel/schema/spring/camel-spring.xsd">

  <broker xmlns="http://activemq.apache.org/schema/core"
          brokerName="localhost"
          useJmx="true"
          schedulePeriodForDestinationPurge="60000"
          networkConnectorStartAsync="true">

    <destinationPolicy>
      <policyMap>
        <policyEntries>
          <!-- MCollective works best with producer flow control disabled. -->
          <policyEntry topic=">" producerFlowControl="false"/>
          <!-- MCollective generates a reply queue for most commands.
               Garbage-collect these after five minutes to conserve memory. -->
          <policyEntry queue="*.reply.>" gcInactiveDestinations="true"
                       inactiveTimoutBeforeGC="300000"/>
        </policyEntries>
      </policyMap>
    </destinationPolicy>

    <managementContext>
      <managementContext createConnector="false"/>
    </managementContext>

    <plugins>
      <statisticsBrokerPlugin/>
      <simpleAuthenticationPlugin>
        <users>
          <!-- <authenticationUser username="admin" password="junkpassword" groups="admins,everyone"/> -->
          <authenticationUser username="client" password="generated password #1"
                              groups="servers,clients,everyone"/>
          <authenticationUser username="server" password="generated password #2"
                              groups="servers,everyone"/>
        </users>
      </simpleAuthenticationPlugin>
      <authorizationPlugin>
        <map>
          <authorizationMap>
            <authorizationEntries>
              <!--
              <authorizationEntry queue=">" write="admins" read="admins" admin="admins"/>
              <authorizationEntry topic=">" write="admins" read="admins" admin="admins"/>
              -->
              <authorizationEntry queue="mcollective.>"
                                  write="clients" read="clients" admin="clients"/>
              <authorizationEntry topic="mcollective.>"
                                  write="clients" read="clients" admin="clients"/>
              <authorizationEntry queue="mcollective.nodes"
                                  read="servers" admin="servers"/>
              <authorizationEntry queue="mcollective.reply.>"
                                  write="servers" admin="servers"/>
              <authorizationEntry topic="mcollective.*.agent"
                                  read="servers" admin="servers"/>
              <authorizationEntry topic="mcollective.registration.agent"
                                  write="servers" read="servers" admin="servers"/>
              <!--
                The advisory topics are part of ActiveMQ, and all users need access to them.
                The "everyone" group is not special; you need to ensure every user is a member.
              -->
              <authorizationEntry topic="ActiveMQ.Advisory.>"
                                  read="everyone" write="everyone" admin="everyone"/>
            </authorizationEntries>
          </authorizationMap>
        </map>
      </authorizationPlugin>
    </plugins>

    <!--
      The systemUsage controls the maximum amount of space the broker will use for messages.
      http://docs.puppetlabs.com/mcollective/deploy/middleware/activemq.html#memory-and-temp-usage-for-messages-systemusage
    -->
    <systemUsage>
      <systemUsage>
        <memoryUsage>
          <memoryUsage limit="20 mb"/>
        </memoryUsage>
        <storeUsage>
          <storeUsage limit="1 gb" name="foo"/>
        </storeUsage>
        <tempUsage>
          <tempUsage limit="100 mb"/>
        </tempUsage>
      </systemUsage>
    </systemUsage>

    <transportConnectors>
      <transportConnector name="stomp+nio" uri="stomp+nio://0.0.0.0:61613"/>
    </transportConnectors>
  </broker>

  <!--
    Enable web consoles, REST and Ajax APIs and demos.
    It also includes Camel (with its web console); see ${ACTIVEMQ_HOME}/conf/camel.xml for more info.
    See ${ACTIVEMQ_HOME}/conf/jetty.xml for more details.
  -->
  <!-- disabled for security, don't enable without reading the ActiveMQ Jetty documentation
  <import resource="jetty.xml"/>
  -->
</beans>
If you already have RabbitMQ in your environment, or if you need AMQP
support (e.g. for logstash
) then you may want to use RabbitMQ instead of ActiveMQ as the middleware for MCollective.
This book includes a Puppet module that can set up a baseline RabbitMQ instance. You would define the middleware node and the corresponding Hiera values as follows:

# Modern policy using the following node definition
node 'rabbitmq.example.net' {
  include mcollective::middleware
}

# Hiera
mcollective::hosts:
  - 'rabbitmq.example.net'
mcollective::connector: 'rabbitmq'
mcollective::middleware::etcdir: '/etc/rabbitmq'
mcollective::middleware::config_file: 'rabbitmq.conf'
mcollective::middleware::user: 'rabbitmq'
mcollective::middleware::service: 'rabbitmq'

# ...or get old school in a Site Policy instead
node 'rabbitmq.example.net' {
  class { 'mcollective':
    connector => 'rabbitmq',
    hosts     => ['rabbitmq.example.net'],
  }
  class { 'mcollective::middleware':
    etcdir      => '/etc/rabbitmq',
    config_file => 'rabbitmq.conf',
    user        => 'rabbitmq',
    service     => 'rabbitmq',
  }
}
The process for installing RabbitMQ varies widely depending on your operating system type. In my experience, the versions of RabbitMQ available in operating system package repositories are generally not the best choice. I recommend that you use the packages provided on the RabbitMQ download page. In the section labeled Installation Guides you will find instructions specific to each operating system.
If you are using Linux or Unix, the following commands will enable the STOMP connector and the Management Plugins:

$ sudo rabbitmq-plugins enable rabbitmq_stomp
$ sudo rabbitmq-plugins enable rabbitmq_management
Download the CLI tool and install it somewhere in your path:
$ curl http://rabbitmq.example.net:15672/cli/rabbitmqadmin -o rabbitmqadmin
$ sudo mv rabbitmqadmin /usr/local/bin/
If you’d like to enable bash completion for rabbitmqadmin, run the following command:
$ sudo rabbitmqadmin --bash-completion > /etc/bash_completion.d/rabbitmqadmin
The final step is to configure the queues and topics for MCollective.
$ sudo rabbitmqadmin declare vhost name=/mcollective
$ sudo rabbitmqadmin declare user name=client tags=administrator password='password #1'
$ sudo rabbitmqadmin declare permission vhost=/mcollective user=client \
    configure='.*' write='.*' read='.*'
$ sudo rabbitmqadmin declare user name=server password='password #2'
$ sudo rabbitmqadmin declare permission vhost=/mcollective user=server \
    configure='.*' write='.*' read='.*'
$ for collective in mcollective subcollective1 subcollective2 ...; do
    rabbitmqadmin declare exchange --user=client --password='password #1' \
      --vhost=/mcollective name=${collective}_broadcast type=topic
    rabbitmqadmin declare exchange --user=client --password='password #1' \
      --vhost=/mcollective name=${collective}_directed type=direct
  done
Testing has indicated that RabbitMQ won’t support reply delivery using queues in a federation. If you are using a federation, you will need to configure the clients to receive replies using an exchange:

$ rabbitmqadmin declare exchange --user=client --password='password #1' \
    --vhost=/mcollective name=mcollective_reply type=direct
Then you would modify the client configuration file as such:
plugin.rabbitmq.use_reply_exchange = true
You can find more specific information about RabbitMQ collectives at http://docs.puppetlabs.com/mcollective/reference/plugins/connector_rabbitmq.html and about RabbitMQ itself at http://www.rabbitmq.com.
Debian and Ubuntu systems have iptables installed by default, but often without any blocking rules. First, check whether you have already configured the firewall. If so, just add a new rule to allow the middleware service to be reached, as follows:
$ sudo iptables --list --line-numbers
Chain INPUT (policy ACCEPT)
num target prot opt source destination
1 ACCEPT all -- anywhere anywhere state RELATED,ESTABLISHED
...etc...
Look through the output and find an appropriate line number for this rule:
$ sudo iptables -I INPUT 20 -m state --state NEW -p tcp \
    --source 192.168.200.0/24 --dport 61613 -j ACCEPT
If you have not configured the firewall yet, you can set up a very basic firewall that only allows SSH, ICMP, and ActiveMQ as follows:
$ sudo iptables -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
$ sudo iptables -A INPUT -p icmp -j ACCEPT
$ sudo iptables -A INPUT -i lo -j ACCEPT
$ sudo iptables -A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
$ sudo iptables -A INPUT -m state --state NEW -p tcp \
    --source 192.168.200.0/24 --dport 61613 -j ACCEPT
$ sudo iptables -A INPUT -j REJECT --reject-with icmp-host-prohibited
If all of your servers will fit within a few subnet masks, it is advisable to limit this rule to only allow those subnets. Don’t forget to save that rule to your initial rules file. For Debian and Ubuntu systems you have to manually set up loading and unloading the firewall yourself. Here’s a process that will do that for you:
$ sudo sh -c "iptables-save > /etc/iptables.rules"
$ sudo vim /etc/network/if-pre-up.d/iptables

#!/bin/sh
/sbin/iptables-restore < /etc/iptables.rules

$ sudo chmod +x /etc/network/if-pre-up.d/iptables
More details can be found at https://wiki.debian.org/iptables or https://help.ubuntu.com/community/IptablesHowTo
My apologies: I didn’t include IPv6-specific instructions in this section. The commands are nearly identical to their IPv4 counterparts. You can see fully documented IPv6 examples in Configuring ActiveMQ.
Although Puppet Labs only provides binary packages for Linux systems, I was able to use FreeBSD as a server, client, and middleware broker successfully while writing this book. Following are the configuration steps specific to FreeBSD.
FreeBSD 10 and above use a new package management system. If you are on FreeBSD 9 you will have to make some changes to your system to use the new package manager. I recommend doing this, as it will greatly improve Puppet’s ability to manage packages on your systems.
Details on migrating to the new package manager are at https://wiki.freebsd.org/pkgng. I hope that by the time this book is out Puppet will have the new package manager integrated (see bug PUP-1716); until then you can install the module from the Forge:
$ puppet module install zleslie/pkgng
Then add the following to your manifests:
if ($::osfamily == 'FreeBSD') {
  include pkgng
  Package {
    provider => pkgng,
  }
}
Altering the Java environment parameters is done with the activemq_javargs
parameter in /etc/rc.conf. Note that FreeBSD cuts the memory of ActiveMQ in half compared to Linux distributions, such that Java is limited to 256M total. You probably want to quadruple this if you have the memory available.
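For example, you might raise the limit with something like the following in /etc/rc.conf (the exact Java flag values here are illustrative; check the port’s startup script for the defaults it applies):

```
activemq_javargs="-Xms256M -Xmx1024M"
```

Restart the activemq service after changing this setting.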
FreeBSD ships with IPFW installed and available in the base system. Unlike iptables, IPFW lets you mix IPv4 and IPv6 statements in the same configuration. You could use the following steps to add a firewall rule to permit inbound connections to a FreeBSD ActiveMQ middleware host.
$ sudo ipfw list
00010 allow ip from any to any via lo0
00011 deny ip from any to 127.0.0.0/8
00012 deny ip from any to [::1]/8
00020 check-state
00021 allow tcp from any to any out setup keep-state
...etc...
Look through the output and find an appropriate line number for this rule:

$ sudo ipfw -q add 31 allow tcp from 2001:DB8:6A:C0::/64 to any 61613 in
$ sudo ipfw -q add 32 allow tcp from 192.168.200.0/24 to any 61613 in
If all of your servers will fit within a few subnet masks, it is advisable to limit this rule to only allow those subnets. Don’t forget to save that rule to your initial rules file and enable it to be read at boot time:
/etc/rc.conf.

firewall_enable="YES"
firewall_script="/etc/ipfw.rules"

/etc/ipfw.rules.

ipfw -q -f flush    # Delete all rules
IPF="ipfw -q add "  # build rule prefix
$IPF 00010 allow ip from any to any via lo0
$IPF 00011 deny ip from any to 127.0.0.0/8
$IPF 00012 deny ip from any to [::1]/8
$IPF 00020 check-state
$IPF 00021 allow tcp from any to any out setup keep-state
$IPF 00022 allow udp from any to any out keep-state
$IPF 00023 allow icmp from any to any
$IPF 00031 allow tcp from 2001:DB8:6A:C0::/64 to any 61613 in
$IPF 00032 allow tcp from 192.168.200.0/24 to any 61613 in
More details can be found at http://www.freebsd.org/doc/handbook/firewalls-ipfw.html
At the time this book was written…
$ git clone https://github.com/puppetlabs/mcollective-filemgr-agent.git
Cloning into 'mcollective-filemgr-agent'...
remote: Reusing existing pack: 49, done.
remote: Total 49 (delta 0), reused 0 (delta 0)
Unpacking objects: 100% (49/49), done.
Checking connectivity... done

$ git clone https://github.com/puppetlabs/mcollective-nettest-agent.git
Cloning into 'mcollective-nettest-agent'...
remote: Reusing existing pack: 72, done.
remote: Total 72 (delta 0), reused 0 (delta 0)
Unpacking objects: 100% (72/72), done.
Checking connectivity... done

$ git clone https://github.com/puppetlabs/mcollective-package-agent.git
Cloning into 'mcollective-package-agent'...
remote: Reusing existing pack: 110, done.
remote: Total 110 (delta 0), reused 0 (delta 0)
Receiving objects: 100% (110/110), 28.02 KiB | 0 bytes/s, done.
Resolving deltas: 100% (27/27), done.
Checking connectivity... done

$ git clone https://github.com/puppetlabs/mcollective-service-agent.git
Cloning into 'mcollective-service-agent'...
remote: Reusing existing pack: 91, done.
remote: Total 91 (delta 0), reused 0 (delta 0)
Unpacking objects: 100% (91/91), done.
Checking connectivity... done

$ git clone https://github.com/puppetlabs/mcollective-puppet-agent.git
Cloning into 'mcollective-puppet-agent'...
remote: Reusing existing pack: 367, done.
remote: Counting objects: 18, done.
remote: Compressing objects: 100% (16/16), done.
Receiving objects: 100% (385/385), 141.90 KiB | 0 bytes/s, done.
remote: Total 385 (delta 0), reused 9 (delta 0)
Resolving deltas: 100% (130/130), done.
Checking connectivity... done
If you’d like to be able to make MCollective requests from your Mac desktop, or even subscribe as a server from your Mac, the process to set this up is pretty easy.
Macs with Mountain Lion
(10.8) or Mavericks
(10.9) come with Ruby 1.8.7 installed, which is good enough. If you have an earlier version of OS X or if you want to install Ruby 1.9.3 then install MacPorts and use the following commands:
# For Ruby 1.8.7
$ port install ruby rb-rubygem

# For Ruby 1.9
$ port install ruby19 +nosuffix
The only remaining requirement is to install the stomp gem:
$ gem install stomp
Successfully installed stomp-1.3.2
1 gem installed
Installing ri documentation for stomp-1.3.2...
Installing RDoc documentation for stomp-1.3.2...
At the time of writing there were no packages available for MCollective; however, the process to build proper MacOS packages is not difficult.
You will need Xcode installed on one system where you can build the MacOS package to install on the remaining systems. You can get Xcode from the Mac App Store.
Next you should download the latest stable release from GitHub and install it like so.
$ curl -sL https://github.com/puppetlabs/marionette-collective/archive/2.5.0.tar.gz \
    -o marionette-collective-2.5.0.tar.gz
$ tar xzf marionette-collective-2.5.0.tar.gz
$ cd marionette-collective-2.5.0
$ bash ext/osx/bldmacpkg
..................
created: /Users/jorhett/marionette-collective-2.5.0/mcollective-2.5.0.dmg
At the time this book was written the script didn’t properly find the version, and you would get packages named mcollective-@DEVELOPMENT_VERSION@.dmg. My workaround for that problem was to simply edit one file before building the packages:
$ $EDITOR lib/mcollective.rb

VERSION="2.5.0"
You can take these packages and install them on any Mac. Note that you have to manually install the MCollective-Common package on each machine; the server and client packages won’t include it. Configuring and using MCollective is identical to any other Unix platform. Here’s a test from my home iMac to a remote co-location facility.
$ sudo $EDITOR /etc/mcollective/client.cfg
Password:
Update the configuration to match your other client systems. Then test just as before:
$ mco ping
sunstone time=52.14 ms
geode time=52.59 ms
fireagate time=52.95 ms
heliotrope time=56.69 ms
---- ping statistics ----
4 replies max: 56.69 min: 52.14 avg: 53.59
At the time I tested, upgrading the MCollective client on my Mac overwrote the previous client configuration file. So make a backup of your configuration files before you perform an upgrade.
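An easy way to do that (the path here is the default install location; adjust if yours differs) is to snapshot the whole configuration directory before running the upgrade:

```
$ sudo cp -rp /etc/mcollective /etc/mcollective.bak
```

After the upgrade, diff the two directories to see what the installer changed.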
You can track the status of this bug at MCO-244.
At the time I wrote this book, the Solaris MCollective servers and clients must be compiled from source. The good news is that contributed Makefiles already exist to make the process easy.
Installing MCollective on Solaris 11 is quite easy.
$ pkg install pkg:/runtime/ruby-18
$ pkg install system/header
$ pkg install developer/gcc-3
$ gem install stomp
$ gem install json
$ wget -q https://github.com/puppetlabs/marionette-collective/archive/2.5.0.tar.gz \
    -O marionette-collective-2.5.0.tar.gz
$ tar xzf marionette-collective-2.5.0.tar.gz
$ cd marionette-collective-2.5.0
$ make -f ext/solaris11/Makefile install
Updates to this process and instructions on how to build IPS packages are available in the ext/solaris11/README file.
Install the following OpenCSW packages to meet the requirements for running MCollective from OpenCSW Solaris packages
Now we build mcollective:
$ gem install stomp
$ gem install json
$ wget -q https://github.com/puppetlabs/marionette-collective/archive/2.5.0.tar.gz \
    -O marionette-collective-2.5.0.tar.gz
$ tar xzf marionette-collective-2.5.0.tar.gz
$ cd marionette-collective-2.5.0/ext/solaris
$ ./build
Your client and server configuration files will need to reference the OpenCSW-specific path:
libdir = /opt/csw/share/mcollective/plugins
Updates to this process are available in the ext/solaris/README file.
Windows is not fully supported in the community version of MCollective at this time, but mcollectived and various agents all seem to function naturally. Following is the process to install MCollective on a Windows Server.
Installing Ruby on Windows is very straightforward:

1. Download the rubyinstaller-1.9.3 installer.
2. If prompted, choose to Run the installer.
3. On the Optional Tasks screen, make your selections, then continue through to Finish.

Install the RubyGem dependencies by opening Command Prompt and typing the following three commands. You can find Command Prompt in the Start Menu under All Programs → Accessories.
C:\> gem install --no-rdoc --no-ri stomp win32-service sys-admin windows-api
Fetching: stomp-1.3.2.gem (100%)
Successfully installed stomp-1.3.2
Fetching: win32-service-0.8.4.gem (100%)
Successfully installed win32-service-0.8.4
Fetching: sys-admin-1.6.3.gem (100%)
Successfully installed sys-admin-1.6.3
Fetching: win32-api-1.5.1-universal-mingw32.gem (100%)
Fetching: windows-api-0.4.2.gem (100%)
Successfully installed win32-api-1.5.1-universal-mingw32
Successfully installed windows-api-0.4.2
5 gems installed

C:\> gem install --no-rdoc --no-ri win32-dir -v 0.3.7
Fetching: windows-pr-1.2.3.gem (100%)
Fetching: win32-dir-0.3.7.gem (100%)
Successfully installed windows-pr-1.2.3
Successfully installed win32-dir-0.3.7
2 gems installed

C:\> gem install --no-rdoc --no-ri win32-process -v 0.5.5
Fetching: win32-process-0.5.5.gem (100%)
Successfully installed win32-process-0.5.5
1 gem installed

C:\> exit
At the time of writing there were no Windows packages available for MCollective; however, the process to install MCollective is quite easy. Download the latest release and extract it to C:\mcollective.
Fix the version string
At the time this book was written the daemon didn’t properly find the version, and you would be told that mcollectived was version @DEVELOPMENT_VERSION@. My workaround for that problem was to simply edit C:\mcollective\lib\mcollective.rb before taking any other steps.
Change:

VERSION="2.5.0"
Move the binaries into place using the Command Prompt
again:
C:\> cd mcollective

C:\mcollective> copy ext\windows\*.* bin
ext\windows\daemon.bat
ext\windows\environment.bat
ext\windows\mco.bat
ext\windows\README.md
ext\windows\register_service.bat
ext\windows\service_manager.rb
ext\windows\unregister_service.bat
        7 file(s) copied.
Make copies of the configuration examples to customize.
C:\mcollective> cd etc

C:\mcollective\etc> copy client.cfg.dist client.cfg
        1 file(s) copied.

C:\mcollective\etc> copy server.cfg.dist server.cfg
        1 file(s) copied.

C:\mcollective\etc> copy facts.yaml.dist facts.yaml
        1 file(s) copied.

C:\mcollective\etc> exit
Use your favorite editor that supports Unix linefeeds to edit C:\mcollective\etc\server.cfg as follows:
# ActiveMQ Server
connector = activemq
plugin.activemq.heartbeat_interval = 30
plugin.activemq.pool.size = 1
plugin.activemq.pool.1.host = activemq.example.net
plugin.activemq.pool.1.port = 61613
plugin.activemq.pool.1.user = server
plugin.activemq.pool.1.password = Server Password

# Explicitly indicate puppet agent's location
plugin.puppet.command = C:\Program Files (x86)\Puppet Labs\Puppet\bin\puppet.exe

# Facts
factsource = yaml
plugin.yaml = C:\mcollective\etc\facts.yaml

# Security and Connector Plugins
securityprovider = psk
plugin.psk = Pre-Shared Key

# MCollective daemon settings
libdir = C:\mcollective\plugins
logfile = C:\mcollective\mcollective.log
loglevel = info
daemonize = 1
Use Notepad or your favorite editor to edit C:\mcollective\etc\client.cfg as follows:
direct_addressing = 1
main_collective = mcollective
collectives = mcollective

# ActiveMQ Server
connector = activemq
plugin.activemq.heartbeat_interval = 30
plugin.activemq.pool.size = 1
plugin.activemq.pool.1.host = activemq.example.net
plugin.activemq.pool.1.port = 61613
plugin.activemq.pool.1.user = client
plugin.activemq.pool.1.password = Client Password

# Explicitly indicate puppet agent's location
plugin.puppet.command = C:\Program Files (x86)\Puppet Labs\Puppet\bin\puppet.exe

# Security and Connector Plugins
securityprovider = psk
plugin.psk = Pre-Shared Key

# MCollective daemon settings
libdir = C:\mcollective\plugins
logger_type = console
loglevel = warn
Start a Command Prompt as Administrator. Enter the C:\mcollective\bin directory and run register_service.bat:

C:\Windows\system32> cd \mcollective\bin

C:\mcollective\bin> register_service.bat
Service mcollectived installed

C:\mcollective\bin> exit
Right-click on “My Computer,” select “Manage,” and add C:\mcollective\bin to your PATH environment variable.
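Alternatively, you can append the directory to the machine-level PATH from an elevated Command Prompt using setx (one way of doing this; be aware that setx truncates long values on some Windows versions):

```
C:\> setx PATH "%PATH%;C:\mcollective\bin" /M
```

Open a new Command Prompt afterward so the updated PATH takes effect.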
C:\mcollective\bin> mco ping
sunstone time=1706.05 ms
heliotrope time=1721.68 ms
fireagate time=1723.63 ms
geode time=1725.59 ms
tanzanite time=1727.54 ms
jade time=1930.66 ms
---- ping statistics ----
6 replies max: 1930.66 min: 1706.05 avg: 1755.86
If you aren’t running Puppet on the Windows box, you may want to add some useful static facts to the facts.yaml file. Here’s what I used on my test system:
---
mcollective: 1
architecture: x86_64
operatingsystem: Windows
operatingsystemrelease: "7 Ultimate SP1"
At this point you have a fully working MCollective daemon and client on your Windows system. Aside from the differences in the installed paths, every configuration option should work identically to the Linux versions.
An easy way to install and manage multiple versions of Ruby on Linux or Unix environments is to use Ruby Version Manager (RVM).
If your operating system does not include ruby in the base OS libraries, or you wish to use a different version, RVM is designed to assist you. This large shell script will set up Ruby on your system in one easy step. The only command you need to run is this:
$ \curl -L https://get.rvm.io | bash -s stable --ruby=1.9.3
The backslash before curl prevents a shell alias for curl from being used. The output of this command will walk you through the installation. If you want more than a simple install of Ruby, you can learn more about installing and using RVM at https://rvm.io/rvm/install.