Your infrastructure is never an immutable black box. It is one step in a long iteration.
Just as you don't have to think about capacitors to microwave your burrito, you don't have to think about the intermediate steps to know how you want your server to be.
Talk about choosing the right tool for the job: scripting ./configure && make && make install in Puppet is a good sign you should build a package instead.
~/bin/doit5
may never work again. It's a good thing that the Puppet language is not itself Ruby, but I'm not here to start a holy war. Limits in configuration management serve the same purpose as limits in a templating language: they enforce separation of concerns. Here, they separate process from desired state.
~/bin/doit5
Puppet and Chef work basically the same way from this altitude.
Puppet and Chef again work the same way here. The single difference is in the granularity of dependency declarations. Puppet's are at the resource level. Chef's are at the cookbook (module) level.
package { "foo": ensure => "0.0.0" }
file { "/etc/foo.conf":
content => template("foo.conf.erb"),
owner => "foo",
mode => "600",
ensure => file,
}
Be wary of using latest on packages you don't build in-house.
class bar {
exec { "apt-get update": }
package { "bar":
require => Exec["apt-get update"],
ensure => latest,
}
file { "/etc/bar.conf":
content => template("bar.conf.erb"),
ensure => file,
}
}
The defaults are good except for a few places that break filesystem standards, which I choose to fix. pluginsync will be important later. You can customize the master and agents separately using [master] and [agent] sections, INI-style.
/etc/puppet/puppet.conf
[main]
logdir=/var/log/puppet
rundir=/var/run/puppet
ssldir=$vardir/ssl
pluginsync=true
server=puppetmaster.example.com
This should exist on the master, too. Running puppet agent as a daemon carries a minor memory hit; avoiding it makes a huge difference to DevStructure users, who sometimes have just 256 MB of RAM. This cron job calls bash, not sh, to get $RANDOM, which spreads agent runs out and prevents the thundering herd from taking down your Puppet master every half hour. This is a good time to point out the obvious: not all your servers will always be the same, so plan accordingly.
/etc/cron.d/puppet
PATH="/usr/sbin:/usr/bin:/sbin:/bin"
# Remove the line breaks.
*/30 * * * * root bash -c '
sleep $(($RANDOM % 1800));
puppet agent --certname=$(cat
/etc/puppet/certname)
--no-daemonize --onetime'
puppet --genconfig
/etc/puppet
manifests/site.pp
manifests/site.pp
import "nodes"
Exec {
path => "/usr/sbin:/usr/bin:/sbin:/bin",
}
Node and class names may collide. "default" is special so I use "base" as my most general class name.
manifests/nodes.pp
node default { include base }
node www inherits default { include www }
node 'staging.example.com' inherits www {}
node /\.www\.example\.com$/ inherits www {}
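Regex nodes match against the agent's certname. A quick illustrative sketch of which certnames the pattern above selects (this mimics, not reuses, Puppet's matching):

```ruby
# Illustrative: which certnames does the node regex above select?
pattern = /\.www\.example\.com$/

%w[a.www.example.com staging.example.com www.example.com].each do |certname|
  puts "#{certname}: #{pattern.match(certname) ? 'matches' : 'no match'}"
end
```

Note that www.example.com itself does not match; the pattern requires a leading label before .www.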
base
modules/base/manifests/init.pp
class base {
package {
"dnsutils": ensure => latest;
"psmisc": ensure => latest;
"strace": ensure => latest;
"sysstat": ensure => latest;
"telnet": ensure => latest;
}
}
www
modules/www/manifests/init.pp
class www {
package { "nginx":
ensure => "0.7.65-1ubuntu2",
}
}
/var/lib/puppet/ssl on agents.
puppet cert on master.
autosign.conf
foo.example.com
*.www.example.com
*
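autosign.conf lines are exact certnames or shell-style globs. A hedged sketch of the matching semantics, approximating the glob match with Ruby's File.fnmatch (not Puppet's actual code):

```ruby
# Approximate autosign.conf semantics: a CSR is signed automatically
# when its certname matches any entry; entries may be globs.
def autosign?(entries, certname)
  entries.any? { |entry| File.fnmatch(entry, certname) }
end

entries = ["foo.example.com", "*.www.example.com"]
puts autosign?(entries, "bar.www.example.com")  # glob match
puts autosign?(entries, "bar.example.com")      # no match without a bare "*"
```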
auth.conf
path ~ ^/catalog/([^/]+)$
method find
allow $1
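The $1 backreference ties the allowed certname to the path capture, so an agent can only fetch the catalog matching its own certname. An illustrative Ruby sketch of that check (assumed semantics, not Puppet's implementation):

```ruby
# Illustrative: allow a catalog request only when the authenticated
# certname equals the capture group from the request path.
CATALOG_RULE = %r{^/catalog/([^/]+)$}

def allowed?(path, certname)
  match = CATALOG_RULE.match(path)
  !match.nil? && match[1] == certname
end

puts allowed?("/catalog/db1.example.com", "db1.example.com")   # true
puts allowed?("/catalog/db1.example.com", "evil.example.com")  # false
```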
fileserver.conf
[modulename]
path /foo/bar/baz
allow *
iptables rules. /etc/passwd entries. autosigning? node definitions.

[agent]
certname=foo.www.example.com

The default node matches any certname.

[agent]
certname=adhadgsdhsfsdhxcb.example.com

autosign disabled? Try social engineering.

[agent]
certname=admin0.example.com
# certname=dev.example.com
# certname=test.example.com
# certname=corp.example.com
export ssldir=/var/lib/puppet/ssl
export server=puppetmaster.example.com
curl --insecure \
--cert $ssldir/certs/$CERTNAME.pem \
--key $ssldir/private_keys/$CERTNAME.pem \
--cacert $ssldir/ca/ca_crt.pem \
https://$server:8140/$ENV/catalog/db1.example.com
If you do have a signed certificate, you can snoop around using it, too.
export server=puppetmaster.example.com
curl --insecure \
https://$server:8140/$ENV/file_content/$MODULE/$FILE
iptables
iptables -P INPUT ACCEPT
iptables -P OUTPUT ACCEPT
iptables -P FORWARD ACCEPT
iptables -F
iptables -A INPUT -m conntrack \
--ctstate RELATED,ESTABLISHED -j ACCEPT
iptables -A INPUT -i eth1 -p tcp \
-s 10.47.0.0/16 --dport 8140 -j ACCEPT
iptables -A INPUT -i eth1 -p udp \
-s 10.47.0.0/16 --dport 8140 -j ACCEPT
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -j DROP
stunnel

stunnel(8), because your $NoSQL is not SSL-aware.

stunnel Upstart config

description "stunnel-redis-client"
start on runlevel [2345]
stop on runlevel [!2345]
respawn
exec /usr/bin/stunnel -f -c -d localhost:6379 \
-r redis.example.com:6381
description "stunnel-redis-server"
start on runlevel [2345]
stop on runlevel [!2345]
respawn
exec /usr/bin/stunnel -f -d 6381 -r localhost:6382
Hostname-specific file paths can mitigate this risk. Leak example: malicious server grabbing the database config or secrets file.
file { "/foo/bar/baz":
  source  => "puppet://foo/bar/baz",
  content => template("foo/bar/baz"),
  ensure  => file,
}
default node, certname speculation.

Use --config, --manifestdir, or --manifest to run different masters, each listening on its own port.

$extlookup_datadir = "/etc/puppet/extdata"
$extlookup_precedence = [
"%{fqdn}", "%{domain}", "base"]
file { "/foo/bar/baz":
content => extlookup("foobarbaz"),
ensure => file,
}
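A sketch of extlookup's precedence search, assuming one CSV file per precedence level with keys in the first column (simplified: real extlookup also handles defaults and variable interpolation):

```ruby
require 'csv'
require 'tmpdir'

# Simplified extlookup: walk the precedence list and return the value
# from the first CSV file that defines the key.
def extlookup(datadir, precedence, key)
  precedence.each do |level|
    path = File.join(datadir, "#{level}.csv")
    next unless File.exist?(path)
    CSV.read(path).each { |row| return row[1] if row[0] == key }
  end
  nil
end

Dir.mktmpdir do |dir|
  File.write(File.join(dir, "example.com.csv"), "foobarbaz,per-domain\n")
  File.write(File.join(dir, "base.csv"), "foobarbaz,fallback\n")
  # The %{domain} file wins over "base" because it comes first.
  puts extlookup(dir, ["foo.example.com", "example.com", "base"], "foobarbaz")
end
```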
The extlookup function can retrieve data from external CSV files.

[master]
external_nodes=/usr/local/bin/classifier
node_terminus=exec
/usr/local/bin/classifier foo.example.com
# Facts are available as YAML in
# $vardir/yaml/facts/foo.example.com.yaml
--- !ruby/object:Puppet::Node::Facts
expiration: 2010-09-20 20:27:14.445807
name: &id003 foo.example.com
values:
hardwaremodel: &id002 x86_64
kernelrelease: 2.6.35.1-rscloud
selinux: "false"
sshrsakey: OH HAI
facter | less
---
classes:
- base
- www
environment: production
parameters:
mail_server: mail.example.com
#!/bin/sh
set -e
TMP=$(mktemp -d "$1.XXXXXXXXXX")
ssh-keygen -q -f "$TMP/id_rsa" -b 2048 -N ""
zomg_post_to_the_api "$1" "$(cat "$TMP/id_rsa")"
cat <<EOF
---
classes:
- ssh
parameters:
public_key: $(cat "$TMP/id_rsa.pub")
EOF
class ssh {
file {
"/root/.ssh":
mode => "700",
ensure => directory;
"/root/.ssh/authorized_keys":
content => "$public_key\n",
ensure => file;
}
}
modules/ssh/
  manifests/
    init.pp
  lib/puppet/
    type/
      keygen.rb
    provider/keygen/
      posix.rb
package is a type. apt is a provider of packages.

class ssh {
  keygen { "name-it-whatever": }
}
require 'puppet/type'
Puppet::Type.newtype(:keygen) do
@doc = "ssh-keygen example"
newparam(:whatever, :namevar => true) do
desc "Name it whatever."
end
ensurable do
self.defaultvalues
defaultto :present
end
end
require 'openssl'
require 'puppet/resource'
require 'puppet/resource/catalog'
Puppet::Type.type(:keygen).
provide(:posix) do
desc "ssh-keygen example for POSIX"
defaultfor :operatingsystem => :debian
# Define exists?, create, and destroy.
end
def exists?
File.exists?(
"/root/.ssh/authorized_keys")
end
def destroy
Puppet.warning "No turning back."
raise NotImplementedError
end
def create
key = OpenSSL::PKey::RSA.generate(2048)
zomg_post_to_the_api \
Facter.value(:certname), key
catalog = Puppet::Resource::Catalog.new
catalog.create_resource(:file,
:path => "/root/.ssh",
:mode => "700",
:ensure => :directory
)
catalog.create_resource(:file,
:path => "/root/.ssh/authorized_keys",
:content => "#{key.public_key}\n",
:ensure => :file
)
catalog.apply
end
config.ru
$0 = "master"
ARGV << "--rack"
ARGV << "--certname=#{
File.read("/etc/puppet/certname").chomp}"
require 'puppet/application/master'
# TODO Middleware.
run Puppet::Application[:master].run
require 'base64'
require 'json'
require 'rack/utils'
require 'yaml'
require 'zlib'
class StuckInTheMiddleWithYou
def initialize(app)
@app = app
end
def call(env)
# TODO Preprocessing.
status, headers, body = @app.call(env)
# TODO Postprocessing.
[status, headers, body]
end
end
use StuckInTheMiddleWithYou
params = Rack::Utils.parse_query(env["QUERY_STRING"], "&")
facts = case params["facts_format"]
when "b64_zlib_yaml"
YAML.load(Zlib::Inflate.inflate(Base64.decode64(
Rack::Utils.unescape(params["facts"]))))
end
# Change facts.
if Puppet::Node::Facts === facts
facts.values["foo"] = "bar"
end
params["facts"] = case params["facts_format"]
when "b64_zlib_yaml"
Rack::Utils.escape(Base64.encode64(Zlib::Deflate.deflate(
YAML.dump(facts), Zlib::BEST_COMPRESSION)))
end if facts
env["QUERY_STRING"] = Rack::Utils.build_query(params)
env["REQUEST_URI"] =
"#{env["PATH_INFO"]}?#{env["QUERY_STRING"]}"
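The b64_zlib_yaml codec above round-trips facts through URL-escaping, Base64, and zlib-compressed YAML. A minimal standalone sketch of just the codec, using CGI escaping in place of Rack::Utils and a plain hash in place of Puppet::Node::Facts:

```ruby
require 'base64'
require 'cgi'
require 'yaml'
require 'zlib'

# Encode as the middleware's "b64_zlib_yaml" facts format:
# YAML -> zlib deflate -> Base64 -> URL escape.
def encode_facts(facts)
  CGI.escape(Base64.encode64(
    Zlib::Deflate.deflate(YAML.dump(facts), Zlib::BEST_COMPRESSION)))
end

# Decode in the reverse order.
def decode_facts(param)
  YAML.load(Zlib::Inflate.inflate(Base64.decode64(CGI.unescape(param))))
end

facts = {"fqdn" => "foo.example.com", "hardwaremodel" => "x86_64"}
puts decode_facts(encode_facts(facts)) == facts
```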
object = case headers["Content-Type"]
when /[\/-]pson$/ then JSON.parse(body.body.join)
when /[\/-]yaml$/ then YAML.load(body.body.join)
when "text/marshal" then Marshal.load(body.body.join)
else body.body.join
end
# Change catalog.
if Hash === object && "Catalog" == object["document_type"]
object["data"]["resources"].unshift({
"exported" => false,
"title" => "apt-get update",
"parameters" => {"path"=>"/usr/sbin:/usr/bin:/sbin:/bin"},
"type" => "Exec",
})
end
body = case headers["Content-Type"]
when /[\/-]pson$/ then [JSON.generate(object)]
when /[\/-]yaml$/ then [YAML.dump(object)]
when "text/marshal" then [Marshal.dump(object)]
else [object]
end
headers["Content-Length"] = Rack::Utils.bytesize(body.first).to_s
Mention this is a last resort and that I've fixed bugs in Puppet itself via this method.