An Example of Automated LNMT (session_server) Deployment with Puppet

Lab Environment and Preparation

This lab uses four hosts, all running CentOS 7. Both puppet and puppet-server are version 3.8.7-1.el7.noarch, and facter is version 2.4.6-1.el7.x86_64.

Because this lab cluster contains only a few hosts, name resolution between them is handled by editing the /etc/hosts file. In a real production environment with a larger number of hosts, an internal DNS server should be used instead.

  • Host IPs and role assignments for this lab
Short name   FQDN                IP address      Role
node1        node1.achudk.com    172.16.50.1     Master
node7        node7.achudk.com    172.16.50.7     Agent
node11       node11.achudk.com   172.16.50.11    Agent
node12       node12.achudk.com   172.16.50.12    Agent

Lab Walkthrough

Before anything else, synchronize the time on all hosts (the certificate exchange below requires reasonably consistent clocks).

  1. Make sure the hosts can resolve one another: edit the /etc/hosts file and distribute it to the other hosts
# Edit the hosts file
vim /etc/hosts
# Contents after editing:
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
172.16.50.1     node1.achudk.com    node1
172.16.50.7     node7.achudk.com    node7
172.16.50.11    node11.achudk.com   node11
172.16.50.12    node12.achudk.com   node12
# Distribute the hosts file to the other hosts and confirm
scp /etc/hosts root@172.16.50.7:/etc/
scp /etc/hosts root@172.16.50.11:/etc/
scp /etc/hosts root@172.16.50.12:/etc/
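
If more Agents are added later, the same copy can be scripted; a minimal shell sketch using the IPs from the table above (the IP list is the only assumption):

# convenience loop; adjust the IP list to match your cluster
for ip in 172.16.50.7 172.16.50.11 172.16.50.12; do
    scp /etc/hosts root@${ip}:/etc/
done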
  2. Install the packages

    • Master node
yum install -y puppet-3.8.7-1.el7.noarch.rpm facter-2.4.6-1.el7.x86_64.rpm puppet-server-3.8.7-1.el7.noarch.rpm
  • Agent nodes
yum install -y puppet-3.8.7-1.el7.noarch.rpm facter-2.4.6-1.el7.x86_64.rpm 
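
To confirm that every node ended up with the expected versions, a quick check can be run on each host; a sketch:

puppet --version     # expect 3.8.7
facter --version     # expect 2.4.6
rpm -q puppet facter             # on the Master, additionally: rpm -q puppet-server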
  3. Modify the configuration files

    • Master node
vim /etc/puppet/puppet.conf
# Add the following setting to the [main] section to define the path of the deployment environments
environmentpath = $confdir/environments
# Create the environment directories
mkdir -pv /etc/puppet/environments/{development,production,testing}/{manifests,modules}
  • Each Agent node
vim /etc/puppet/puppet.conf
# Add the following to the [main] section
listen = true                   # listen on port 8139 so the node can promptly receive configuration changes pushed from the Master
server = node1.achudk.com       # hostname of the Master
environment = production        # deployment environment to use

vim /etc/puppet/auth.conf
# Add the following stanza just before the file's final two lines
path /run
method save
auth any
allow node1.achudk.com          # allow the Master to trigger runs on this Agent

# The stock final two lines of the file are:
path /
auth any
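
The settings on both sides can be sanity-checked with the commands below; a sketch (the agent-side port check only makes sense once the agent service is running later in this walkthrough):

# on the Master
puppet config print environmentpath          # should print /etc/puppet/environments
ls /etc/puppet/environments/production/      # should list manifests and modules

# on each Agent
puppet agent --configprint server            # should print node1.achudk.com
puppet agent --configprint environment       # should print production
ss -tnl | grep 8139                          # listen = true opens port 8139 once the agent is running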
  4. Puppet secures communication over HTTPS with mutual certificate authentication between Master and Agents; set up the mutual authentication

    • Initialize the Master node: the service automatically generates a private key and a self-signed CA
#Start the puppetmaster service and check that port 8140 is listening
systemctl start puppetmaster
ss -tnl
  • Each Agent node

To watch the run in the foreground, the puppet agent can be started as a non-daemonized process:

puppet agent --server node1.achudk.com -d -v --no-daemonize
  • On the Master node, view the certificate requests sent by the Agents and sign them

Usage of the puppet cert command:

puppet cert <action> [-h|--help] [-V|--version] [-d|--debug] [-v|--verbose] [--digest <digest>] [<host>]
  • View the requests and sign them
puppet cert list --all      # list all signed and unsigned requests; a single host can also be queried
puppet cert sign --all      # this cluster sits on an isolated private network, so signing everything at once is acceptable; otherwise avoid --all and sign each host individually
  • After signing, list all certificates again and confirm that the requests from every host in this lab have been signed

Hosts whose certificates have been signed are prefixed with a "+" sign.

puppet cert list --all
# Output:
+ "node11.achudk.com" (SHA256) 7F:E0:E7:2A:E3:2B:CA:8B:C0:F5:BA:D7:18:B9:8C:3A:F2:EB:AE:AB:E7:D6:9D:4B:D8:01:B0:B7:74:99:14:1C
+ "node12.achudk.com" (SHA256) 31:DB:00:DC:BC:4C:D7:16:0C:38:6F:D2:AA:9C:D7:9E:9D:59:6B:2C:36:6D:35:86:90:F0:C2:B8:12:CC:50:F9
+ "node7.achudk.com" (SHA256) 11:DB:AA:5A:CD:E4:A3:A2:F3:47:3D:78:61:2A:B8:FB:E5:6C:17:5F:D6:78:2D:FB:0C:99:13:09:F0:38:15:EC
+ "node1.achudk.com"  (SHA256) C8:4D:B4:91:08:C0:F3:A5:EF:03:CC:0A:C5:7C:53:E7:CC:21:C3:72:2B:66:0F:E5:13:06:A5:85:25:E4:0B:C0 (alt names: "DNS:node7.achudk.com", "DNS:puppet", "DNS:puppet.achudk.com")
  • Stop the foreground puppet agent process on each Agent node

  5. Develop the modules needed for this lab

The modules for this lab are developed under the /root/modules directory.

  • chrony time-synchronization module
mkdir -pv /root/modules/chrony/{manifests,files,templates,spec,lib,tests}
vim /root/modules/chrony/manifests/init.pp

class chrony {
    package{'chrony':
        ensure => latest;
    } -> 

    file{'chrony.conf':
        path    =>  '/etc/chrony.conf',
        source  =>  'puppet:///modules/chrony/chrony.conf',
    } ~> 

    service{'chronyd':
        ensure  =>  running,
        enable  =>  true,
    }
}

vim /root/modules/chrony/files/chrony.conf

# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
server 172.16.0.1 iburst
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst

# Ignore stratum in source selection.
stratumweight 0

# Record the rate at which the system clock gains/losses time.
driftfile /var/lib/chrony/drift

# Enable kernel RTC synchronization.
rtcsync

# In first three updates step the system clock instead of slew
# if the adjustment is larger than 10 seconds.
makestep 10 3

# Allow NTP client access from local network.
#allow 192.168/16

# Listen for commands only on localhost.
bindcmdaddress 127.0.0.1
bindcmdaddress ::1

# Serve time even if not synchronized to any NTP server.
#local stratum 10

keyfile /etc/chrony.keys

# Specify the key used as password for chronyc.
commandkey 1

# Generate command key if missing.
generatecommandkey

# Disable logging of client accesses.
noclientlog

# Send a message to syslog if a clock adjustment is larger than 0.5 seconds.
logchange 0.5

logdir /var/log/chrony
#log measurements statistics tracking
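
Before handing a module over to the Master, its syntax and behavior can be checked locally with the standard Puppet 3 tooling; a sketch:

puppet parser validate /root/modules/chrony/manifests/init.pp
puppet apply --noop --modulepath=/root/modules -e 'include chrony'    # dry run: reports what would change without changing it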
  • nginx module example
mkdir -pv /root/modules/nginx/{manifests,files,templates,spec,lib,tests}
vim /root/modules/nginx/manifests/init.pp

class nginx {
    package{'nginx':
        ensure  =>  latest,
    } ->

    service{'nginx':
        ensure  =>  running,
        enable  =>  true,
    }
}
vim /root/modules/nginx/manifests/ngx_proxy.pp

class nginx::ngx_proxy inherits nginx {
    file{'nginx.conf':
        path    =>  '/etc/nginx/conf.d/ngx_proxy.conf',
        source  =>  'puppet:///modules/nginx/ngx_proxy.conf',
        owner   =>  'nginx',
        group   =>  'nginx',
        mode    =>  '0644',
    }

    Package['nginx'] -> File['nginx.conf'] ~> Service['nginx']
}
vim /root/modules/nginx/files/ngx_proxy.conf
upstream tomcatsrvs {
    server node11.achudk.com:8080;
    server node12.achudk.com:8080;
}
server {
    listen       80;
    server_name  node1.achudk.com;

    location / {
        proxy_pass http://tomcatsrvs;
    }
}
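
Once this class has been applied to the proxy node, nginx's own syntax check can confirm that the dropped-in ngx_proxy.conf is valid; a sketch:

nginx -t    # run on node7 after deployment; checks /etc/nginx/nginx.conf and everything it includes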
  • jdk8 module example
mkdir -pv /root/modules/jdk8/{manifests,files,templates,spec,lib,tests}
vim /root/modules/jdk8/manifests/init.pp
class   jdk8    {
    package{'jdk8':
        name    =>  'java-1.8.0-openjdk-devel',
        ensure  =>  installed,
    } ->

    file{'java.sh':
        path    =>  '/etc/profile.d/java.sh',
        source  =>  'puppet:///modules/jdk8/java.sh',
        ensure  =>  file,
    }
}
vim /root/modules/jdk8/files/java.sh
export JAVA_HOME=/usr
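
After the module has been applied on a Tomcat node, the JDK and the profile script can be verified; a sketch:

source /etc/profile.d/java.sh
java -version      # expect an OpenJDK 1.8.0 runtime
echo $JAVA_HOME    # expect /usr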
  • tomcat module example
mkdir -pv /root/modules/tomcat/{manifests,files,templates,spec,lib,tests}
vim /root/modules/tomcat/manifests/init.pp
class tomcat {
    package{['tomcat','tomcat-webapps','tomcat-admin-webapps','tomcat-docs-webapp']:
        ensure  =>  installed,
    }

    file{'server.xml':
        path    =>  '/etc/tomcat/server.xml',
        content =>  template('tomcat/server.xml.erb'),
        require =>  Package[['tomcat','tomcat-webapps','tomcat-admin-webapps','tomcat-docs-webapp']],
    }

    file{'tomcat-users.xml':
        source  =>  'puppet:///modules/tomcat/tomcat-users.xml',
        path    =>  '/etc/tomcat/tomcat-users.xml',
        require =>  File['server.xml'],
    }
    exec{'createtest':
        command =>  'mkdir -pv /usr/share/tomcat/webapps/test/{WEB-INF,META-INF,lib,classes}',
        path    =>  '/bin:/sbin:/usr/bin:/usr/sbin',
        creates =>  '/usr/share/tomcat/webapps/test/WEB-INF',
#       owner   =>  'tomcat',
#       group   =>  'tomcat',
        require =>  File['tomcat-users.xml'],
    }

    service{'tomcat':
        ensure  =>  running,
        enable  =>  true,
        restart =>  'systemctl stop tomcat && echo "Please wait seconds" && sleep 2 && systemctl start tomcat',
        subscribe   =>  Exec['createtest'],
    }
}
vim /root/modules/tomcat/manifests/aindex.pp
class tomcat::aindex    inherits tomcat {
    file{'aindex.jsp':
        source  =>  'puppet:///modules/tomcat/aindex.jsp',
        path    =>  '/usr/share/tomcat/webapps/test/index.jsp',
        owner   =>  'tomcat',
        group   =>  'tomcat',
        mode    =>  '0644',
    }

    Exec['createtest'] -> File['aindex.jsp']
}
vim /root/modules/tomcat/manifests/bindex.pp
class tomcat::bindex    inherits tomcat {
    file{'bindex.jsp':
        source  =>  'puppet:///modules/tomcat/bindex.jsp',
        path    =>  '/usr/share/tomcat/webapps/test/index.jsp',
        owner   =>  'tomcat',
        group   =>  'tomcat',
        mode    =>  '0644',
    }

    Exec['createtest'] -> File['bindex.jsp']
}
vim /root/modules/tomcat/templates/server.xml.erb
#Add the following entry inside the <Host> section
<Context path="/test" docBase="test" reloadable="true"/>
vim /root/modules/tomcat/files/tomcat-users.xml
#Add the following entries
<role rolename="admin-gui"/>
<role rolename="manager-gui"/>
<user username="tomcat" password="tomcat" roles="manager-gui,admin-gui"/>
vim /root/modules/tomcat/files/aindex.jsp
<html>
    <head><title>Tomcat_A</title></head>
        <body>
        <h1><font color="red">TomcatA.achudk.com</font></h1>
            <table align="center" border="1">
                <tr>
                <td>Session ID</td>
                <% session.setAttribute("achudk.com","achudk.com"); %>
                <td><%= session.getId() %></td>
                </tr>
                <tr>
                <td>Created on</td>
                <td><%= session.getCreationTime() %></td>
                </tr>
            </table>
        </body>
</html>
vim /root/modules/tomcat/files/bindex.jsp
<html>
    <head><title>Tomcat_B</title></head>
        <body>
        <h1><font color="green">TomcatB.achudk.com</font></h1>
            <table align="center" border="1">
                <tr>
                <td>Session ID</td>
                <% session.setAttribute("achudk.com","achudk.com"); %>
                <td><%= session.getId() %></td>
                </tr>
                <tr>
                <td>Created on</td>
                <td><%= session.getCreationTime() %></td>
                </tr>
            </table>
        </body>
</html>
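
Once tomcat::aindex and tomcat::bindex have been applied (via the node manifest defined later), each Tomcat instance can be checked directly on its default HTTP port 8080; a sketch:

curl http://node11.achudk.com:8080/test/index.jsp    # expect the TomcatA page
curl http://node12.achudk.com:8080/test/index.jsp    # expect the TomcatB page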
  • memcached module example
mkdir -pv /root/modules/memcached/{manifests,files,templates,spec,lib,tests}
vim /root/modules/memcached/manifests/init.pp
class memcached {
    package{'memcached':
        ensure  =>  installed,
    }

    file{'javolution-5.4.3.1.jar':
        path    =>  '/usr/share/java/tomcat/javolution-5.4.3.1.jar',
        source  =>  'puppet:///modules/memcached/javolution-5.4.3.1.jar',
        require =>  Package['memcached'],
    }
    file{'memcached-session-manager-1.8.3.jar':
        path    =>  '/usr/share/java/tomcat/memcached-session-manager-1.8.3.jar',
        source  =>  'puppet:///modules/memcached/memcached-session-manager-1.8.3.jar',
        require =>  File['javolution-5.4.3.1.jar'],
    }
    file{'memcached-session-manager-tc7-1.8.3.jar':
        path    =>  '/usr/share/java/tomcat/memcached-session-manager-tc7-1.8.3.jar',
        source  =>  'puppet:///modules/memcached/memcached-session-manager-tc7-1.8.3.jar',
        require =>  File['memcached-session-manager-1.8.3.jar'],
    }
    file{'msm-javolution-serializer-1.8.3.jar':
        path    =>  '/usr/share/java/tomcat/msm-javolution-serializer-1.8.3.jar',
        source  =>  'puppet:///modules/memcached/msm-javolution-serializer-1.8.3.jar',
        require =>  File['memcached-session-manager-tc7-1.8.3.jar'],
    }
    file{'spymemcached-2.11.1.jar':
        path    =>  '/usr/share/java/tomcat/spymemcached-2.11.1.jar',
        source  =>  'puppet:///modules/memcached/spymemcached-2.11.1.jar',
        require =>  File['msm-javolution-serializer-1.8.3.jar'],
    }
    exec{'service':
        command =>  'systemctl start memcached && systemctl enable memcached && systemctl restart tomcat',
        path    =>  '/bin:/sbin:/usr/bin:/usr/sbin',
        refreshonly =>  true,   # run only when notified, so Tomcat is not restarted on every catalog run
        subscribe   =>  File['spymemcached-2.11.1.jar'],
    }
}
vim /root/modules/memcached/templates/server.xml.erb
#Add the following inside the <Host> section
<Context path="/test" docBase="test" reloadable="true"/>
    <Manager className="de.javakaffee.web.msm.MemcachedBackupSessionManager"
                memcachedNodes="n1:<%= 'node11.achudk.com' %>:11211,n2:<%= 'node12.achudk.com' %>:11211"
                failoverNodes="n1"
                requestUriIgnorePattern=".*\.(ico|png|gif|jpg|css|js)$"
                transcoderFactoryClass="de.javakaffee.web.msm.serializer.javolution.JavolutionTranscoderFactory"
    />
#Place the following .jar files into the module's files directory
mv /root/{javolution-5.4.3.1.jar,memcached-session-manager-1.8.3.jar,memcached-session-manager-tc7-1.8.3.jar,msm-javolution-serializer-1.8.3.jar,spymemcached-2.11.1.jar} /root/modules/memcached/files/
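
After the memcached class has been applied on the Tomcat nodes, the service and the session-manager jars can be verified; a sketch:

ss -tnl | grep 11211                                                      # memcached on its default port
ls /usr/share/java/tomcat/ | grep -E 'memcached|javolution|spymemcached'  # the five jars placed by the module
systemctl is-active memcached tomcat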
  6. Define the node manifest (site.pp)
vim /root/manifests/site.pp

# 'base' holds what is common to every node; the agent nodes below inherit it so chrony is applied cluster-wide
node 'base' {
    include chrony
}

node 'node7.achudk.com' inherits 'base' {
    include nginx::ngx_proxy
}

node 'node11.achudk.com' inherits 'base' {
    include jdk8
    include tomcat::aindex
    include memcached
}

node 'node12.achudk.com' inherits 'base' {
    include jdk8
    include tomcat::bindex
    include memcached
}
  7. Place all modules and the manifest (site.pp) into the corresponding production environment
mv /root/modules/* /etc/puppet/environments/production/modules/
mv /root/manifests/site.pp /etc/puppet/environments/production/manifests/
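
A quick sanity check of the production environment before starting the agents; a sketch:

puppet parser validate /etc/puppet/environments/production/manifests/site.pp
ls /etc/puppet/environments/production/modules/    # expect chrony, jdk8, memcached, nginx, tomcat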
  8. Start the puppet agent service on each Agent node
systemctl start puppet      # the agent's systemd unit is named "puppet"

The Agent nodes will automatically pull and apply all of the deployment content from the Master node.

  9. If an Agent does not pick up the deployment content right away, the kick command can be used to tell all Agents to sync the updated configuration
#Run on the Master node
puppet kick -a
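
Finally, session sharing can be verified end to end through the nginx proxy: with a shared cookie jar, repeated requests should alternate between the TomcatA and TomcatB pages while the Session ID stays the same. A sketch (172.16.50.7 is node7, the proxy host from the table above; the Host header matches the server_name in ngx_proxy.conf):

for i in 1 2 3 4; do
    curl -s -c /tmp/cookies.txt -b /tmp/cookies.txt -H 'Host: node1.achudk.com' \
        http://172.16.50.7/test/index.jsp | grep -E '<h1>|<td>'
done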