Pinpoint Cluster Deployment

Preparation


Node Preparation

The nodes used in this deployment are listed below:

IP              Hostname        Role
192.168.2.131   pinpointNode1   HBase master node; NameNode; Pinpoint Collector; Nginx (proxy for the Pinpoint Collectors)
192.168.2.132   pinpointNode2   HBase slave node; DataNode; Pinpoint Collector; Pinpoint Web
192.168.2.133   pinpointNode3   HBase slave node; DataNode; Pinpoint Collector

The Pinpoint cluster depends on an HBase cluster, so the HBase cluster (HBase + ZooKeeper + Hadoop) must be set up first.
The HBase cluster here was set up following the article "Hadoop2.7.3+HBase1.2.5+ZooKeeper3.4.6搭建分佈式集羣環境". (Note that the versions actually installed below are Hadoop 2.6.5, HBase 1.2.4, and ZooKeeper 3.4.9.)

Install Java

Installation

cd /home/vagrant
curl -OL http://files.saas.hand-china.com/hitoa/1.0.0/jdk-8u112-linux-x64.tar.gz
tar -xzvf jdk-8u112-linux-x64.tar.gz

Configure environment variables

vim /etc/profile

Append the following to the end of /etc/profile:

export JAVA_HOME=/home/vagrant/jdk1.8.0_112
export JRE_HOME=$JAVA_HOME/jre
export PATH=$JAVA_HOME/bin:$JRE_HOME/bin:$PATH

Apply the configuration:

source /etc/profile

Verify:

java -version

Output similar to the following indicates a successful installation:

java version "1.8.0_112"
Java(TM) SE Runtime Environment (build 1.8.0_112-b15)
Java HotSpot(TM) 64-Bit Server VM (build 25.112-b15, mixed mode)

Configure Hosts Mapping

Add the hosts mappings on each of the three nodes:

vim /etc/hosts

The file should look like this after the change:

#Comment out this kind of line (127.0.0.1 mapped to the node's own hostname) on each node, otherwise errors will occur
#127.0.0.1   pinpointNode1   pinpointNode1
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
#host IP       hostname
192.168.2.131 pinpointNode1
192.168.2.132 pinpointNode2
192.168.2.133 pinpointNode3
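
As a quick sanity check, the mappings can be verified from any node by pinging each hostname:

# each hostname should resolve to the IP configured above
ping -c 1 pinpointNode1
ping -c 1 pinpointNode2
ping -c 1 pinpointNode3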

Configure Passwordless SSH Between Cluster Nodes

CentOS installs SSH by default; if it is missing, install it first.
The cluster requires passwordless SSH: each machine must be able to log in to itself without a password, and the master and slave nodes must be able to log in to each other in both directions without a password. Passwordless login between slave nodes is optional.

Passwordless login to the local machine

The following uses pinpointNode1 as an example; repeat the same steps on the other two nodes to configure passwordless login to the local machine.
1) Generate the key pair

ssh-keygen -t rsa

2) Append the public key to the authorized_keys file

cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

3) Set permissions

chmod 600 .ssh/authorized_keys

4) Verify that the local machine can be accessed without a password

ssh pinpointNode1

Passwordless login from the master node to the slave nodes

Copy the public key of the master node pinpointNode1 into the authorized_keys file on pinpointNode2 and pinpointNode3.
Then test from the pinpointNode1 node: ssh pinpointNode2 and ssh pinpointNode3.
The first login may require a yes confirmation; after that you can log in directly.
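
A minimal sketch of one way to distribute the key, assuming the vagrant user and that password authentication is still enabled on the target nodes (ssh-copy-id appends the local ~/.ssh/id_rsa.pub to the remote authorized_keys):

# run on pinpointNode1; enter the vagrant user's password once per node
ssh-copy-id vagrant@pinpointNode2
ssh-copy-id vagrant@pinpointNode3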

Passwordless login from the slave nodes to the master node

Copy the contents of each slave node's public key id_rsa.pub into the master node's authorized_keys file.
Then test from the slave nodes: ssh pinpointNode1.
The first login may require a yes confirmation; after that you can log in directly.

Configure the ZooKeeper Cluster


Log in to the master node pinpointNode1

Extract the package

tar -xzvf zookeeper-3.4.9.tar.gz

Modify the configuration files

[vagrant@pinpointNode1 ~]$ cd zookeeper-3.4.9/
[vagrant@pinpointNode1 zookeeper-3.4.9]$ mkdir data
[vagrant@pinpointNode1 zookeeper-3.4.9]$ mkdir logs

Edit the configuration:

[vagrant@pinpointNode1 zookeeper-3.4.9]$ vi conf/zoo.cfg

Add the following content:

# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/home/vagrant/zookeeper-3.4.9/data
dataLogDir=/home/vagrant/zookeeper-3.4.9/logs

server.1=192.168.2.131:2888:3888
server.2=192.168.2.132:2888:3888
server.3=192.168.2.133:2888:3888
# the port at which the clients will connect
clientPort=2181
maxSessionTimeout=200000

Create a file named myid in the data directory and write the number 1 into it:

[vagrant@pinpointNode1 zookeeper-3.4.9]$ cd data
[vagrant@pinpointNode1 data]$ vi myid
1

Copy the configured ZooKeeper directory to the slave nodes pinpointNode2 and pinpointNode3:

[vagrant@pinpointNode1]$ scp -r zookeeper-3.4.9 pinpointNode2:/home/vagrant/
[vagrant@pinpointNode1]$ scp -r zookeeper-3.4.9 pinpointNode3:/home/vagrant/

Change the myid on the slave nodes:
Log in to pinpointNode2, go to ZooKeeper's data directory and run echo 2 > myid; log in to pinpointNode3, go to ZooKeeper's data directory and run echo 3 > myid, as shown below.
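
The equivalent commands, run from the home directory of each slave node:

[vagrant@pinpointNode2 ~]$ echo 2 > zookeeper-3.4.9/data/myid
[vagrant@pinpointNode3 ~]$ echo 3 > zookeeper-3.4.9/data/myid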

Start ZooKeeper

Start ZooKeeper on each node:

[vagrant@pinpointNode1 zookeeper-3.4.9]$ ./bin/zkServer.sh start
[vagrant@pinpointNode2 zookeeper-3.4.9]$ ./bin/zkServer.sh start
[vagrant@pinpointNode3 zookeeper-3.4.9]$ ./bin/zkServer.sh start

Check the ZooKeeper status

[vagrant@pinpointNode1 zookeeper-3.4.9]$ ./bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /home/vagrant/zookeeper-3.4.9/bin/../conf/zoo.cfg
Mode: follower
[vagrant@pinpointNode2 zookeeper-3.4.9]$ ./bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /home/vagrant/zookeeper-3.4.9/bin/../conf/zoo.cfg
Mode: leader
[vagrant@pinpointNode3 zookeeper-3.4.9]$ ./bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /home/vagrant/zookeeper-3.4.9/bin/../conf/zoo.cfg
Mode: follower

Because ZooKeeper produces a large number of snapshot and transaction log files as it runs, the disk can fill up and effectively freeze every process on the machine, so these files need to be cleaned up periodically (one approach is sketched below).
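
A low-effort option, assuming ZooKeeper 3.4 or later (which supports autopurge), is to let ZooKeeper prune old snapshots and transaction logs itself. A minimal sketch, to be applied on every node and followed by a ZooKeeper restart:

# append the autopurge settings to zoo.cfg on each node, then restart ZooKeeper
cat >> /home/vagrant/zookeeper-3.4.9/conf/zoo.cfg <<'EOF'
# keep only the 3 most recent snapshots (and their transaction logs)
autopurge.snapRetainCount=3
# run the purge task every 24 hours (0 disables autopurge)
autopurge.purgeInterval=24
EOF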

Hadoop Cluster Installation


Configure the Hadoop environment on pinpointNode1

Extract

[vagrant@pinpointNode1 ~]$ tar -xzvf  hadoop-2.6.5.tar.gz

Configure environment variables

Edit the configuration file with vi /etc/profile and add:

export HADOOP_HOME=/home/vagrant/hadoop-2.6.5
export PATH=$PATH:$HADOOP_HOME/bin

Make the hadoop command take effect immediately in the current terminal:

source /etc/profile

Modify the Hadoop configuration

All of the configuration files below are located under /home/vagrant/hadoop-2.6.5/etc/hadoop.
Configure core-site.xml
Create the directory /home/vagrant/hadoop-2.6.5/tmp (mkdir -p /home/vagrant/hadoop-2.6.5/tmp), then edit the configuration as follows:

<configuration>
   <property>
        <name>hadoop.tmp.dir</name>
        <value>file:/home/vagrant/hadoop-2.6.5/tmp</value>
        <description>Abase for other temporary directories.</description>
    </property>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://192.168.2.131:9000</value>
    </property>
</configuration>

Note: if the hadoop.tmp.dir parameter is not configured, the system default temporary directory /tmp/hadoop-${user.name} is used. That directory is wiped on every reboot, so the NameNode would have to be re-formatted after each reboot, otherwise errors occur.
Configure hdfs-site.xml
Create the directories:

mkdir -p /home/vagrant/hadoop-2.6.5/hdfs/name
mkdir -p /home/vagrant/hadoop-2.6.5/hdfs/data

Edit the configuration:

<configuration>
   <property>
        <name>dfs.replication</name>
        <value>2</value>
    </property>
    <property>
        <name>dfs.name.dir</name>
        <value>file:/home/vagrant/hadoop-2.6.5/hdfs/name</value>
    </property>
    <property>
        <name>dfs.data.dir</name>
        <value>file:/home/vagrant/hadoop-2.6.5/hdfs/data</value>
    </property>
</configuration>

dfs.replication is the number of data replicas; it should generally not exceed the number of DataNodes.
Configure mapred-site.xml
Copy mapred-site.xml.template to mapred-site.xml, then edit it:

cp /home/vagrant/hadoop-2.6.5/etc/hadoop/mapred-site.xml.template /home/vagrant/hadoop-2.6.5/etc/hadoop/mapred-site.xml  
vi /home/vagrant/hadoop-2.6.5/etc/hadoop/mapred-site.xml
<configuration>
   <property>
      <name>mapreduce.framework.name</name>
      <value>yarn</value>
   </property>
   <property>
        <name>mapred.job.tracker</name>
        <value>192.168.2.131:9001</value>
    </property>
</configuration>

Configure yarn-site.xml

vim /home/vagrant/hadoop-2.6.5/etc/hadoop/yarn-site.xml
<configuration>

<!-- Site specific YARN configuration properties -->
   <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>192.168.2.131</value>
    </property>
</configuration>

Modify the slaves file
Edit /home/vagrant/hadoop-2.6.5/etc/hadoop/slaves, which specifies which servers act as DataNode nodes. Remove localhost and add all DataNode nodes, as shown below:

192.168.2.132
192.168.2.133

Modify hadoop-env.sh

vi /home/vagrant/hadoop-2.6.5/etc/hadoop/hadoop-env.sh

Add the JDK path; if the JDK path differs between servers, adjust it per node:

export JAVA_HOME=/home/vagrant/jdk1.8.0_112

Configure the Hadoop environment on the slave nodes

From pinpointNode1, copy the configured Hadoop directory to pinpointNode2 and pinpointNode3:

scp -r /home/vagrant/hadoop-2.6.5 pinpointNode2:/home/vagrant/
scp -r /home/vagrant/hadoop-2.6.5 pinpointNode3:/home/vagrant/

添加環境變量

vi /etc/profile
## content to add
export HADOOP_HOME=/home/vagrant/hadoop-2.6.5
export PATH=$PATH:$HADOOP_HOME/bin

Make the hadoop command take effect immediately in the current terminal:

source /etc/profile

HBase Cluster Installation


Extract the package

[vagrant@pinpointNode1 ~]$ tar -xzvf hbase-1.2.4-bin.tar.gz

Modify the configuration

The configuration files are located in /home/vagrant/hbase-1.2.4/conf.
hbase-env.sh

vi hbase-env.sh

Change the following entries:

export JAVA_HOME=/home/vagrant/jdk1.8.0_112  # adjust if the JDK path differs on this node
export HBASE_CLASSPATH=/home/vagrant/hadoop-2.6.5/etc/hadoop # lets HBase find the Hadoop configuration
export HBASE_MANAGES_ZK=false # use the external ZooKeeper ensemble

hbase-site.xml

vi hbase-site.xml
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://192.168.2.131:9000/hbase</value>
  </property>
  <property>
    <name>hbase.master</name>
    <value>192.168.2.131</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>192.168.2.131,192.168.2.132,192.168.2.133</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2181</value>
  </property>
  <property>
    <name>zookeeper.session.timeout</name>
    <value>200000</value>
  </property>
  <property>
    <name>dfs.support.append</name>
    <value>true</value>
  </property>
</configuration>

regionservers

vi regionservers

Add the slave node list to the regionservers file:

192.168.2.132
192.168.2.133

Distribute the installation directory

Copy the entire HBase installation directory to all slave nodes:

scp -r /home/vagrant/hbase-1.2.4 pinpointNode2:/home/vagrant/
scp -r /home/vagrant/hbase-1.2.4 pinpointNode3:/home/vagrant/

Start the Cluster


Start ZooKeeper
Run the following command on every node to start ZooKeeper:

/home/vagrant/zookeeper-3.4.9/bin/zkServer.sh start

Start Hadoop
On the master node, go to /home/vagrant/hadoop-2.6.5 and run:

./bin/hadoop namenode -format

This formats the NameNode. It is only performed once, before the first service start, and should not be run again afterwards.
Then start Hadoop by running the following command on the master node:

/home/vagrant/hadoop-2.6.5/sbin/start-all.sh
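
A quick way to confirm that HDFS came up correctly (a sketch, assuming $HADOOP_HOME/bin is on the PATH as configured above):

# both DataNodes (192.168.2.132 and 192.168.2.133) should be reported as live
hdfs dfsadmin -report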

Start HBase
Run the following command on the master node:

/home/vagrant/hbase-1.2.4/bin/start-hbase.sh
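
Optionally confirm the HBase cluster from the shell; the status command should report one active master and two region servers:

echo "status" | /home/vagrant/hbase-1.2.4/bin/hbase shell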

Process List


After the cluster starts successfully, the processes can be checked with jps.
Master node:

[vagrant@pinpointNode1 ~]$ jps
10929 QuorumPeerMain
18562 Jps
17033 NameNode
17356 ResourceManager
17756 HMaster
17213 SecondaryNameNode

Slave nodes:

[vagrant@pinpointNode2 ~]$ jps
9955 DataNode
11076 HRegionServer
8309 QuorumPeerMain
11592 Jps
10059 NodeManager
[vagrant@pinpointNode3 ~]$ jps
10608 Jps
9878 NodeManager
10087 HRegionServer
8189 QuorumPeerMain
9774 DataNode

Check Status via Web UIs


Hadoop (YARN)
Visit http://192.168.2.131:8088/cluster/nodes to view the cluster nodes.

HBase
Visit http://192.168.2.131:16010/master-status to view the HBase cluster status.

HDFS
Visit http://192.168.2.131:50070/dfshealth.html#tab-overview to view the data nodes.

Initialize the Pinpoint Table Schema

First make sure HBase is running normally, then run the following command on the master node:

[vagrant@pinpointNode1 ~]$ ./hbase-1.2.4/bin/hbase shell ./hbase-create.hbase

After successful initialization, 16 tables are created in HBase.
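
The hbase-create.hbase script ships with the Pinpoint sources. If it is not yet on the node, it can be fetched from the Pinpoint repository (the URL below assumes the usual hbase/scripts path and the 1.6.2 tag), and the result can be verified by listing the tables:

# download the schema script (assumed location in the naver/pinpoint repository)
curl -OL https://raw.githubusercontent.com/naver/pinpoint/1.6.2/hbase/scripts/hbase-create.hbase
# after running the initialization, the Pinpoint tables (AgentInfo, ApplicationTraceIndex, ...) should be listed
echo "list" | ./hbase-1.2.4/bin/hbase shell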

For more on Pinpoint cluster deployment, see the article "pinpoint部署以及使用指南" (Pinpoint deployment and usage guide).

Install Pinpoint Collector


This deployment runs a Pinpoint Collector cluster, so the Collector is installed on all three nodes; the detailed steps are described below.
Both Pinpoint Collector and Pinpoint Web are deployed on Tomcat, so download Tomcat first.
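
For reference, the packages used below can be downloaded directly; the exact URLs are assumptions based on the Apache archive layout and the Pinpoint GitHub release page:

# Tomcat 8.5.14 from the Apache archive
curl -OL https://archive.apache.org/dist/tomcat/tomcat-8/v8.5.14/bin/apache-tomcat-8.5.14.zip
# Pinpoint 1.6.2 artifacts from the GitHub release (assumed asset names)
curl -OL https://github.com/naver/pinpoint/releases/download/1.6.2/pinpoint-collector-1.6.2.war
curl -OL https://github.com/naver/pinpoint/releases/download/1.6.2/pinpoint-web-1.6.2.war
curl -OL https://github.com/naver/pinpoint/releases/download/1.6.2/pinpoint-agent-1.6.2.tar.gz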

Extract Tomcat

[vagrant@pinpointNode1 ~]$ unzip apache-tomcat-8.5.14.zip
[vagrant@pinpointNode1 ~]$ mv apache-tomcat-8.5.14 pinpoint-collector-1.6.2
[vagrant@pinpointNode1 ~]$ cd pinpoint-collector-1.6.2/webapps
[vagrant@pinpointNode1 webapps]$ rm -rf *
[vagrant@pinpointNode1 webapps]$ unzip ~/pinpoint-collector-1.6.2.war -d ROOT          

Modify the configuration files

Modify server.xml

To avoid port conflicts, change the Tomcat port settings:

[vagrant@pinpointNode1 ~]$ cat pinpoint-collector-1.6.2/conf/server.xml
<?xml version="1.0" encoding="UTF-8"?>
<!--
  Licensed to the Apache Software Foundation (ASF) under one or more
  contributor license agreements.  See the NOTICE file distributed with
  this work for additional information regarding copyright ownership.
  The ASF licenses this file to You under the Apache License, Version 2.0
  (the "License"); you may not use this file except in compliance with
  the License.  You may obtain a copy of the License at

      http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License.
-->
<!-- Note:  A "Server" is not itself a "Container", so you may not
     define subcomponents such as "Valves" at this level.
     Documentation at /docs/config/server.html
 -->
<Server port="18005" shutdown="SHUTDOWN">
  <Listener className="org.apache.catalina.startup.VersionLoggerListener" />
  <!-- Security listener. Documentation at /docs/config/listeners.html
  <Listener className="org.apache.catalina.security.SecurityListener" />
  -->
  <!--APR library loader. Documentation at /docs/apr.html -->
  <Listener className="org.apache.catalina.core.AprLifecycleListener" SSLEngine="on" />
  <!-- Prevent memory leaks due to use of particular java/javax APIs-->
  <Listener className="org.apache.catalina.core.JreMemoryLeakPreventionListener" />
  <Listener className="org.apache.catalina.mbeans.GlobalResourcesLifecycleListener" />
  <Listener className="org.apache.catalina.core.ThreadLocalLeakPreventionListener" />

  <!-- Global JNDI resources
       Documentation at /docs/jndi-resources-howto.html
  -->
  <GlobalNamingResources>
    <!-- Editable user database that can also be used by
         UserDatabaseRealm to authenticate users
    -->
    <Resource name="UserDatabase" auth="Container"
              type="org.apache.catalina.UserDatabase"
              description="User database that can be updated and saved"
              factory="org.apache.catalina.users.MemoryUserDatabaseFactory"
              pathname="conf/tomcat-users.xml" />
  </GlobalNamingResources>

  <!-- A "Service" is a collection of one or more "Connectors" that share
       a single "Container" Note:  A "Service" is not itself a "Container",
       so you may not define subcomponents such as "Valves" at this level.
       Documentation at /docs/config/service.html
   -->
  <Service name="Catalina">

    <!--The connectors can use a shared executor, you can define one or more named thread pools-->
    <!--
    <Executor name="tomcatThreadPool" namePrefix="catalina-exec-"
        maxThreads="150" minSpareThreads="4"/>
    -->


    <!-- A "Connector" represents an endpoint by which requests are received
         and responses are returned. Documentation at :
         Java HTTP Connector: /docs/config/http.html
         Java AJP  Connector: /docs/config/ajp.html
         APR (HTTP/AJP) Connector: /docs/apr.html
         Define a non-SSL/TLS HTTP/1.1 Connector on port 8080
    -->
    <Connector port="18080" protocol="HTTP/1.1"
               connectionTimeout="20000"
               redirectPort="18443" />
    <!-- A "Connector" using the shared thread pool-->
    <!--
    <Connector executor="tomcatThreadPool"
               port="8080" protocol="HTTP/1.1"
               connectionTimeout="20000"
               redirectPort="8443" />
    -->
    <!-- Define a SSL/TLS HTTP/1.1 Connector on port 8443
         This connector uses the NIO implementation. The default
         SSLImplementation will depend on the presence of the APR/native
         library and the useOpenSSL attribute of the
         AprLifecycleListener.
         Either JSSE or OpenSSL style configuration may be used regardless of
         the SSLImplementation selected. JSSE style configuration is used below.
    -->
    <!--
    <Connector port="8443" protocol="org.apache.coyote.http11.Http11NioProtocol"
               maxThreads="150" SSLEnabled="true">
        <SSLHostConfig>
            <Certificate certificateKeystoreFile="conf/localhost-rsa.jks"
                         type="RSA" />
        </SSLHostConfig>
    </Connector>
    -->
    <!-- Define a SSL/TLS HTTP/1.1 Connector on port 8443 with HTTP/2
         This connector uses the APR/native implementation which always uses
         OpenSSL for TLS.
         Either JSSE or OpenSSL style configuration may be used. OpenSSL style
         configuration is used below.
    -->
    <!--
    <Connector port="8443" protocol="org.apache.coyote.http11.Http11AprProtocol"
               maxThreads="150" SSLEnabled="true" >
        <UpgradeProtocol className="org.apache.coyote.http2.Http2Protocol" />
        <SSLHostConfig>
            <Certificate certificateKeyFile="conf/localhost-rsa-key.pem"
                         certificateFile="conf/localhost-rsa-cert.pem"
                         certificateChainFile="conf/localhost-rsa-chain.pem"
                         type="RSA" />
        </SSLHostConfig>
    </Connector>
    -->

    <!-- Define an AJP 1.3 Connector on port 8009 -->
    <Connector port="18009" protocol="AJP/1.3" redirectPort="18443" />


    <!-- An Engine represents the entry point (within Catalina) that processes
         every request.  The Engine implementation for Tomcat stand alone
         analyzes the HTTP headers included with the request, and passes them
         on to the appropriate Host (virtual host).
         Documentation at /docs/config/engine.html -->

    <!-- You should set jvmRoute to support load-balancing via AJP ie :
    <Engine name="Catalina" defaultHost="localhost" jvmRoute="jvm1">
    -->
    <Engine name="Catalina" defaultHost="localhost">

      <!--For clustering, please take a look at documentation at:
          /docs/cluster-howto.html  (simple how to)
          /docs/config/cluster.html (reference documentation) -->
      <!--
      <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/>
      -->

      <!-- Use the LockOutRealm to prevent attempts to guess user passwords
           via a brute-force attack -->
      <Realm className="org.apache.catalina.realm.LockOutRealm">
        <!-- This Realm uses the UserDatabase configured in the global JNDI
             resources under the key "UserDatabase".  Any edits
             that are performed against this UserDatabase are immediately
             available for use by the Realm.  -->
        <Realm className="org.apache.catalina.realm.UserDatabaseRealm"
               resourceName="UserDatabase"/>
      </Realm>

      <Host name="localhost"  appBase="webapps"
            unpackWARs="true" autoDeploy="true">

        <!-- SingleSignOn valve, share authentication between web applications
             Documentation at: /docs/config/valve.html -->
        <!--
        <Valve className="org.apache.catalina.authenticator.SingleSignOn" />
        -->

        <!-- Access log processes all example.
             Documentation at: /docs/config/valve.html
             Note: The pattern used is equivalent to using pattern="common" -->
        <Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
               prefix="localhost_access_log" suffix=".txt"
               pattern="%h %l %u %t &quot;%r&quot; %s %b" />

      </Host>
    </Engine>
  </Service>
</Server>

Modify hbase.properties

Only the hbase.client.host and hbase.client.port parameters need to be changed; this file should be kept identical on every Collector node:

[vagrant@pinpointNode1 ~]$ cat pinpoint-collector-1.6.2/webapps/ROOT/WEB-INF/classes/hbase.properties
#ZooKeeper ensemble IP addresses
hbase.client.host=192.168.2.131,192.168.2.132,192.168.2.133
#ZooKeeper client port
hbase.client.port=2181

# hbase default:/hbase
hbase.zookeeper.znode.parent=/hbase

# hbase timeout option==================================================================================
# hbase default:true
hbase.ipc.client.tcpnodelay=true
# hbase default:60000
hbase.rpc.timeout=10000
# hbase default:Integer.MAX_VALUE
hbase.client.operation.timeout=10000

# hbase socket read timeout. default: 200000
hbase.ipc.client.socket.timeout.read=20000
# socket write timeout. hbase default: 600000
hbase.ipc.client.socket.timeout.write=60000

# ==================================================================================
# hbase client thread pool option
hbase.client.thread.max=64
hbase.client.threadPool.queueSize=5120
# prestartAllCoreThreads
hbase.client.threadPool.prestart=false

# enable hbase async operation. default: false
hbase.client.async.enable=false
# the max number of the buffered asyncPut ops for each region. default:10000
hbase.client.async.in.queuesize=10000
# periodic asyncPut ops flush time. default:100
hbase.client.async.flush.period.ms=100
# the max number of the retry attempts before dropping the request. default:10
hbase.client.async.max.retries.in.queue=10

Modify pinpoint-collector.properties

Change the following parameters in this file. On the other two Collector nodes, replace the collector-related IP addresses with each node's own IP (see the sed sketch after the full file listing below):

#IP address of this collector node
collector.tcpListenIp=192.168.2.131
#TCP port the collector listens on (default 9994)
collector.tcpListenPort=9994
#IP address of this collector node
collector.udpStatListenIp=192.168.2.131
#UDP port the collector listens on (default 9995)
collector.udpStatListenPort=9995
#IP address of this collector node
collector.udpSpanListenIp=192.168.2.131
#UDP port the collector listens on (default 9996)
collector.udpSpanListenPort=9996
#set to true because a collector cluster is being deployed
cluster.enable=true
#ZooKeeper ensemble addresses
cluster.zookeeper.address=192.168.2.131,192.168.2.132,192.168.2.133
cluster.zookeeper.sessiontimeout=30000
#IP address of this collector node
cluster.listen.ip=192.168.2.131
#cluster port the collector listens on
cluster.listen.port=9090
[vagrant@pinpointNode1 ~]$ cat pinpoint-collector-1.6.2/webapps/ROOT/WEB-INF/classes/pinpoint-collector.properties
# tcp listen ip and port
collector.tcpListenIp=192.168.2.131
collector.tcpListenPort=9994

# number of tcp worker threads
collector.tcpWorkerThread=8
# capacity of tcp worker queue
collector.tcpWorkerQueueSize=1024
# monitoring for tcp worker
collector.tcpWorker.monitor=true

# udp listen ip and port
collector.udpStatListenIp=192.168.2.131
collector.udpStatListenPort=9995

# configure l4 ip address to ignore health check logs
collector.l4.ip=

# number of udp statworker threads
collector.udpStatWorkerThread=8
# capacity of udp statworker queue
collector.udpStatWorkerQueueSize=64
# monitoring for udp stat worker
collector.udpStatWorker.monitor=true

collector.udpStatSocketReceiveBufferSize=4194304


# span listen port ---------------------------------------------------------------------
collector.udpSpanListenIp=192.168.2.131
collector.udpSpanListenPort=9996

# type of udp spanworker type
#collector.udpSpanWorkerType=DEFAULT_EXECUTOR
# number of udp spanworker threads
collector.udpSpanWorkerThread=32
# capacity of udp spanworker queue
collector.udpSpanWorkerQueueSize=256
# monitoring for udp span worker
collector.udpSpanWorker.monitor=true

collector.udpSpanSocketReceiveBufferSize=4194304

# change OS level read/write socket buffer size (for linux)
#sudo sysctl -w net.core.rmem_max=
#sudo sysctl -w net.core.wmem_max=
# check current values using:
#$ /sbin/sysctl -a | grep -e rmem -e wmem

# number of agent event worker threads
collector.agentEventWorker.threadSize=4
# capacity of agent event worker queue
collector.agentEventWorker.queueSize=1024

statistics.flushPeriod=1000

# -------------------------------------------------------------------------------------------------
# The cluster related options are used to establish connections between the agent, collector, and web in order to send/receive data between them in real time.
# You may enable additional features using this option (Ex : RealTime Active Thread Chart).
# -------------------------------------------------------------------------------------------------
# Usage : Set the following options for collector/web components that reside in the same cluster in order to enable this feature.
# 1. cluster.enable (pinpoint-web.properties, pinpoint-collector.properties) - "true" to enable
# 2. cluster.zookeeper.address (pinpoint-web.properties, pinpoint-collector.properties) - address of the ZooKeeper instance that will be used to manage the cluster
# 3. cluster.web.tcp.port (pinpoint-web.properties) - any available port number (used to establish connection between web and collector)
# -------------------------------------------------------------------------------------------------
# Please be aware of the following:
#1. If the network between web, collector, and the agents are not stable, it is advisable not to use this feature.
#2. We recommend using the cluster.web.tcp.port option. However, in cases where the collector is unable to establish connection to the web, you may reverse this and make the web establish connection to the collector.
#   In this case, you must set cluster.connect.address (pinpoint-web.properties); and cluster.listen.ip, cluster.listen.port (pinpoint-collector.properties) accordingly.
cluster.enable=true
cluster.zookeeper.address=192.168.2.131,192.168.2.132,192.168.2.133
cluster.zookeeper.sessiontimeout=30000
cluster.listen.ip=192.168.2.131
cluster.listen.port=9090

#collector.admin.password=
#collector.admin.api.rest.active=
#collector.admin.api.jmx.active=

collector.spanEvent.sequence.limit=10000

# span.binary format compatibility = v1 or v2 or dualWrite
# span format v2 : https://github.com/naver/pinpoint/issues/1819
collector.span.format.compatibility.version=v2

# stat handling compatibility = v1 or v2 or dualWrite
# AgentStatV2 table : https://github.com/naver/pinpoint/issues/1533
collector.stat.format.compatibility.version=v2
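
On pinpointNode2 and pinpointNode3 only the listen-related IPs change; a minimal sed sketch for pinpointNode2 (the ZooKeeper address list must stay as-is, so only the listen keys are rewritten; use 192.168.2.133 on pinpointNode3):

# run on pinpointNode2 from the home directory
CONF=pinpoint-collector-1.6.2/webapps/ROOT/WEB-INF/classes/pinpoint-collector.properties
sed -i \
  -e 's/^collector.tcpListenIp=.*/collector.tcpListenIp=192.168.2.132/' \
  -e 's/^collector.udpStatListenIp=.*/collector.udpStatListenIp=192.168.2.132/' \
  -e 's/^collector.udpSpanListenIp=.*/collector.udpSpanListenIp=192.168.2.132/' \
  -e 's/^cluster.listen.ip=.*/cluster.listen.ip=192.168.2.132/' \
  "$CONF"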

Start Pinpoint Collector

Once all three Collector nodes are configured, start the Collector:

[vagrant@pinpointNode1 ~]$ cd pinpoint-collector-1.6.2/bin
[vagrant@pinpointNode1 bin]$ chmod +x catalina.sh shutdown.sh startup.sh
[vagrant@pinpointNode1 bin]$ ./startup.sh

Make sure HBase and ZooKeeper are running normally before starting the Pinpoint Collector, otherwise Connection Refused errors will be reported.
After startup, check the Collector log:

[vagrant@pinpointNode1 ~]$ cd pinpoint-collector-1.6.2/logs
[vagrant@pinpointNode1 logs]$ tail -f catalina.out
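
A quick check that the Collector is actually listening on its ports (assuming ss from iproute2 is available):

# 9994 (tcp), 9995/9996 (udp) and the cluster port 9090 should all show up
ss -lntu | grep -E '9994|9995|9996|9090'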

Install Pinpoint Web

Extract Tomcat

[vagrant@pinpointNode1 ~]$ unzip apache-tomcat-8.5.14.zip
[vagrant@pinpointNode1 ~]$ mv apache-tomcat-8.5.14 pinpoint-web-1.6.2
[vagrant@pinpointNode1 ~]$ cd pinpoint-web-1.6.2/webapps
[vagrant@pinpointNode1 webapps]$ rm -rf *
[vagrant@pinpointNode1 webapps]$ unzip ~/pinpoint-web-1.6.2.war -d ROOT 

Modify the configuration files

Modify server.xml

To avoid port conflicts, change the Tomcat port settings:

<?xml version="1.0" encoding="UTF-8"?>
<!--
  Licensed to the Apache Software Foundation (ASF) under one or more
  contributor license agreements.  See the NOTICE file distributed with
  this work for additional information regarding copyright ownership.
  The ASF licenses this file to You under the Apache License, Version 2.0
  (the "License"); you may not use this file except in compliance with
  the License.  You may obtain a copy of the License at

      http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License.
-->
<!-- Note:  A "Server" is not itself a "Container", so you may not
     define subcomponents such as "Valves" at this level.
     Documentation at /docs/config/server.html
 -->
<Server port="19005" shutdown="SHUTDOWN">
  <Listener className="org.apache.catalina.startup.VersionLoggerListener" />
  <!-- Security listener. Documentation at /docs/config/listeners.html
  <Listener className="org.apache.catalina.security.SecurityListener" />
  -->
  <!--APR library loader. Documentation at /docs/apr.html -->
  <Listener className="org.apache.catalina.core.AprLifecycleListener" SSLEngine="on" />
  <!-- Prevent memory leaks due to use of particular java/javax APIs-->
  <Listener className="org.apache.catalina.core.JreMemoryLeakPreventionListener" />
  <Listener className="org.apache.catalina.mbeans.GlobalResourcesLifecycleListener" />
  <Listener className="org.apache.catalina.core.ThreadLocalLeakPreventionListener" />

  <!-- Global JNDI resources
       Documentation at /docs/jndi-resources-howto.html
  -->
  <GlobalNamingResources>
    <!-- Editable user database that can also be used by
         UserDatabaseRealm to authenticate users
    -->
    <Resource name="UserDatabase" auth="Container"
              type="org.apache.catalina.UserDatabase"
              description="User database that can be updated and saved"
              factory="org.apache.catalina.users.MemoryUserDatabaseFactory"
              pathname="conf/tomcat-users.xml" />
  </GlobalNamingResources>

  <!-- A "Service" is a collection of one or more "Connectors" that share
       a single "Container" Note:  A "Service" is not itself a "Container",
       so you may not define subcomponents such as "Valves" at this level.
       Documentation at /docs/config/service.html
   -->
  <Service name="Catalina">

    <!--The connectors can use a shared executor, you can define one or more named thread pools-->
    <!--
    <Executor name="tomcatThreadPool" namePrefix="catalina-exec-"
        maxThreads="150" minSpareThreads="4"/>
    -->


    <!-- A "Connector" represents an endpoint by which requests are received
         and responses are returned. Documentation at :
         Java HTTP Connector: /docs/config/http.html
         Java AJP  Connector: /docs/config/ajp.html
         APR (HTTP/AJP) Connector: /docs/apr.html
         Define a non-SSL/TLS HTTP/1.1 Connector on port 8080
    -->
    <Connector port="19080" protocol="HTTP/1.1"
               connectionTimeout="20000"
               redirectPort="19443" />
    <!-- A "Connector" using the shared thread pool-->
    <!--
    <Connector executor="tomcatThreadPool"
               port="8080" protocol="HTTP/1.1"
               connectionTimeout="20000"
               redirectPort="8443" />
    -->
    <!-- Define a SSL/TLS HTTP/1.1 Connector on port 8443
         This connector uses the NIO implementation. The default
         SSLImplementation will depend on the presence of the APR/native
         library and the useOpenSSL attribute of the
         AprLifecycleListener.
         Either JSSE or OpenSSL style configuration may be used regardless of
         the SSLImplementation selected. JSSE style configuration is used below.
    -->
    <!--
    <Connector port="8443" protocol="org.apache.coyote.http11.Http11NioProtocol"
               maxThreads="150" SSLEnabled="true">
        <SSLHostConfig>
            <Certificate certificateKeystoreFile="conf/localhost-rsa.jks"
                         type="RSA" />
        </SSLHostConfig>
    </Connector>
    -->
    <!-- Define a SSL/TLS HTTP/1.1 Connector on port 8443 with HTTP/2
         This connector uses the APR/native implementation which always uses
         OpenSSL for TLS.
         Either JSSE or OpenSSL style configuration may be used. OpenSSL style
         configuration is used below.
    -->
    <!--
    <Connector port="8443" protocol="org.apache.coyote.http11.Http11AprProtocol"
               maxThreads="150" SSLEnabled="true" >
        <UpgradeProtocol className="org.apache.coyote.http2.Http2Protocol" />
        <SSLHostConfig>
            <Certificate certificateKeyFile="conf/localhost-rsa-key.pem"
                         certificateFile="conf/localhost-rsa-cert.pem"
                         certificateChainFile="conf/localhost-rsa-chain.pem"
                         type="RSA" />
        </SSLHostConfig>
    </Connector>
    -->

    <!-- Define an AJP 1.3 Connector on port 8009 -->
    <Connector port="19009" protocol="AJP/1.3" redirectPort="19443" />


    <!-- An Engine represents the entry point (within Catalina) that processes
         every request.  The Engine implementation for Tomcat stand alone
         analyzes the HTTP headers included with the request, and passes them
         on to the appropriate Host (virtual host).
         Documentation at /docs/config/engine.html -->

    <!-- You should set jvmRoute to support load-balancing via AJP ie :
    <Engine name="Catalina" defaultHost="localhost" jvmRoute="jvm1">
    -->
    <Engine name="Catalina" defaultHost="localhost">

      <!--For clustering, please take a look at documentation at:
          /docs/cluster-howto.html  (simple how to)
          /docs/config/cluster.html (reference documentation) -->
      <!--
      <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/>
      -->

      <!-- Use the LockOutRealm to prevent attempts to guess user passwords
           via a brute-force attack -->
      <Realm className="org.apache.catalina.realm.LockOutRealm">
        <!-- This Realm uses the UserDatabase configured in the global JNDI
             resources under the key "UserDatabase".  Any edits
             that are performed against this UserDatabase are immediately
             available for use by the Realm.  -->
        <Realm className="org.apache.catalina.realm.UserDatabaseRealm"
               resourceName="UserDatabase"/>
      </Realm>

      <Host name="localhost"  appBase="webapps"
            unpackWARs="true" autoDeploy="true">

        <!-- SingleSignOn valve, share authentication between web applications
             Documentation at: /docs/config/valve.html -->
        <!--
        <Valve className="org.apache.catalina.authenticator.SingleSignOn" />
        -->

        <!-- Access log processes all example.
             Documentation at: /docs/config/valve.html
             Note: The pattern used is equivalent to using pattern="common" -->
        <Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
               prefix="localhost_access_log" suffix=".txt"
               pattern="%h %l %u %t &quot;%r&quot; %s %b" />

      </Host>
    </Engine>
  </Service>
</Server>

The HTTP port configured here is 19080; once Pinpoint Web is installed, the web UI can be reached through this port in a browser.

Modify hbase.properties

Change the hbase.client.host and hbase.client.port parameters in this file:

[vagrant@pinpointNode2 ~]$ cat pinpoint-web-1.6.2/webapps/ROOT/WEB-INF/classes/hbase.properties
#ZooKeeper ensemble IP addresses
hbase.client.host=192.168.2.131,192.168.2.132,192.168.2.133
#ZooKeeper client port
hbase.client.port=2181

# hbase default:/hbase
hbase.zookeeper.znode.parent=/hbase

# hbase timeout option==================================================================================
# hbase default:true
hbase.ipc.client.tcpnodelay=true
# hbase default:60000
hbase.rpc.timeout=10000
# hbase default:Integer.MAX_VALUE
hbase.client.operation.timeout=10000

# hbase socket read timeout. default: 200000
hbase.ipc.client.socket.timeout.read=20000
# socket write timeout. hbase default: 600000
hbase.ipc.client.socket.timeout.write=30000

#==================================================================================
# hbase client thread pool option
hbase.client.thread.max=64
hbase.client.threadPool.queueSize=5120
# prestartAllCoreThreads
hbase.client.threadPool.prestart=false

#==================================================================================
# hbase parallel scan options
hbase.client.parallel.scan.enable=true
hbase.client.parallel.scan.maxthreads=64
hbase.client.parallel.scan.maxthreadsperscan=16

Modify pinpoint-web.properties

The modified configuration file is shown below:

[vagrant@pinpointNode2 ~]$ cat pinpoint-web-1.6.2/webapps/ROOT/WEB-INF/classes/pinpoint-web.properties
# -------------------------------------------------------------------------------------------------
# The cluster related options are used to establish connections between the agent, collector, and web in order to send/receive data between them in real time.
# You may enable additional features using this option (Ex : RealTime Active Thread Chart).
# -------------------------------------------------------------------------------------------------
# Usage : Set the following options for collector/web components that reside in the same cluster in order to enable this feature.
# 1. cluster.enable (pinpoint-web.properties, pinpoint-collector.properties) - "true" to enable
# 2. cluster.zookeeper.address (pinpoint-web.properties, pinpoint-collector.properties) - address of the ZooKeeper instance that will be used to manage the cluster
# 3. cluster.web.tcp.port (pinpoint-web.properties) - any available port number (used to establish connection between web and collector)
# -------------------------------------------------------------------------------------------------
# Please be aware of the following:
#1. If the network between web, collector, and the agents are not stable, it is advisable not to use this feature.
#2. We recommend using the cluster.web.tcp.port option. However, in cases where the collector is unable to establish connection to the web, you may reverse this and make the web establish connection to the collector.
#   In this case, you must set cluster.connect.address (pinpoint-web.properties); and cluster.listen.ip, cluster.listen.port (pinpoint-collector.properties) accordingly.
cluster.enable=true
#TCP port of Pinpoint Web (default 9997)
cluster.web.tcp.port=9997
cluster.zookeeper.address=192.168.2.131,192.168.2.132,192.168.2.133
cluster.zookeeper.sessiontimeout=30000
cluster.zookeeper.retry.interval=60000
#IP:port of each node in the Pinpoint Collector cluster
cluster.connect.address=192.168.2.131:9090,192.168.2.132:9090,192.168.2.133:9090

# FIXME - should be removed for proper authentication
admin.password=admin

#log site link (guide url : https://github.com/naver/pinpoint/blob/master/doc/per-request_feature_guide.md)
#log.enable=false
#log.page.url=
#log.button.name=

# Configuration
# Flag to send usage information (button click counts/order) to Google Analytics
# https://github.com/naver/pinpoint/wiki/FAQ#why-do-i-see-ui-send-requests-to-httpwwwgoogle-analyticscomcollect
config.sendUsage=true
config.editUserInfo=true
config.openSource=true
config.show.activeThread=true
config.show.activeThreadDump=true
config.show.inspector.dataSource=true
config.enable.activeThreadDump=true

web.hbase.selectSpans.limit=500
web.hbase.selectAllSpans.limit=500

web.activethread.activeAgent.duration.days=7

# span.binary format compatibility = v1 or v2 or compatibilityMode
# span format v2 : https://github.com/naver/pinpoint/issues/1819
web.span.format.compatibility.version=compatibilityMode

# stat handling compatibility = v1 or v2 or compatibilityMode
# AgentStatV2 table : https://github.com/naver/pinpoint/issues/1533
web.stat.format.compatibility.version=compatibilityMode

Start Pinpoint Web

[vagrant@pinpointNode2 ~]$ cd pinpoint-web-1.6.2/bin
[vagrant@pinpointNode2 bin]$ chmod +x catalina.sh shutdown.sh startup.sh
[vagrant@pinpointNode2 bin]$ ./startup.sh

After startup, check the web log:

[vagrant@pinpointNode2 ~]$ cd pinpoint-web-1.6.2/logs
[vagrant@pinpointNode2 logs]$ tail -f catalina.out

After a successful start, open the web UI in a browser: http://192.168.2.132:19080/

Configure Nginx as a Proxy for Pinpoint Collector


Edit the Nginx configuration file:

[vagrant@pinpointNode1 ~]$ vim /etc/nginx/nginx.conf

Add the following content:

stream {
        proxy_protocol_timeout 120s;
        log_format  main  '$remote_addr $remote_port - [$time_local] '
                          '$status $bytes_sent $protocol $server_addr $server_port'
                          '$proxy_protocol_addr $proxy_protocol_port';
        access_log /var/log/nginx/access.log main;

        upstream 9994_tcp_upstreams {
                #least_time first_byte;
                server 192.168.2.131:9994 fail_timeout=15s;
                server 192.168.2.132:9994 fail_timeout=15s;
                server 192.168.2.133:9994 fail_timeout=15s;
        }

        upstream 9995_udp_upstreams {
                #least_time first_byte;
                server 192.168.2.131:9995;
                server 192.168.2.132:9995;
                server 192.168.2.133:9995;
        }

        upstream 9996_udp_upstreams {
                #least_time first_byte;
                server 192.168.2.131:9996;
                server 192.168.2.132:9996;
                server 192.168.2.133:9996;
        }

        server {
                listen 19994;
                proxy_pass 9994_tcp_upstreams;
                #proxy_timeout 1s;
                proxy_connect_timeout 5s;
        }

        server {
                listen 19995 udp;
                proxy_pass 9995_udp_upstreams;
                proxy_timeout 1s;
                #proxy_responses 1;

        }

        server {
                listen 19996 udp;
                proxy_pass 9996_udp_upstreams;
                proxy_timeout 1s;
                #proxy_responses 1;
        }
}

Save and exit after adding the content.
Reload Nginx to apply the configuration:

[vagrant@pinpointNode1 ~]$ nginx -s reload

Once this is done, ports 19994 (TCP), 19995 (UDP), and 19996 (UDP) are listening on this node, proxying ports 9994, 9995, and 9996 of the individual Pinpoint Collector nodes respectively; a quick check is shown below.
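
A quick check on the Nginx node:

# 19994 should appear under tcp, 19995 and 19996 under udp
ss -lntu | grep -E '19994|19995|19996'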

Install Pinpoint Agent


To conveniently verify the cluster that has just been built, a Tomcat instance on the pinpointNode3 node is used for testing.

Extract the agent package

[vagrant@pinpointNode3 ~]$ mkdir pinpoint-agent-1.6.2
[vagrant@pinpointNode3 ~]$ cd pinpoint-agent-1.6.2/
[vagrant@pinpointNode3 pinpoint-agent-1.6.2]$ cp ~/pinpoint-agent-1.6.2.tar.gz .
[vagrant@pinpointNode3 pinpoint-agent-1.6.2]$ tar -xzvf pinpoint-agent-1.6.2.tar.gz
[vagrant@pinpointNode3 pinpoint-agent-1.6.2]$ rm pinpoint-agent-1.6.2.tar.gz

Configure the Pinpoint Agent

Modify the following entries in the Pinpoint Agent configuration file pinpoint.config:

###########################################################
# Collector server                                        #
###########################################################
# Nginx is used as a proxy in this deployment, so set this to the Nginx IP address
profiler.collector.ip=192.168.2.131

# placeHolder support "${key}"
profiler.collector.span.ip=${profiler.collector.ip}
# Nginx port that proxies the collector's 9996 port
profiler.collector.span.port=19996

# placeHolder support "${key}"
profiler.collector.stat.ip=${profiler.collector.ip}
# Nginx port that proxies the collector's 9995 port
profiler.collector.stat.port=19995

# placeHolder support "${key}"
profiler.collector.tcp.ip=${profiler.collector.ip}
# Nginx port that proxies the collector's 9994 port
profiler.collector.tcp.port=19994

Verify with Tomcat

[vagrant@pinpointNode3 ~]$ unzip apache-tomcat-8.5.14.zip
[vagrant@pinpointNode3 ~]$ cd apache-tomcat-8.5.14/bin
[vagrant@pinpointNode3 bin]$ chmod +x catalina.sh shutdown.sh startup.sh
[vagrant@pinpointNode3 bin]$ vim catalina.sh

Add the following at the top of catalina.sh:

CATALINA_OPTS="$CATALINA_OPTS -javaagent:/home/vagrant/pinpoint-agent-1.6.2/pinpoint-bootstrap-1.6.2.jar"
CATALINA_OPTS="$CATALINA_OPTS -Dpinpoint.applicationName=test-tomcat"
CATALINA_OPTS="$CATALINA_OPTS -Dpinpoint.agentId=tomcat-01"

Here the first line points to the location of the pinpoint-bootstrap jar; pinpoint.applicationName is the name of the monitored application, and pinpoint.agentId is the ID of the monitored application instance. pinpoint.applicationName does not need to be unique, but pinpoint.agentId must be unique; instances with the same pinpoint.applicationName but different pinpoint.agentId values are treated as a cluster of the same application (see the example below).
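
For example, a second (hypothetical) Tomcat instance belonging to the same application would keep the applicationName and change only the agentId:

CATALINA_OPTS="$CATALINA_OPTS -javaagent:/home/vagrant/pinpoint-agent-1.6.2/pinpoint-bootstrap-1.6.2.jar"
CATALINA_OPTS="$CATALINA_OPTS -Dpinpoint.applicationName=test-tomcat"
# only the agentId differs; Pinpoint then shows both instances under the test-tomcat application
CATALINA_OPTS="$CATALINA_OPTS -Dpinpoint.agentId=tomcat-02"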

Start Tomcat

[vagrant@pinpointNode3 ~]$ cd apache-tomcat-8.5.14/bin
[vagrant@pinpointNode3 bin]$ ./startup.sh

Once startup completes, access Tomcat and then refresh the Pinpoint Web UI to see the application's call information.
Installation complete.
