Table of Contents
1. Introduction
Logs fall broadly into system logs, application logs, and security logs. Operations and development staff use them to learn about server hardware and software and to track down configuration errors and their causes. Regular log analysis also reveals server load, performance, and security posture, so problems can be corrected promptly.
Logs are usually scattered across many machines. If you manage dozens or hundreds of servers and still inspect logs the traditional way, logging in to each machine in turn, the process is tedious and inefficient. The pressing need is centralized log management, for example with open-source syslog, collecting and aggregating the logs from every server.
Once logs are centralized, searching and aggregating them becomes the next headache. Linux commands such as grep, awk, and wc handle basic retrieval and counting, but for more demanding queries, sorting, and statistics across a large fleet of machines they quickly fall short.
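As a concrete example of how far the classic toolchain goes, here is a typical ad-hoc pipeline that counts ERROR lines per hour; the log file and its format are invented for illustration:

```shell
# Create a small stand-in log file (timestamps and messages are invented).
cat > sample.log <<'EOF'
2018-03-29 10:01:12 ERROR timeout connecting to db
2018-03-29 10:05:43 INFO request ok
2018-03-29 11:02:01 ERROR disk full
2018-03-29 11:07:55 ERROR disk full
EOF

# grep selects the ERROR lines; awk buckets them by date and hour and counts.
grep ERROR sample.log \
  | awk '{split($2, t, ":"); count[$1 " " t[1]]++} END {for (h in count) print h, count[h]}' \
  | sort
# → 2018-03-29 10 1
#   2018-03-29 11 2
```

This works fine on one machine; the pain starts when the same question has to be answered across a hundred hosts, which is exactly what ELK centralizes.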
The open-source real-time log analysis platform ELK solves all of the above. ELK consists of three open-source tools: Elasticsearch, Logstash, and Kibana.
- Elasticsearch is an open-source distributed search engine. Its features include: distributed operation, zero configuration, automatic discovery, automatic index sharding, index replicas, a RESTful interface, multiple data sources, and automatic search load balancing.
- Logstash is a fully open-source tool that collects and parses your logs and stores them for later use (for example, searching).
- Kibana is also an open-source, free tool. It provides a friendly web interface for log analysis on top of Logstash and Elasticsearch, helping you aggregate, analyze, and search important log data.
Overall data flow:
Figure 1.1 ELK data flow diagram
Because the number of subsystems and instances far exceeds the number of ES nodes, Kafka is added as a buffer between ES and the Logstash agents that collect the logs.
Figure 1.2 Adding Kafka as a buffer between Logstash and ES
The volume of log data is very large, so ES keeps only one week of data; anything older is moved to Hive. Log sizes are shown in Figure 1.3.
Figure 1.3 ES Puhui log growth
Puhui's log data comes mainly from three sources, each handled with a different method:
Security logs: stored in an Oracle database.
Oracle -> Logstash -> Kafka -> ES -> Kibana
Application logs: stored as txt files in the application's local directory.
txt -> Logstash (filter) -> Kafka -> ES -> Kibana
Nginx access logs: pushed straight to Kafka by the Nginx Lua plugin.
Nginx + Lua -> Kafka -> ES -> Kibana
1.1. Logstash
Logstash is a data-processing tool whose main purpose here is log analysis. Data is first sent to Logstash, which filters and formats it (converting it to JSON), then forwards it to Elasticsearch for storage and search indexing.
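A minimal pipeline configuration for the flow just described might look like the sketch below. This is illustrative only: the broker address, topic name, index pattern, and the assumption that the payload is JSON are placeholders, not values taken from this deployment.

```conf
# Hypothetical Logstash pipeline: read from Kafka, parse, write to Elasticsearch.
input {
  kafka {
    bootstrap_servers => "192.168.0.6:9092"   # placeholder broker
    topics            => ["ELK.TEST"]         # placeholder topic
  }
}
filter {
  json { source => "message" }                # assumes the payload is JSON
}
output {
  elasticsearch {
    hosts => ["http://192.168.0.1:9200"]
    index => "app-log-%{+YYYY.MM.dd}"         # placeholder index pattern
  }
}
```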
Figure 1.4 Logstash data processing flow
1.2. ElasticSearch
Elasticsearch is a search server based on Lucene. It provides a distributed, multi-tenant full-text search engine behind a RESTful web interface. Elasticsearch is developed in Java, released as open source under the Apache License, and is currently a popular enterprise search engine. It is designed for cloud environments and delivers real-time search with stability, reliability, speed, and easy installation.
1.3. Kibana
Kibana is an open-source analytics and visualization platform designed to work with Elasticsearch. You can search, view, and interact with data stored in Elasticsearch indices, and Kibana makes it easy to present advanced analysis and visualizations through a variety of charts, tables, and maps.
1.4. Kafka
Kafka is a high-throughput distributed publish-subscribe messaging system that can handle all the activity-stream data of a consumer-scale website. Activity data (page views, searches, and other user actions) is a key ingredient of many social features on the modern web, and its throughput requirements have traditionally been met with log processing and log aggregation. For data that, like Hadoop log data, feeds offline analysis systems but also needs real-time processing, Kafka is a viable solution: it aims to unify online and offline message handling via Hadoop's parallel loading mechanism, while providing real-time consumption across a cluster.
1.5. Zookeeper
ZooKeeper is a distributed, open-source coordination service for distributed applications. It is an open-source implementation of Google's Chubby and a key component of Hadoop and HBase. It provides consistency services for distributed applications, including configuration maintenance, naming, distributed synchronization, and group services.
Figure 1.5 How ZooKeeper works
1.6. Lua
Lua is a compact scripting language designed to be embedded in applications, giving them flexible extension and customization. Lua scripts are easily called from C/C++ code and can call C/C++ functions in turn, which makes Lua widely applicable: not only as an extension language, but also as a plain configuration format in place of XML or INI files, and one that is easier to read and maintain.
1.7. Docker
Docker is an open-source application container engine that lets developers package an application with its dependencies into a portable container and publish it to any popular Linux machine; it can also provide virtualization. Containers are fully sandboxed and have no interfaces to one another.
2. Local Environment Setup
All ELK components depend only on Java to run, so deployment is relatively simple.
This chapter first installs ELK directly on local machines; the environment will later be migrated to Docker, and eventually managed centrally with Mesos or OpenShift.
The production environment has no internet access, so download the packages to a local directory and upload them. The tech team has pre-packaged ZK and Kafka builds that are worth considering.
Most components depend on Java, so install Java before the other components. Component list and download locations:
JDK
http://www.oracle.com/technetwork/java/javase/downloads/index.html
ElasticSearch, Logstash, Kibana, X-Pack
https://www.elastic.co/cn/downloads
Kafka:
http://kafka.apache.org/downloads
Zookeeper
https://www.apache.org/dyn/closer.cgi/zookeeper/
本地資源:
http://mirrors-ph.paic.com.cn//repo/elk/
Local environment: the test environment.
All components are installed directly on local machines to verify that they work.
Production environment:
Hostname | IP | Spec | OS | User | Components |
Es-node01 | 192.168.0.1 | 8 cores/32.0GB | Centos7.2 | elastic | ES-node01,ansible |
Es-node02 | 192.168.0.2 | 8 cores/32.0GB | Centos7.2 | elastic | ES-node02 |
Es-node03 | 192.168.0.3 | 8 cores/32.0GB | Centos7.2 | elastic | ES-node03 |
Es-node04 | 192.168.0.4 | 8 cores/32.0GB | Centos7.2 | elastic | ES-node04 |
Es-node05 | 192.168.0.5 | 8 cores/32.0GB | Centos7.2 | elastic | ES-node05 |
2.1. System Initialization
Add the group (a fixed GID keeps things consistent across hosts):
groupadd -g 4567 elk
ansible kafka -m shell -a 'groupadd -g 4567 elk'
Add the user:
useradd -G elk -u 1234 kafka
ansible kafka -m shell -a 'useradd -G elk -u 1234 kafka'
Initialize passwords:
echo ***** | passwd kafka --stdin
ansible kafka -m shell -a 'echo ****** | passwd kafka --stdin'
ansible es -m shell -a 'echo ****** | passwd elastic --stdin'
ansible logstash -m shell -a 'echo ****** | passwd logstash --stdin'
Create Kafka data/log paths:
mkdir /mnt/data{1..4}/kafka/data -p
ansible kafka -m shell -a 'mkdir /mnt/data{1..4}/kafka/data -p'
ansible kafka -m shell -a 'mkdir /mnt/data1/kafka/logs -p'
Create ZooKeeper data/log paths:
mkdir /mnt/data1/zookeeper/logs -p
ansible kafka -m shell -a 'mkdir /mnt/data1/zookeeper/logs -p'
ansible kafka -m shell -a 'mkdir /mnt/data{1..4}/zookeeper/data -p'
Create the software path:
mkdir /var/soft -p
ansible kafka -m shell -a 'mkdir /var/soft -p'
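The directory layout above relies on bash brace expansion. The sketch below reproduces it under a scratch root so the expansion can be verified without root privileges; in production the commands run against / exactly as shown above:

```shell
# Scratch root stands in for / so this is safe to run anywhere.
ROOT=$(mktemp -d)

# {1..4} expands to data1 data2 data3 data4 in a single mkdir call.
mkdir -p "$ROOT"/mnt/data{1..4}/kafka/data
mkdir -p "$ROOT"/mnt/data1/kafka/logs
mkdir -p "$ROOT"/mnt/data{1..4}/zookeeper/data
mkdir -p "$ROOT"/mnt/data1/zookeeper/logs

# Four data directories per service should now exist.
ls -d "$ROOT"/mnt/data*/kafka/data
```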
2.2. JDK Installation
1. Log in to the server and create the service user:
$ ssh [email protected]
[root@es-node01 ~]# groupadd elk
[root@es-node01 ~]# useradd elastic -b /wls/ -G elk
[root@es-node01 ~]# echo ****** | passwd elastic --stdin
Changing password for user elastic.
passwd: all authentication tokens updated successfully.
Repeat these steps to add the user on every host.
2. Switch user.
[root@es-node01 ~]# su - elastic |
3. Install the JDK.
Download the JDK package and upload it to the server. The package is large, so use the jump host; the upload itself is omitted here.
[root@es-node01 ~]# wget -O /var/soft/jdk1.8.0_131.tar.gz http://mirrors-ph.paic.com.cn//repo/elk/jdk1.8.0_131.tar.gz
Enter the installation directory, extract the package, and set the PATH:
[root@es-node01 ~]$ cd /var/soft
[root@es-node01 ~]$ tar -zxvf jdk1.8.0_131.tar.gz
[root@es-node01 ~]$ vim ~/.bash_profile   # adjust PATH so it applies at every login
JAVA_HOME=/wls/elk/jdk1.8.0_131
PATH=$JAVA_HOME/bin:$PATH
export JAVA_HOME PATH
[root@es-node01 ~]$ source ~/.bash_profile
[root@es-node01 ~]$ java -version   # confirm the Java version is usable
java version "1.8.0_131"
Java(TM) SE Runtime Environment (build 1.8.0_131-b11)
Java HotSpot(TM) 64-Bit Server VM (build 25.131-b11, mixed mode)
[root@es-node01 ~]$ which java   # confirm it resolves to the intended Java
/wls/elk/jdk1.8.0_131/bin/java
2.3. ElasticSearch
Elasticsearch depends only on Java: extract it and edit the configuration, and you are done.
ES has many useful plugins, but installing them offline is genuinely painful, so we skip them; for professional ops work like ours, curl against the API does the job.
1. Upload and extract
[root@es-node01 ~]$ wget -O /var/soft/elasticsearch-6.0.0.tar.gz http://mirrors-ph.paic.com.cn//repo/elk/elasticsearch-6.0.0.tar.gz
[root@es-node01 ~]$ cd /var/soft
[root@es-node01 ~]$ tar -zxvf elasticsearch-6.0.0.tar.gz
2. Edit the configuration
A reference configuration file is available at:
http://mirrors-ph.paic.com.cn//repo/elk/conf/elasticsearch.yml
Check the system layout and pick the data storage locations.
Create the directories:
[root@es-node01 soft]# df -Th
/dev/mapper/VolGroup1-LVdata1 xfs 500G 33M 500G 1% /mnt/data2
/dev/mapper/VolGroup2-LVdata2 xfs 500G 33M 500G 1% /mnt/data1
[root@es-node01 soft]# mkdir /mnt/data{1..2}/elasticsearch/data -p
[root@es-node01 soft]# mkdir /mnt/data1/elasticsearch/logs -p
[root@es-node01 soft]# chown elastic:elk /mnt/data* -R
Edit the configuration:
[root@es-node01 soft]# cat config/elasticsearch.yml
cluster.name: ph-elk          # cluster name
node.name: es-node01          # host name
path.data: /mnt/data1/elasticsearch/data,/mnt/data2/elasticsearch/data   # data paths, comma-separated
path.logs: /mnt/data1/elasticsearch/logs   # log path
network.host: 0.0.0.0         # addresses to listen on
http.port: 9200               # port
discovery.zen.ping.unicast.hosts: ["192.168.0.1","192.168.0.2","192.168.0.3","192.168.0.4","192.168.0.5"]   # the other nodes
xpack.ssl.key: certs/${node.name}/${node.name}.key   # node private key
xpack.ssl.certificate: certs/${node.name}/${node.name}.crt   # node certificate
xpack.ssl.certificate_authorities: certs/ca/ca.crt   # cluster CA certificate
xpack.security.transport.ssl.enabled: true   # enable x-pack TLS
Adjust the JVM settings to match the machine:
[root@es-node01 soft]# cat config/jvm.options
-Xms28g
-Xmx28g
3. Start and verify
[root@es-node01 soft]# bin/elasticsearch
[2018-03-29T11:02:42,296][INFO ][o.e.b.BootstrapChecks ] [es-node01] bound or publishing to a non-loopback or non-link-local address, enforcing bootstrap checks
ERROR: [1] bootstrap checks failed
[1]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
Fix this error as root by raising vm.max_map_count:
[root@es-node01 soft]# sysctl -w vm.max_map_count=20000000 vm.max_map_count = 20000000 |
Start again:
[root@es-node01 soft]# bin/elasticsearch -d
[2018-03-29T11:06:32,320][INFO ][o.e.n.Node ] [es-node01] initializing ...
[2018-03-29T11:06:32,390][INFO ][o.e.e.NodeEnvironment ] [es-node01] using [2] data paths, mounts [[/mnt/data1 (/dev/mapper/VolGroup2-LVdata2), /mnt/data2 (/dev/mapper/VolGroup1-LVdata1)]], net usable_space [999.4gb], net total_space [999.5gb], types [xfs]
[2018-03-29T11:06:32,391][INFO ][o.e.e.NodeEnvironment ] [es-node01] heap size [27.9gb], compressed ordinary object pointers [true]
[2018-03-29T11:06:32,392][INFO ][o.e.n.Node ] [es-node01] node name [es-node01], node ID [6o6WyURiRdeargHwqbGWag]
[2018-03-29T11:06:32,392][INFO ][o.e.n.Node ] [es-node01] version[6.0.0], pid[48441], build[8f0685b/2017-11-10T18:41:22.859Z], OS[Linux/3.10.0-693.17.1.el7.x86_64/amd64], JVM[Oracle Corporation/Java HotSpot(TM) 64-Bit Server VM/1.8.0_131/25.131-b11]
[2018-03-29T11:06:32,392][INFO ][o.e.n.Node ] [es-node01] JVM arguments [-Xms28g, -Xmx28g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -XX:+HeapDumpOnOutOfMemoryError, -Des.path.home=/var/soft/elasticsearch-6.0.0, -Des.path.conf=/var/soft/elasticsearch-6.0.0/config]
[2018-03-29T11:06:33,353][INFO ][o.e.p.PluginsService ] [es-node01] loaded module [aggs-matrix-stats]
[2018-03-29T11:06:33,353][INFO ][o.e.p.PluginsService ] [es-node01] loaded module [analysis-common]
[2018-03-29T11:06:33,354][INFO ][o.e.p.PluginsService ] [es-node01] loaded module [ingest-common]
[2018-03-29T11:06:33,354][INFO ][o.e.p.PluginsService ] [es-node01] loaded module [lang-expression]
[2018-03-29T11:06:33,354][INFO ][o.e.p.PluginsService ] [es-node01] loaded module [lang-mustache]
[2018-03-29T11:06:33,354][INFO ][o.e.p.PluginsService ] [es-node01] loaded module [lang-painless]
[2018-03-29T11:06:33,354][INFO ][o.e.p.PluginsService ] [es-node01] loaded module [parent-join]
[2018-03-29T11:06:33,354][INFO ][o.e.p.PluginsService ] [es-node01] loaded module [percolator]
[2018-03-29T11:06:33,354][INFO ][o.e.p.PluginsService ] [es-node01] loaded module [reindex]
[2018-03-29T11:06:33,354][INFO ][o.e.p.PluginsService ] [es-node01] loaded module [repository-url]
[2018-03-29T11:06:33,354][INFO ][o.e.p.PluginsService ] [es-node01] loaded module [transport-netty4]
[2018-03-29T11:06:33,354][INFO ][o.e.p.PluginsService ] [es-node01] loaded module [tribe]
[2018-03-29T11:06:33,355][INFO ][o.e.p.PluginsService ] [es-node01] loaded plugin [x-pack]
Verify:
[root@kafka-node5 ~]# curl http://192.168.0.1:9200
{
  "name" : "es-node01",
  "cluster_name" : "ph-elk",
  "cluster_uuid" : "x2oARQMlQoqMPTYXT4MeGw",
  "version" : {
    "number" : "6.0.0",
    "build_hash" : "8f0685b",
4. Install X-Pack
[root@es-node01 soft]# wget http://mirrors-ph.paic.com.cn//repo/elk/x-pack-6.0.0.zip
[root@es-node01 soft]# bin/elasticsearch-plugin install file:///var/soft/elasticsearch-6.0.0/x-pack-6.0.0.zip
-> Downloading file:///var/soft/elasticsearch-6.0.0/x-pack-6.0.0.zip
[=================================================] 100%
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@ WARNING: plugin requires additional permissions @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
* java.io.FilePermission \\.\pipe\* read,write
* java.lang.RuntimePermission accessClassInPackage.com.sun.activation.registries
* java.lang.RuntimePermission getClassLoader
* java.lang.RuntimePermission setContextClassLoader
* java.lang.RuntimePermission setFactory
* java.net.SocketPermission * connect,accept,resolve
* java.security.SecurityPermission createPolicy.JavaPolicy
* java.security.SecurityPermission getPolicy
* java.security.SecurityPermission putProviderProperty.BC
* java.security.SecurityPermission setPolicy
* java.util.PropertyPermission * read,write
* java.util.PropertyPermission sun.nio.ch.bugLevel write
See http://docs.oracle.com/javase/8/ ... ty/permissions.html for descriptions of what these permissions allow and the associated risks.
Continue with installation? [y/N]y
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@ WARNING: plugin forks a native controller @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
This plugin launches a native controller that is not subject to the Java security manager nor to system call filters.
Continue with installation? [y/N]y
5. Generate certificates
[elastic@es-node01 x-pack]$ bin/x-pack/certgen
Please enter the desired output file [certificate-bundle.zip]: ph-elk
Enter instance name: es-node01
Enter name for directories and files [es-node01]:
Enter IP Addresses for instance (comma-separated if more than one) []: 192.168.0.1
6. Unpack the certificates
[elastic@es-node01 config]$ mkdir certs
[elastic@es-node01 config]$ cd certs/
[elastic@es-node01 certs]$ unzip ../ph-elk.zip
Archive: ../ph-elk.zip
 creating: ca/
 inflating: ca/ca.crt
 inflating: ca/ca.key
 creating: es-node01/
 inflating: es-node01/es-node01.crt
 inflating: es-node01/es-node01.key
 creating: es-node02/
 inflating: es-node02/es-node02.crt
 inflating: es-node02/es-node02.key
 creating: es-node03/
 inflating: es-node03/es-node03.crt
 inflating: es-node03/es-node03.key
 creating: es-node04/
 inflating: es-node04/es-node04.crt
 inflating: es-node04/es-node04.key
 creating: es-node05/
 inflating: es-node05/es-node05.crt
 inflating: es-node05/es-node05.key
7. Initialize passwords
[root@es-node01 soft]# cd bin/x-pack
[elastic@es-node01 x-pack]$ ./setup-passwords interactive
Initiating the setup of reserved user elastic,kibana,logstash_system passwords.
You will be prompted to enter passwords as the process progresses.
Please confirm that you would like to continue [y/N]y
Enter password for [elastic]:
Reenter password for [elastic]:
Enter password for [kibana]:
Reenter password for [kibana]:
Enter password for [logstash_system]:
Reenter password for [logstash_system]:
Changed password for user [kibana]
Changed password for user [logstash_system]
Changed password for user [elastic]
Restart and verify. Congratulations!
[elastic@es-node01 x-pack]$ curl -u elastic http://127.0.0.1:9200?pretty
Enter host password for user 'elastic':*******
{
  "name" : "es-node01",
  "cluster_name" : "ph-elk",
  "cluster_uuid" : "x2oARQMlQoqMPTYXT4MeGw",
  "version" : {
    "number" : "6.0.0",
    "build_hash" : "8f0685b",
    "build_date" : "2017-11-10T18:41:22.859Z",
    "build_snapshot" : false,
    "lucene_version" : "7.0.1",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}
2.4. Kibana
Kibana bundles its own runtime (Node.js) and needs nothing else installed: just download, extract, and configure.
[root@es-node01 soft]# wget -O /var/soft/kibana-6.0.0-linux-x86_64.tar.gz http://mirrors-ph.paic.com.cn//repo/elk/kibana-6.0.0-linux-x86_64.tar.gz
--2018-03-29 13:13:42-- http://mirrors-ph.paic.com.cn//repo/elk/kibana-6.0.0-linux-x86_64.tar.gz
Connecting to mirrors-ph.paic.com.cn:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 62681301 (60M) [application/octet-stream]
Saving to: '/var/soft/kibana-6.0.0-linux-x86_64.tar.gz'
100%[================================================================================================================================>] 62,681,301 36.1MB/s in 1.7s
2018-03-29 13:13:44 (36.1 MB/s) - '/var/soft/kibana-6.0.0-linux-x86_64.tar.gz' saved [62681301/62681301]
Extract:
[root@es-node01 soft]# cd /var/soft/
[root@es-node01 soft]# tar zxvf kibana-6.0.0-linux-x86_64.tar.gz
Edit the configuration:
[root@es-node01 soft]# cat config/kibana.yml | egrep -v "^#|^$"
server.port: "5601"
server.host: "0.0.0.0"
server.name: "es-node01"
elasticsearch.url: "http://127.0.0.1:9200"
elasticsearch.username: "kibana"
elasticsearch.password: "kibana"
Start and verify:
[root@es-node01 soft]# bin/kibana |
Open the page at http://kibana_server:5601 — and congratulations!
2.5. ZooKeeper
Server list:
Hostname | IP | Spec | OS | Components |
kafka-node1 | 192.168.0.6 | 8 cores/32.0GB | Centos7.2 | Kafka-node1 |
kafka-node2 | 192.168.0.7 | 8 cores/32.0GB | Centos7.2 | Kafka-node2 |
kafka-node3 | 192.168.0.8 | 8 cores/32.0GB | Centos7.2 | Kafka-node3 |
kafka-node4 | 192.168.0.9 | 8 cores/32.0GB | Centos7.2 | Kafka-node4 |
kafka-node5 | 192.168.0.10 | 8 cores/32.0GB | Centos7.2 | Kafka-node5 |
Install the JDK on each node, following the earlier steps:
ansible kafka -m shell -a 'wget -O /var/soft/jdk1.8.0_131.tar.gz http://mirrors-ph.paic.com.cn//repo/elk/jdk1.8.0_131.tar.gz'
ansible kafka -m shell -a 'cd /var/soft ; tar zxf jdk1.8.0_131.tar.gz'
Download the ZooKeeper and Kafka packages:
URL: http://mirrors-ph.paic.com.cn//repo/elk/zookeeper-3.4.6.tar.gz
wget -O /var/soft/zookeeper-3.4.6.tar.gz http://mirrors-ph.paic.com.cn//repo/elk/zookeeper-3.4.6.tar.gz
wget -O /var/soft/kafka_2.11-0.10.2.1.tar.gz http://mirrors-ph.paic.com.cn//repo/elk/kafka_2.11-0.10.2.1.tar.gz
Extract:
ansible kafka -m shell -a 'cd /var/soft; tar xf zookeeper-3.4.6.tar.gz'
ansible kafka -m shell -a 'cd /var/soft; tar zxf kafka_2.11-0.10.2.1.tar.gz'
Edit the configuration:
[root@es-node01 conf]# cat conf/zoo.cfg | egrep -v "^#|^$"
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/mnt/data1/zookeeper/data
dataLogDir=/mnt/data1/zookeeper/logs
clientPort=2181
maxClientCnxns=300
autopurge.snapRetainCount=20
autopurge.purgeInterval=48
server.1=192.168.0.6:2888:3888
server.2=192.168.0.7:2888:3888
server.3=192.168.0.8:2888:3888
server.4=192.168.0.9:2888:3888
server.5=192.168.0.10:2888:3888
Distribute the configuration file with ansible (convenient):
[root@es-node01 conf]# ansible kafka -m copy -a "src=/var/soft/zookeeper-3.4.6/conf/zoo.cfg dest=/var/soft/zookeeper-3.4.6/conf/"
kafka-node3 | SUCCESS => {
    "changed": true,
    "checksum": "3c05364115536c3b645bc7222df883b341d50c83",
    "dest": "/var/soft/zookeeper-3.4.6/conf/zoo.cfg",
    "gid": 0,
    "group": "root",
    "md5sum": "ded548de08fa7afda587162df935cdcd",
    "mode": "0644",
    "owner": "root",
    "size": 1163,
    "src": "/root/.ansible/tmp/ansible-tmp-1522724739.95-275544206529433/source",
    "state": "file",
    "uid": 0
}
Fix ownership:
[root@es-node01 conf]# ansible kafka -m shell -a 'chown kafka:elk /var/soft'
[WARNING]: Consider using file module with owner rather than running chown
kafka-node3 | SUCCESS | rc=0 >>
kafka-node5 | SUCCESS | rc=0 >>
kafka-node4 | SUCCESS | rc=0 >>
kafka-node2 | SUCCESS | rc=0 >>
kafka-node1 | SUCCESS | rc=0 >>
[root@es-node01 conf]# ansible kafka -m shell -a 'chown kafka:elk /mnt/data* -R'
Start and check:
[root@es-node01 conf]# ansible kafka -m shell -u kafka -a '/var/soft/zookeeper-3.4.6/bin/zkServer.sh start'
[root@es-node01 conf]# ansible kafka -m shell -u kafka -a '/var/soft/zookeeper-3.4.6/bin/zkServer.sh status'
If you hit this error:
[kafka@kafka-node1 ~]$ cat zookeeper.out
2018-04-03 11:19:53,751 [myid:] - INFO [main:QuorumPeerConfig@103] - Reading configuration from: /var/soft/zookeeper-3.4.6/bin/../conf/zoo.cfg
2018-04-03 11:19:53,760 [myid:] - INFO [main:QuorumPeerConfig@340] - Defaulting to majority quorums
2018-04-03 11:19:53,763 [myid:] - ERROR [main:QuorumPeerMain@85] - Invalid config, exiting abnormally
org.apache.zookeeper.server.quorum.QuorumPeerConfig$ConfigException: Error processing /var/soft/zookeeper-3.4.6/bin/../conf/zoo.cfg
    at org.apache.zookeeper.server.quorum.QuorumPeerConfig.parse(QuorumPeerConfig.java:123)
    at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:101)
    at org.apache.zookeeper.server.quorum.QuorumPeerMain.main(QuorumPeerMain.java:78)
Caused by: java.lang.IllegalArgumentException: /mnt/data1/zookeeper/data/myid file is missing
    at org.apache.zookeeper.server.quorum.QuorumPeerConfig.parseProperties(QuorumPeerConfig.java:350)
    at org.apache.zookeeper.server.quorum.QuorumPeerConfig.parse(QuorumPeerConfig.java:119)
    ... 2 more
Write the corresponding ID into the myid file on each node:
ServerName | IP | MYID |
kafka-node1 | 192.168.0.6 | 1 |
kafka-node2 | 192.168.0.7 | 2 |
kafka-node3 | 192.168.0.8 | 3 |
kafka-node4 | 192.168.0.9 | 4 |
kafka-node5 | 192.168.0.10 | 5 |
Set myid:
ansible kafka-node1 -m shell -u kafka -a 'echo 1 >/mnt/data1/zookeeper/data/myid'
ansible kafka-node2 -m shell -u kafka -a 'echo 2 >/mnt/data1/zookeeper/data/myid'
ansible kafka-node3 -m shell -u kafka -a 'echo 3 >/mnt/data1/zookeeper/data/myid'
ansible kafka-node4 -m shell -u kafka -a 'echo 4 >/mnt/data1/zookeeper/data/myid'
ansible kafka-node5 -m shell -u kafka -a 'echo 5 >/mnt/data1/zookeeper/data/myid'
Restart and verify:
One node should be the leader and all the others followers.
[root@es-node01 ~]# ansible kafka -m shell -u kafka -a '/var/soft/zookeeper-3.4.6/bin/zkServer.sh restart'
[root@es-node01 ~]# ansible kafka -m shell -u kafka -a '/var/soft/zookeeper-3.4.6/bin/zkServer.sh status'
kafka-node5 | SUCCESS | rc=0 >>
JMX enabled by default
Using config: /var/soft/zookeeper-3.4.6/bin/../conf/zoo.cfg
Mode: leader
kafka-node4 | SUCCESS | rc=0 >>
JMX enabled by default
Using config: /var/soft/zookeeper-3.4.6/bin/../conf/zoo.cfg
Mode: follower
kafka-node3 | SUCCESS | rc=0 >>
JMX enabled by default
Using config: /var/soft/zookeeper-3.4.6/bin/../conf/zoo.cfg
Mode: follower
kafka-node2 | SUCCESS | rc=0 >>
JMX enabled by default
Using config: /var/soft/zookeeper-3.4.6/bin/../conf/zoo.cfg
Mode: follower
kafka-node1 | SUCCESS | rc=0 >>
JMX enabled by default
Using config: /var/soft/zookeeper-3.4.6/bin/../conf/zoo.cfg
Mode: follower
2.6. Kafka
The same five machines as ZK.
Download:
wget -O /var/soft/kafka_2.11-0.10.2.1.tar.gz http://mirrors-ph.paic.com.cn//repo/elk/kafka_2.11-0.10.2.1.tar.gz
Extract and edit the configuration file.
The marked settings below differ on each host.
[root@es-node01 config]# cat server.properties | egrep -v "^$|^#"
broker.id=1
delete.topic.enable=true
host.name=kafka-node1
listeners=PLAINTEXT://192.168.0.6:9092
advertised.listeners=PLAINTEXT://192.168.0.6:9092
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/mnt/data1/kafka/data,/mnt/data2/kafka/data,/mnt/data3/kafka/data,/mnt/data4/kafka/data
num.partitions=3
num.recovery.threads.per.data.dir=1
log.flush.interval.messages=10000
log.flush.interval.ms=1000
log.retention.hours=72
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
min.insync.replicas=2
num.replica.fetchers=2
zookeeper.connect=192.168.0.6:2181,192.168.0.7:2181,192.168.0.8:2181,192.168.0.9:2181,192.168.0.10:2181
zookeeper.connection.timeout.ms=6000
Adjust the JVM parameters and enable JMX monitoring:
[root@es-node01 config]# vim bin/kafka-server-start.sh
if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then
    export KAFKA_HEAP_OPTS="-Xmx24G -Xms24G"
    export JMX_PORT="9999"
Sync the configuration to the other machines, remembering to adjust the per-host settings marked above:
[root@kafka-node1 bin]# ansible kafka -m copy -a "src=/var/soft/kafka_2.11-0.10.2.1/config/server.properties dest=/var/soft/kafka_2.11-0.10.2.1/config/ owner=kafka group=elk"
[root@kafka-node1 bin]# ansible kafka -m copy -a "src=/var/soft/kafka_2.11-0.10.2.1/bin/kafka-server-start.sh dest=/var/soft/kafka_2.11-0.10.2.1/bin/ owner=kafka group=elk"
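Only broker.id, host.name, and the listener addresses differ between the five brokers, so instead of hand-editing each copy after the sync, the per-host files can be stamped out from one template. A sketch (the template keys and output file names are invented for illustration; the real server.properties carries the shared settings too):

```shell
# Minimal template holding only the per-host keys.
cat > server.properties.tmpl <<'EOF'
broker.id=__ID__
host.name=kafka-node__ID__
listeners=PLAINTEXT://__IP__:9092
advertised.listeners=PLAINTEXT://__IP__:9092
EOF

# Stamp one file per broker; the IPs match the server list above.
id=1
for ip in 192.168.0.6 192.168.0.7 192.168.0.8 192.168.0.9 192.168.0.10; do
  sed -e "s/__ID__/$id/g" -e "s/__IP__/$ip/g" \
      server.properties.tmpl > "server.properties.kafka-node$id"
  id=$((id + 1))
done

grep '^broker.id' server.properties.kafka-node3
# → broker.id=3
```

Each generated file can then be pushed to its host with `ansible -m copy`, one file per node.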
Start and check:
[root@es-node01 config]# ansible kafka -m shell -u kafka -a "/var/soft/kafka_2.11-0.10.2.1/bin/kafka-server-start.sh -daemon /var/soft/kafka_2.11-0.10.2.1/config/server.properties"
kafka-node3 | SUCCESS | rc=0 >>
kafka-node4 | SUCCESS | rc=0 >>
kafka-node5 | SUCCESS | rc=0 >>
kafka-node2 | SUCCESS | rc=0 >>
kafka-node1 | SUCCESS | rc=0 >>
Log in to one of the Kafka servers and test that Kafka works.
Create a topic:
[kafka@kafka-node1 var]$ kafka-topics.sh --create --zookeeper 192.168.0.6:2181,192.168.0.7:2181,192.168.0.8:2181,192.168.0.9:2181,192.168.0.10:2181 --topic ELK.TEST --replication-factor 2 --partitions 3
List topics:
[kafka@kafka-node1 var]$ kafka-topics.sh --zookeeper 192.168.0.6:2181,192.168.0.7:2181,192.168.0.8:2181,192.168.0.9:2181,192.168.0.10:2181 --list
ELK.TEST
Create a producer:
[kafka@kafka-node1 var]$ kafka-console-producer.sh --broker-list 192.168.0.6:9092,192.168.0.7:9092,192.168.0.8:9092,192.168.0.9:9092,192.168.0.10:9092 --topic ELK.TEST
Create a consumer:
Start a consumer on another machine, type messages into the producer, and check that they come through.
[kafka@kafka-node5 ~]$ kafka-console-consumer.sh --zookeeper 192.168.0.6:2181,192.168.0.7:2181,192.168.0.8:2181,192.168.0.9:2181,192.168.0.10:2181 --topic ELK.TEST
Using the ConsoleConsumer with old consumer is deprecated and will be removed in a future major release. Consider using the new consumer by passing [bootstrap-server] instead of [zookeeper].
aaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaa
aaaaaaaaaa
Delete the topic:
kafka-topics.sh --delete --zookeeper 192.168.0.6:2181,192.168.0.7:2181,192.168.0.8:2181,192.168.0.9:2181,192.168.0.10:2181 --topic ELK.TEST |
Read the topic's contents:
kafka-console-consumer.sh --bootstrap-server 192.168.0.6:9092,192.168.0.7:9092,192.168.0.8:9092,192.168.0.9:9092,192.168.0.10:9092 --topic ELK.TEST --from-beginning --max-messages 10000 |
Check consumer offsets:
kafka-run-class.sh kafka.tools.ConsumerOffsetChecker --group PH-SAFE --topic ELK.TEST --zookeeper 192.168.0.6:2181,192.168.0.7:2181,192.168.0.8:2181,192.168.0.9:2181,192.168.0.10:2181 |
If all of the above works without problems, the Kafka and ZK setup is done. Congratulations!
2.7. Logstash
The Logstash instances between Kafka and ES run in Docker; the image is already built. Just edit the configuration and run it.
Host list:
SZC-L0093638 | 30.18.32.231 | 8 cores/32.0GB | LINUX | Logstash-1 |
SZC-L0093637 | 30.18.32.230 | 8 cores/32.0GB | LINUX | Logstash-2 |
Run command:
docker run -d -it \
  -h ELK-TEST \
  --name ELK-TEST \
  --restart=on-failure:2 \
  --net=host \
  -v /etc/ http://mirrors-ph.paic.com.cn/:5 ... pack_6.0.0:20171212 bash
bash can be replaced with the actual start command:
logstash -w 1 -f /usr/local/logstash-6.0.0/config/ph-safe.conf
Reference configurations: http://mirrors-ph.paic.com.cn//repo/elk/conf/
3. Docker Environment Setup
Cloud host | Internal IP | Spec | OS | User | Components |
SZC-L0080585 | 30.18.32.54 | 8 cores/32.0GB | LINUX7.2 | root | es-elk-1,zk-elk-1,kafka-elk-1,kabana-elk-1 |
SZC-L0080587 | 30.18.32.55 | 8 cores/32.0GB | LINUX7.2 | root | es-elk-2,zk-elk-2,kafka-elk-2,kabana-elk-2 |
SZC-L0080583 | 30.18.32.56 | 8 cores/32.0GB | LINUX7.2 | root | es-elk-3,zk-elk-3,kafka-elk-3 |
SZC-L0080586 | 30.18.32.58 | 8 cores/32.0GB | LINUX7.2 | root | es-elk-4,zk-elk-4,kafka-elk-4 |
SZC-L0080584 | 30.18.32.59 | 8 cores/32.0GB | LINUX7.2 | root | es-elk-5,zk-elk-5,kafka-elk-5 |
4. Use Cases
4.1. Collecting application logs with Logstash
To be updated.
4.2. Pulling Oracle data with Logstash: employee behavior analysis
4.3. Capturing page visits with Nginx + Lua
Appendix I Configuration list
Appendix II Cracking X-Pack
This X-Pack cracking procedure is tested and works; just follow the steps.
1. Download the x-pack-6.0.0.zip package.
2. Create a directory and unpack the archive.
elasticsearch]# mkdir /x-pack
elasticsearch]# cd /x-pack
elasticsearch]# unzip ../x-pack-6.0.0.zip
elasticsearch]# cd elasticsearch
elasticsearch]# file x-pack-6.0.0.jar
x-pack-6.0.0.jar: Zip archive data, at least v1.0 to extract
elasticsearch]# mkdir x-pack-6.0.0
elasticsearch]# cd x-pack-6.0.0
x-pack-6.0.0]# unzip ../x-pack-6.0.0.jar
3. Find the license-verification class, LicenseVerifier.class:
x-pack-6.0.0]# find . -name LicenseVerifier.class
./org/elasticsearch/license/LicenseVerifier.class
4. Create LicenseVerifier.java in a working directory:
# mkdir /tmp/test
# cd /tmp/test/
# vim LicenseVerifier.java   # someone has already decompiled it; reuse as-is
package org.elasticsearch.license;

public class LicenseVerifier {
    public static boolean verifyLicense(final License license, final byte[] encryptedPublicKeyData) {
        return true;
    }

    public static boolean verifyLicense(final License license) {
        return true;
    }
}
5. Compile. My installation path is /usr/local/elasticsearch-6.0.0:
javac -cp "/usr/local/elasticsearch-6.0.0/lib/elasticsearch-6.0.0.jar:/usr/local/elasticsearch-6.0.0/lib/lucene-core-7.0.1.jar:/usr/local/elasticsearch-6.0.0/plugins/x-pack/x-pack-6.0.0.jar" LicenseVerifier.java |
6. Remove the old class and drop in the replacement:
# rm /usr/local/elasticsearch-6.0.0/plugins/x-pack/x-pack-6.0.0.jar
# rm /x-pack/elasticsearch/x-pack-6.0.0.jar
# rm /x-pack/elasticsearch/x-pack-6.0.0/org/elasticsearch/license/LicenseVerifier.class
# cp /tmp/test/LicenseVerifier.class /x-pack/elasticsearch/x-pack-6.0.0/org/elasticsearch/license/
7. Repackage the jar:
# cd /x-pack/elasticsearch/x-pack-6.0.0/
# jar -cvf x-pack-6.0.0.jar ./*
8. Copy it back and confirm (check that the modification time looks right):
# cp x-pack-6.0.0.jar /usr/local/elasticsearch-6.0.0/plugins/x-pack/x-pack-6.0.0.jar
# ls -al /usr/local/elasticsearch-6.0.0/plugins/x-pack/x-pack-6.0.0.jar
9. Check the current _license:
$ curl -u elastic:******* http://127.0.0.1:9200/_xpack/license
{
  "license" : {
    "status" : "active",
    "uid" : "b7d9fe72-926a-453d-bba4-1932b7c2d6a8",
    "type" : "trial",
    "issue_date" : "2018-03-29T03:06:40.711Z",
    "issue_date_in_millis" : 1522292800711,
    "expiry_date" : "2018-04-28T03:06:40.711Z",
    "expiry_date_in_millis" : 1524884800711,
    "max_nodes" : 1000,
    "issued_to" : "ph-elk",
    "issuer" : "elasticsearch",
    "start_date_in_millis" : -1
  }
}
10. Restart Elasticsearch and Kibana.
11. Request a license from the official site:
https://license.elastic.co/registration
{"license":{"uid":"c127f207-c8f6-4d71-8b89-21f350f7d284","type":"platinum","issue_date_in_millis":1514160000000,"expiry_date_in_millis":2524579200999,"max_nodes":100,"issued_to":"Maoshu Ran (Pingan)","issuer":"Web Form","signature":"AAAAAwAAAA3nC1a1H/RvS9soHXxIAAABmC9ZN0hjZDBGYnVyRXpCOW5Bb3FjZDAxOWpSbTVoMVZwUzRxVk1PSmkxaktJRVl5MUYvUWh3bHZVUTllbXNPbzBUemtnbWpBbmlWRmRZb25KNFlBR2x0TXc2K2p1Y1VtMG1UQU9TRGZVSGRwaEJGUjE3bXd3LzRqZ05iLzRteWFNekdxRGpIYlFwYkJiNUs0U1hTVlJKNVlXekMrSlVUdFIvV0FNeWdOYnlESDc3MWhlY3hSQmdKSjJ2ZTcvYlBFOHhPQlV3ZHdDQ0tHcG5uOElCaDJ4K1hob29xSG85N0kvTWV3THhlQk9NL01VMFRjNDZpZEVXeUtUMXIyMlIveFpJUkk2WUdveEZaME9XWitGUi9WNTZVQW1FMG1DenhZU0ZmeXlZakVEMjZFT2NvOWxpZGlqVmlHNC8rWVVUYzMwRGVySHpIdURzKzFiRDl4TmM1TUp2VTBOUlJZUlAyV0ZVL2kvVk10L0NsbXNFYVZwT3NSU082dFNNa2prQ0ZsclZ4NTltbU1CVE5lR09Bck93V2J1Y3c9PQAAAQAtob2KBeFwY2nMY7RkxDKEoskFqTtTvvVCPCJAFDsRsz+OdlLfbnAQF2hj32nGTZ/HTDbCa6GXIEkKce6rxMC92JtZ37Fh96uenccS+OdbnHeoDnnLcRmCR7k031hVgcGyKHHv5W1+VhSw54IY8vPpuaz2e7Ggul/9V6RwzxNXeWEdIAKabTUp2Gg48UZ+WKUKM2FuoWHRdszMFxu0W+oU2aJCnHkX87AjL3ed94sqZBW0GdiU1dMJI3HmMoWdYy3gaPkq/xI73GVM0A/kE0p+Q+cmB9PSANIV/YS47ygD2VjmXOptjkaWmvbAopNCqxE4yB4TdlcaH7G/doPHc+zi","start_date_in_millis":1514160000000}} |
12. Edit the license fields:
"type":"platinum"
"expiry_date_in_millis":2524579200999
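The two field edits can be scripted with sed rather than done by hand in an editor. A sketch; the license body below is a shortened placeholder, not the real signed license (keep the "signature" field from the downloaded file intact):

```shell
# Shortened stand-in for the downloaded license.json (the real one includes "signature").
cat > license.json <<'EOF'
{"license":{"uid":"c127f207-c8f6-4d71-8b89-21f350f7d284","type":"trial","expiry_date_in_millis":1524884800711,"max_nodes":100}}
EOF

# Upgrade the type and push the expiry out to 2049.
sed -i -e 's/"type":"[a-z]*"/"type":"platinum"/' \
       -e 's/"expiry_date_in_millis":[0-9]*/"expiry_date_in_millis":2524579200999/' \
       license.json

grep -o '"type":"platinum"' license.json
# → "type":"platinum"
```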
13. Upload the license:
#curl -XPUT -u elastic:changeme 'http://127.0.0.1:9200/_xpack/license?acknowledge=true' -H "Content-Type: application/json" -d @license.json {"acknowledged":true,"license_status":"invalid"} |
14. Confirm that everything is OK:
# curl -XGET -u elastic:****** http://127.0.0.1:9200/_xpack/license
{
  "license": {
    "status": "active",
    "uid": "c127f207-c8f6-4d71-8b89-21f350f7d284",
    "type": "platinum",
    "issue_date": "2017-12-25T00:00:00.000Z",
    "issue_date_in_millis": 1514160000000,
    "expiry_date": "2049-12-31T16:00:00.999Z",
    "expiry_date_in_millis": 2524579200999,
    "max_nodes": 100,
    "issued_to": "Maoshu Ran (Pingan)",
    "issuer": "Web Form",
    "start_date_in_millis": 1514160000000
  }
}
Note: steps 1–9 have already been done; the resulting jar is at:
http://mirrors-ph.paic.com.cn//repo/elk/conf/x-pack-6.0.0.jar
Copy it into Elasticsearch, restart the service, and update the license.