Notes: entries in the .yml files must start at the left margin (no leading indentation), and the space after each colon must be kept; everything will be installed under the /home directory.
| Node | NN | DN | ZK | ZKFC | JN | RM | NM (NodeManager) | Elasticsearch | Kibana |
|------|----|----|----|------|----|----|------------------|---------------|--------|
| Hadoop1 | Y |   | Y | Y |   | Y |   | Y | Y |
| Hadoop2 | Y | Y | Y | Y | Y | Y | Y | Y |   |
| Hadoop3 |   | Y | Y |   | Y |   | Y | Y |   |
| Hadoop4 |   | Y |   |   | Y |   | Y | Y |   |
1. Extract
tar -zvxf elasticsearch-2.2.0.tar.gz
mv elasticsearch-2.2.0 /home/
cd /home/elasticsearch-2.2.0
2. Configure
[root@hadoop1 elasticsearch-2.2.0]# vi config/elasticsearch.yml
cluster.name: chenkl
node.name: hadoop1
network.host: 192.168.25.151
# settings to guard against split-brain
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping_timeout: 120s
client.transport.ping_timeout: 60s
discovery.zen.ping.unicast.hosts: ["192.168.25.151","192.168.25.152","192.168.25.153","192.168.25.154"]
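The settings above disable multicast discovery and raise the ping timeouts, but the canonical split-brain guard in ES 2.x is discovery.zen.minimum_master_nodes, which this config omits. A hedged addition for this cluster, assuming all four nodes are master-eligible (quorum = 4/2 + 1 = 3):

```yaml
# Assumed addition, not in the original config: require a quorum of
# master-eligible nodes before a master can be elected (4 nodes -> 4/2 + 1 = 3)
discovery.zen.minimum_master_nodes: 3
```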
[root@hadoop1 home]# scp -r elasticsearch-2.2.0/ root@hadoop2:/home/
[root@hadoop1 home]# scp -r elasticsearch-2.2.0/ root@hadoop3:/home/
[root@hadoop1 home]# scp -r elasticsearch-2.2.0/ root@hadoop4:/home/
On hadoop2, hadoop3, and hadoop4, update the node-specific settings to match each host, e.g. on hadoop2:
node.name: hadoop2
network.host: 192.168.25.152
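The two per-node edits can be scripted. A minimal sketch, assuming stock sed and that elasticsearch.yml already contains both keys (the helper name update_es_config is mine, not from the original notes):

```shell
# Hypothetical helper: rewrite the two node-specific keys in elasticsearch.yml.
# Usage: update_es_config /home/elasticsearch-2.2.0/config/elasticsearch.yml hadoop2 192.168.25.152
update_es_config() {
  local conf="$1" name="$2" ip="$3"
  sed -i "s/^node\.name:.*/node.name: ${name}/" "$conf"
  sed -i "s/^network\.host:.*/network.host: ${ip}/" "$conf"
}
```

Run it once per node after the scp, with that node's hostname and IP.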
3. Create the user on all nodes and start the service
~]# adduser bigdata
~]# su bigdata
[bigdata@hadoop1 root]$ exit
[root@hadoop1 elasticsearch-2.2.0]# chown -R bigdata:bigdata ../
[root@hadoop1 elasticsearch-2.2.0]# su bigdata
[bigdata@hadoop1 elasticsearch-2.2.0]$ cd /home/elasticsearch-2.2.0
[bigdata@hadoop1 elasticsearch-2.2.0]$ bin/elasticsearch
Press Ctrl+C to stop the service started above.
At this point Elasticsearch is installed and running.
192.168.25.151:9200
192.168.25.151:9200/_cluster/health
Check the cluster status: a status of yellow or green means the cluster is healthy.
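Checking health by eye works, but it can also be scripted: the status field is easy to pull out of the _cluster/health JSON with sed. A sketch (the helper name es_status is an assumption; it reads the response on stdin):

```shell
# Extract the "status" value from an ES _cluster/health JSON response on stdin.
es_status() {
  sed -n 's/.*"status":"\([a-z]*\)".*/\1/p'
}
# Intended use against the cluster:
#   curl -s 'http://192.168.25.151:9200/_cluster/health' | es_status
```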
4. Install Kibana
kibana-4.4.1-linux-x64.tar.gz
On hadoop1, extract the archive, then edit config/kibana.yml and set the elasticsearch.url property.
Note: in every .yml config file, a space is required after each colon.
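A minimal config/kibana.yml sketch for this setup; the value assumes Kibana on hadoop1 should point at the ES node on 192.168.25.151:

```yaml
# config/kibana.yml (Kibana 4.x); note the space after the colon
elasticsearch.url: "http://192.168.25.151:9200"
```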
Install the Elasticsearch plugins, on all nodes:
[root@hadoop1 home]# cd elasticsearch-2.2.0/
[root@hadoop1 elasticsearch-2.2.0]# bin/plugin install license
[root@hadoop1 elasticsearch-2.2.0]# bin/plugin install marvel-agent
On the node where Kibana is installed:
[root@hadoop1 kibana-4.4.1-linux-x64]# bin/kibana plugin --install elasticsearch/marvel/latest
marvel-2.2.1 pairs cleanly with kibana-4.4.1, so install the plugin at a pinned version:
[root@hadoop1 kibana-4.4.1-linux-x64]# bin/kibana plugin --install elasticsearch/marvel/2.2.1
To uninstall or remove the plugin:
[root@hadoop1 kibana-4.4.1-linux-x64]# bin/kibana plugin -r marvel
Next, start Elasticsearch and Kibana.
[bigdata@hadoop1 elasticsearch-2.2.0]$ bin/elasticsearch
Open http://192.168.25.151:9200 in a browser to verify.
[root@hadoop1 kibana-4.4.1-linux-x64]# bin/kibana
When the log reports "index ready", startup has succeeded.
Open http://192.168.25.151:5601 in a browser.
5. Integrate the Chinese (IK) analyzer (pre-built), on all nodes
For the version mapping between the IK analyzer and ES, and for how to build the IK analyzer from source for use in ES, see:
https://github.com/medcl/elasticsearch-analysis-ik/tree/v6.2.4
[bigdata@hadoop1 elasticsearch-2.2.0]$ mkdir plugins/ik
[bigdata@hadoop1 ik]$ unzip elasticsearch-analysis-ik-1.8.0.zip
[bigdata@hadoop1 ik]$ ll
The directory should contain these files:
commons-codec-1.9.jar
commons-logging-1.2.jar
config
elasticsearch-analysis-ik-1.8.0.jar
httpclient-4.4.1.jar
httpcore-4.4.1.jar
plugin-descriptor.properties
Copy the plugin directory to the other nodes:
[bigdata@hadoop1 ik]$ scp -r ./ root@hadoop2:/home/elasticsearch-2.2.0/plugins/ik/
[bigdata@hadoop1 ik]$ scp -r ./ root@hadoop3:/home/elasticsearch-2.2.0/plugins/ik/
[bigdata@hadoop1 ik]$ scp -r ./ root@hadoop4:/home/elasticsearch-2.2.0/plugins/ik/
Set up the search fields with the IK analyzer and create the index:
cd /home/
vi dkjhl.json
{
  "settings": {
    "number_of_shards": 5,
    "number_of_replicas": 0
  },
  "mappings": {
    "doc": {
      "dynamic": "strict",
      "properties": {
        "id": {"type": "integer", "store": "yes"},
        "title": {"type": "string", "store": "yes", "index": "analyzed", "analyzer": "ik_max_word", "search_analyzer": "ik_max_word"},
        "describe": {"type": "string", "store": "yes", "index": "analyzed", "analyzer": "ik_max_word", "search_analyzer": "ik_max_word"},
        "author": {"type": "string", "store": "yes", "index": "no"}
      }
    }
  }
}
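Before relying on the mapping, it is worth confirming that the IK plugin loads and actually tokenizes Chinese text via the ES 2.x _analyze API. The token values can be pulled out of the response with a small helper (extract_tokens is my name, not from the original notes):

```shell
# Pull the token values out of an ES _analyze JSON response on stdin.
extract_tokens() {
  grep -o '"token":"[^"]*"' | cut -d'"' -f4
}
# Intended use after the plugin is installed on all nodes:
#   curl -s 'http://hadoop1:9200/_analyze?analyzer=ik_max_word' -d '中华人民共和国' | extract_tokens
```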
To delete the index:
[root@hadoop1 home]# curl -XDELETE 'hadoop1:9200/dkjhl'
To create the index:
[root@hadoop1 home]# curl -XPOST 'hadoop1:9200/dkjhl' -d @dkjhl.json
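With the index in place, a document matching the mapping can be indexed and then searched through the analyzed title field. A hedged sketch; the build_doc helper and the sample field values are mine, not from the original notes:

```shell
# Build a JSON document matching the dkjhl mapping above (illustrative helper).
build_doc() {
  printf '{"id":%d,"title":"%s","describe":"%s","author":"%s"}' "$1" "$2" "$3" "$4"
}
# Intended use against the cluster (ES 2.x endpoints):
#   build_doc 1 'Elasticsearch 全文检索' 'ik_max_word 分词测试' 'test' \
#     | curl -s -XPOST 'http://hadoop1:9200/dkjhl/doc/1' -d @-
#   curl -s 'http://hadoop1:9200/dkjhl/doc/_search?q=title:检索&pretty'
```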
To start Elasticsearch next time:
On all nodes:
cd /home/elasticsearch-2.2.0/
su bigdata
Start in the background:
$ bin/elasticsearch -d
On the hadoop1 node:
cd /home/kibana-4.4.1-linux-x64/
Start in the background:
bin/kibana &
After a short wait, once the status information has loaded, type "exit" and press Enter.
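The restart routine above can be collected into one helper run from hadoop1. A dry-run sketch that only prints the per-node commands; it assumes passwordless ssh for the bigdata user to every node, and you would drop the echo to actually execute them:

```shell
# Print the restart commands for the whole cluster; drop "echo" to run via ssh.
print_start_cmds() {
  for host in hadoop1 hadoop2 hadoop3 hadoop4; do
    echo "ssh bigdata@${host} '/home/elasticsearch-2.2.0/bin/elasticsearch -d'"
  done
  echo "nohup /home/kibana-4.4.1-linux-x64/bin/kibana > kibana.log 2>&1 &"
}
print_start_cmds
```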