Notes: entries in .yml config files must start at the left margin (no leading indentation), and the space after each colon must be kept; all files in this guide live under the /home directory.
|         | NN | DN | ZK | ZKFC | JN | RM | NM (NodeManager) | Elasticsearch | Kibana |
| Hadoop1 | Y  |    | Y  | Y    |    | Y  |                  | Y             | Y      |
| Hadoop2 | Y  | Y  | Y  | Y    | Y  | Y  | Y                | Y             |        |
| Hadoop3 |    | Y  | Y  |      | Y  |    | Y                | Y             |        |
| Hadoop4 |    | Y  |    |      | Y  |    | Y                | Y             |        |
1. Extract the archive
tar -zvxf elasticsearch-2.2.0.tar.gz
mv elasticsearch-2.2.0 /home/
cd /home/elasticsearch-2.2.0
2. Configure
[root@hadoop1 elasticsearch-2.2.0]# vi config/elasticsearch.yml
cluster.name: chenkl
node.name: hadoop1
network.host: 192.168.25.151
# split-brain prevention settings
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping_timeout: 120s
client.transport.ping_timeout: 60s
discovery.zen.ping.unicast.hosts: ["192.168.25.151","192.168.25.152","192.168.25.153","192.168.25.154"]
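For split-brain protection, Elasticsearch 2.x also provides `discovery.zen.minimum_master_nodes`; with the four master-eligible nodes above, the usual quorum value would be 3. A suggested addition (not in the original config, so treat it as an assumption about this cluster):

```yaml
# assumed addition: quorum of master-eligible nodes = (4 / 2) + 1 = 3
discovery.zen.minimum_master_nodes: 3
```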
[root@hadoop1 home]# scp -r elasticsearch-2.2.0/ root@hadoop2:/home/
[root@hadoop1 home]# scp -r elasticsearch-2.2.0/ root@hadoop3:/home/
[root@hadoop1 home]# scp -r elasticsearch-2.2.0/ root@hadoop4:/home/
On each of the other nodes, change the node-specific settings to match that host; e.g. on hadoop2:
node.name: hadoop2
network.host: 192.168.25.152
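The per-node edits can be scripted instead of done by hand. A minimal sketch that derives each node's config from hadoop1's (the /tmp paths are illustrative; the IPs follow the .151-.154 scheme above):

```shell
# Sample hadoop1 config (same node-specific keys as config/elasticsearch.yml above)
cat > /tmp/elasticsearch-hadoop1.yml <<'EOF'
cluster.name: chenkl
node.name: hadoop1
network.host: 192.168.25.151
EOF

# Derive hadoop2..hadoop4 configs by rewriting the node-specific keys
for i in 2 3 4; do
  sed -e "s/^node.name: hadoop1$/node.name: hadoop$i/" \
      -e "s/^network.host: .*/network.host: 192.168.25.15$i/" \
      /tmp/elasticsearch-hadoop1.yml > /tmp/elasticsearch-hadoop$i.yml
done

grep '^node.name' /tmp/elasticsearch-hadoop3.yml
```

In practice you would scp the rewritten file onto each node instead of writing it under /tmp.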
3. Create the user on all nodes, then start Elasticsearch
~]# adduser bigdata
~]# su bigdata
[bigdata@hadoop1 root]$ exit
[root@hadoop1 elasticsearch-2.2.0]# chown -R bigdata:bigdata ../
[root@hadoop1 elasticsearch-2.2.0]# su bigdata
[bigdata@hadoop1 elasticsearch-2.2.0]$ cd /home/elasticsearch-2.2.0
[bigdata@hadoop1 elasticsearch-2.2.0]$ bin/elasticsearch
Press Ctrl+C to stop the foreground service started above.
Elasticsearch is now installed and running.
192.168.25.151:9200
192.168.25.151:9200/_cluster/health
Check the cluster state; a status of yellow or green means the cluster is healthy.
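The health check can be scripted. A sketch that extracts the status field from the JSON response; the `resp` value below is a sample of the shape /_cluster/health returns (values illustrative), and in practice you would pipe `curl -s 'http://192.168.25.151:9200/_cluster/health'` into the same extraction:

```shell
# Sample /_cluster/health response (values illustrative)
resp='{"cluster_name":"chenkl","status":"green","timed_out":false,"number_of_nodes":4}'

# Pull out the status field; yellow or green means the cluster is usable
status=$(printf '%s' "$resp" | grep -o '"status":"[a-z]*"' | cut -d'"' -f4)
echo "$status"
```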
4. Install Kibana
kibana-4.4.1-linux-x64.tar.gz
On hadoop1, extract the archive and set the elasticsearch.url property in config/kibana.yml (vi config/kibana.yml); that is the only required change.
Note again: in every .yml config file, each colon must be followed by a space.
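The colon-space rule above can be checked mechanically before starting a service. A minimal sketch (the sample file and its deliberately bad line are illustrative):

```shell
# Sample config with one bad entry (no space after the colon)
cat > /tmp/sample.yml <<'EOF'
cluster.name: chenkl
node.name:hadoop1
network.host: 192.168.25.151
EOF

# Print lines where a top-level "key:" is not followed by a space
grep -nE '^[A-Za-z0-9_.]+:[^ ]' /tmp/sample.yml
```

Here grep flags line 2, the entry YAML would reject.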
Install the Elasticsearch plugins, on all nodes
[root@hadoop1 home]# cd elasticsearch-2.2.0/
[root@hadoop1 elasticsearch-2.2.0]# bin/plugin install license
[root@hadoop1 elasticsearch-2.2.0]# bin/plugin install marvel-agent
On the node that runs Kibana, install:
[root@hadoop1 kibana-4.4.1-linux-x64]# bin/kibana plugin --install elasticsearch/marvel/latest
marvel-2.2.1 works well with kibana-4.4.1, so install the plugin pinned to that version:
[root@hadoop1 kibana-4.4.1-linux-x64]# bin/kibana plugin --install elasticsearch/marvel/2.2.1
To uninstall or remove a plugin:
[root@hadoop1 kibana-4.4.1-linux-x64]# bin/kibana plugin -r marvel
Next, start Elasticsearch and Kibana
[bigdata@hadoop1 elasticsearch-2.2.0]$ bin/elasticsearch
Open http://192.168.25.151:9200 in a browser
[root@hadoop1 kibana-4.4.1-linux-x64]# bin/kibana
Once the log prints "index ready", startup succeeded
Open http://192.168.25.151:5601 in a browser
5. Integrate the (pre-built) Chinese IK analyzer, on all nodes
The IK analyzer / Elasticsearch version compatibility table, and instructions for building IK yourself, are documented here:
https://github.com/medcl/elasticsearch-analysis-ik/tree/v6.2.4
[bigdata@hadoop1 elasticsearch-2.2.0]$ mkdir plugins/ik
[bigdata@hadoop1 elasticsearch-2.2.0]$ cd plugins/ik
[bigdata@hadoop1 ik]$ unzip elasticsearch-analysis-ik-1.8.0.zip
[bigdata@hadoop1 ik]$ ll
The directory should contain these files:
commons-codec-1.9.jar
commons-logging-1.2.jar
config
elasticsearch-analysis-ik-1.8.0.jar
httpclient-4.4.1.jar
httpcore-4.4.1.jar
plugin-descriptor.properties
Copy it to the other nodes
[bigdata@hadoop1 ik]$ scp -r ./ root@hadoop2:/home/elasticsearch-2.2.0/plugins/ik/
[bigdata@hadoop1 ik]$ scp -r ./ root@hadoop3:/home/elasticsearch-2.2.0/plugins/ik/
[bigdata@hadoop1 ik]$ scp -r ./ root@hadoop4:/home/elasticsearch-2.2.0/plugins/ik/
Create an index that uses the IK analyzer on the search fields. Write the mapping file:
cd /home/
vi dkjhl.json
{
"settings":{
"number_of_shards":5,
"number_of_replicas":0
},
"mappings":{
"doc":{
"dynamic":"strict",
"properties":{
"id":{"type":"integer","store":"yes"},
"title":{"type":"string","store":"yes","index":"analyzed","analyzer": "ik_max_word","search_analyzer": "ik_max_word"},
"describe":{"type":"string","store":"yes","index":"analyzed","analyzer": "ik_max_word","search_analyzer": "ik_max_word"},
"author":{"type":"string","store":"yes","index":"no"}
}
}
}
}
Delete the index
[root@hadoop1 home]# curl -XDELETE 'hadoop1:9200/dkjhl'
Create the index
[root@hadoop1 home]# curl -XPOST 'hadoop1:9200/dkjhl' -d @dkjhl.json
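Once the index exists, a match query against one of the ik-analyzed fields looks like this; the body is validated locally below, and would be sent with `curl -XPOST 'hadoop1:9200/dkjhl/doc/_search' -d "$query"` (the search text is illustrative):

```shell
# Match query against the ik-analyzed "title" field (index/field names from dkjhl.json)
query='{"query":{"match":{"title":"中文分词"}}}'

# Sanity-check the body locally before sending it to the cluster
printf '%s' "$query" | python3 -m json.tool > /dev/null && echo "query OK"
```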
Starting Elasticsearch the next time
On all nodes:
cd /home/elasticsearch-2.2.0/
su bigdata
Start it in the background:
$ bin/elasticsearch -d
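With `-d` the prompt returns immediately, so it helps to poll the HTTP port before moving on. A sketch; the retry count and sleep are arbitrary, and on this cluster you would pass 192.168.25.151 (and so on) as the host:

```shell
# Poll an Elasticsearch node's HTTP port until it answers, or give up
wait_for_es() {
  host=$1
  for _ in 1 2 3; do
    if curl -s "http://$host:9200" > /dev/null; then
      echo up
      return 0
    fi
    sleep 1
  done
  echo down
  return 1
}

# Example: wait_for_es 192.168.25.151
```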
On hadoop1:
cd /home/kibana-4.4.1-linux-x64/
Start it in the background:
bin/kibana &
After a short wait, the status information is printed; then type "exit" and press Enter.