Installing and Configuring ELK on CentOS 7

1. Introduction

  1. Logstash is an open-source, server-side data processing pipeline that can ingest data from multiple sources simultaneously, transform it, and then send it to your favorite "stash".

  2. The Beats platform is a collection of single-purpose data shippers. They send data from hundreds or thousands of machines and systems to Logstash or Elasticsearch.

  3. Kibana lets you visualize your Elasticsearch data and navigate the Elastic Stack, so you can do anything from tracking query load to understanding how requests flow through your entire application.

  4. Elasticsearch is a distributed, RESTful search and analytics engine capable of addressing a growing number of use cases. As the heart of the Elastic Stack, it centrally stores your data and helps you discover the expected and uncover the unexpected.

  5. Official documentation

Name            Download        Installation
Logstash        logstash        yum
Filebeat        filebeat        yum
Kibana          Kibana          yum
Elasticsearch   Elasticsearch   yum

2. Installation and Configuration

1. Create the file elasticsearch.repo in the /etc/yum.repos.d/ directory

# Create the file
touch /etc/yum.repos.d/elasticsearch.repo

# Edit the file contents
vim /etc/yum.repos.d/elasticsearch.repo
[elasticsearch-7.x]
name=Elasticsearch repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md

[logstash-7.x]
name=Elastic repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md

[kibana-7.x]
name=Kibana repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md

[elastic-7.x]
name=Elastic repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
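
After saving the repo file, it can be worth importing the Elastic signing key and checking that yum actually sees the new repositories; a quick optional check (the grep pattern is just a convenience filter):

# Import the GPG key so package signatures can be verified
rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch

# Confirm the Elastic repositories are listed
yum repolist enabled | grep -iE 'elastic|logstash|kibana'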

2. Install ELK

sudo yum install elasticsearch logstash kibana

# Install as needed
sudo yum install filebeat
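
To confirm which versions were installed, you can query the RPM database (optional):

rpm -q elasticsearch logstash kibana

# If Filebeat was installed as well
rpm -q filebeat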

3. Configuration

The packages are installed under /usr/share/xxx/ (program files and tools); the configuration files live in /etc/xxx/,
where xxx is elasticsearch, filebeat, logstash, and so on.

3.1 Configure Elasticsearch

vim /etc/elasticsearch/elasticsearch.yml

The configuration is as follows:

# Cluster name
cluster.name: my-elasticsearch
# Name of the current node
node.name: node-1

# Bind address: use 0.0.0.0 for external access, otherwise bind to localhost
network.host: 0.0.0.0
http.port: 9200
# Allow cross-origin requests
http.cors.enabled: true
http.cors.allow-origin: "*"
http.cors.allow-headers: Authorization,X-Requested-With,Content-Length,Content-Type
# Require authentication
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true

# Required when bootstrapping the cluster for the first time
cluster.initial_master_nodes: ["node-1"]
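
Because network.host is set to 0.0.0.0 for external access, keep in mind that CentOS 7 usually runs firewalld by default; if it is active on your machine, ports 9200 (Elasticsearch) and 5601 (Kibana) have to be opened as well. A minimal sketch, assuming the stock firewalld setup:

firewall-cmd --permanent --add-port=9200/tcp
firewall-cmd --permanent --add-port=5601/tcp
firewall-cmd --reload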

3.2 Configure TLS and authentication

Reference documentation

1. Create the certificates
# Generate the CA certificate; press Enter twice to accept the defaults
/usr/share/elasticsearch/bin/elasticsearch-certutil ca

# Press Enter three times to accept the defaults
/usr/share/elasticsearch/bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12

# Create the certificate directory
mkdir /etc/elasticsearch/cert
# Move the certificates
mv /usr/share/elasticsearch/*.p12 /etc/elasticsearch/cert/

# Fix the ownership
chown -R elasticsearch:elasticsearch /etc/elasticsearch/cert/
2. Update the configuration:
vim /etc/elasticsearch/elasticsearch.yml
http.cors.allow-headers: Authorization,X-Requested-With,Content-Length,Content-Type
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: cert/elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: cert/elastic-certificates.p12

Restart Elasticsearch

service elasticsearch restart
3. Generate a client certificate:
/usr/share/elasticsearch/bin/elasticsearch-certutil cert --ca \
/etc/elasticsearch/cert/elastic-stack-ca.p12 \
-name "CN=esuser,OU=dev,DC=weqhealth,DC=com"

At the prompts: press Enter for the CA password, enter client.p12 as the output filename, then press Enter again for the certificate password.

Split the certificate into PEM files

mv /usr/share/elasticsearch/client.p12 /etc/elasticsearch/cert/
cd  /etc/elasticsearch/cert/

openssl pkcs12 -in client.p12 -nocerts -nodes > client-key.pem
openssl pkcs12 -in client.p12 -clcerts -nokeys  > client.crt
openssl pkcs12 -in client.p12 -cacerts -nokeys -chain > client-ca.crt

chown -R elasticsearch:elasticsearch /etc/elasticsearch/cert/
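
From the same cert/ directory, the extracted PEM files can be sanity-checked with openssl (optional; no -passin is needed because the passwords were left empty above):

# Show the subject and validity period of the client certificate
openssl x509 -in client.crt -noout -subject -dates

# Check that the private key is readable and consistent
openssl rsa -in client-key.pem -check -noout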

3.3 Configure kibana.yml

vim /etc/kibana/kibana.yml

The configuration is as follows:

# Port to bind
server.port: 5601
# IP address to bind
server.host: "0.0.0.0"
# Elasticsearch endpoint
elasticsearch.hosts: ["http://localhost:9200"]

# Credentials for the built-in kibana user; the password itself is set later, but configure it here first
elasticsearch.username: "kibana"
elasticsearch.password: "kibanapassword"

# Use the Chinese localization for the UI
i18n.locale: "zh-CN"
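
If you would rather not leave the password in kibana.yml in plain text, Kibana 7.x also ships a keystore tool that can hold it instead; a sketch (run it as a user that can write to Kibana's data directory, or fix file ownership afterwards):

/usr/share/kibana/bin/kibana-keystore create
/usr/share/kibana/bin/kibana-keystore add elasticsearch.password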

3.4 Configure Logstash

vim /etc/logstash/logstash.yml

The configuration is as follows:

http.host: "0.0.0.0"
http.port: 9600-9700

# Monitoring requires authentication; configure it now, the actual password is set later
xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.username: logstash_system
xpack.monitoring.elasticsearch.password: logstashpassword
xpack.monitoring.elasticsearch.hosts: ["http://localhost:9200"]

To run Logstash as a system service, the Java path must be specified.
Edit the file /etc/logstash/startup.options:

# Uncomment this line and set it to your own Java path
JAVACMD=/opt/jdk/bin/java
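
The settings above only cover monitoring; an actual pipeline still has to be defined under /etc/logstash/conf.d/. Below is a minimal sketch that listens for Beats input and writes to Elasticsearch; the file name, user, password, and index pattern are placeholders and should be replaced with credentials that have write privileges on your cluster:

# Write an example pipeline (all names and credentials are placeholders)
cat > /etc/logstash/conf.d/beats.conf <<'EOF'
input {
  beats {
    port => 5044
  }
}

output {
  elasticsearch {
    hosts    => ["http://localhost:9200"]
    user     => "elastic"
    password => "changeme"
    index    => "filebeat-%{+YYYY.MM.dd}"
  }
}
EOF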

4. Startup

4.1 Start Elasticsearch

# Start
service elasticsearch start

# Stop
service elasticsearch stop

Note:
If startup fails, check the detailed logs in /var/log/elasticsearch/.
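
For example, you can follow the log directly; with the cluster name configured above, the main log file is named after it (adjust the name if you chose a different cluster.name):

# Follow the Elasticsearch log
tail -f /var/log/elasticsearch/my-elasticsearch.log

# Or inspect the systemd journal
journalctl -u elasticsearch -f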

You can use curl to check whether the service started successfully:

curl http://127.0.0.1:9200

4.2 Set the passwords for each component

Official documentation

  1. Run the following command to set the passwords for the built-in users:
/usr/share/elasticsearch/bin/elasticsearch-setup-passwords interactive

Follow the prompts to set each password; the passwords must match the ones used in the configuration files above.
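
If you prefer not to type each password, the same tool can generate random passwords for all built-in users instead; the generated values then have to be copied into the configuration files shown earlier:

/usr/share/elasticsearch/bin/elasticsearch-setup-passwords auto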

If you use Filebeat, its monitoring credentials must be set as well (in /etc/filebeat/filebeat.yml):

monitoring.enabled: true
monitoring.elasticsearch.username: beats_system
monitoring.elasticsearch.password: filebeatpassword
  2. Restart Elasticsearch
service elasticsearch restart
  3. Start Kibana
# Start
service kibana start

# Stop
service kibana stop

Detailed logs: /var/log/kibana/

  4. Start Logstash
    Install it as a system service:
# Note: JAVACMD must already be set in startup.options, otherwise the service cannot start
/usr/share/logstash/bin/system-install /etc/logstash/startup.options systemd

Start and stop

# Start
systemctl start logstash.service

# Stop
systemctl stop logstash.service
  5. Enable the services to start at boot
systemctl enable elasticsearch
systemctl enable kibana
systemctl enable logstash.service

Check whether they are enabled with:

systemctl list-unit-files
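
To check only these three services rather than scanning the full list, you can also run:

systemctl is-enabled elasticsearch kibana logstash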

5. Verify that everything is running

curl -u elastic:changeme http://localhost:9200

Output like the following indicates success:

{
  "name" : "node-1",
  "cluster_name" : "delta_grad",
  "cluster_uuid" : "6JCU_klGTlaVjhzx8hwzXQ",
  "version" : {
    "number" : "7.4.2",
    "build_flavor" : "default",
    "build_type" : "rpm",
    "build_hash" : "2f90bbf7b93631e52bafb59b3b049cb44ec25e96",
    "build_date" : "2019-10-28T20:40:44.881551Z",
    "build_snapshot" : false,
    "lucene_version" : "8.2.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}

Verify that the kibana user can authenticate against Elasticsearch:

curl -u kibana:changeme http://localhost:9200/_xpack?pretty

Change a password through the API:

curl -u elastic:changeme -XPOST 'http://127.0.0.1:9200/_security/user/remote_monitoring_user/_password' -H 'Content-Type: application/json' -d'
{
  "password" : "changeme"
}'
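
To confirm the new password works, you can authenticate with it against the _security/_authenticate endpoint (user and password as in the example above):

curl -u remote_monitoring_user:changeme 'http://127.0.0.1:9200/_security/_authenticate?pretty'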

Troubleshooting:

1. the default discovery settings are unsuitable for production use; at least one of [discovery.seed_hosts, discovery.seed_providers, cluster.initial_master_nodes] must be configured

Solution:

Add to the configuration file:

node.name: node-1

cluster.initial_master_nodes: ["node-1"]

2. Running bin/elasticsearch-setup-passwords interactive fails with: ERROR: X-Pack Security is disabled by configuration.

Solution:

Add to the configuration file:

xpack.security.enabled: true

xpack.security.transport.ssl.enabled: true

3. An Out of Memory error occurs when running on an Alibaba Cloud instance with 1 GB of RAM:

Solution: reduce the JVM heap size in /etc/elasticsearch/jvm.options:
-Xms128m
-Xmx128m

4. Failed to start logstash.service: Unit not found.

Solution:

/usr/share/logstash/bin/system-install /etc/logstash/startup.options systemd

5. /usr/share/logstash/bin/system-install: line 88: #: command not found

Solution:
See the referenced articles; the fix is to increase the Linux swap space:

sudo /bin/dd if=/dev/zero of=/var/swap.1 bs=1M count=1024
sudo /sbin/mkswap /var/swap.1
sudo chmod 600 /var/swap.1
sudo /sbin/swapon /var/swap.1
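
The swap file above is only active until the next reboot; to keep it across reboots, it can also be added to /etc/fstab (optional):

echo '/var/swap.1 swap swap defaults 0 0' | sudo tee -a /etc/fstab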