First, download ELK from the official site: https://www.elastic.co/cn/downloads/
If the download is slow for you, you can reach me on QQ; I already have the files downloaded and can share them.
The environment here is Ubuntu 16.04, running in a virtual machine.
es: log storage
logstash: collects, transforms, and ships data (input and output)
filebeat: lightweight shipper for real-time log collection; for details see: https://www.elastic.co/cn/beats/filebeat
kibana: the dashboard UI that makes everything much easier to use
Software versions: 7.7.1 for every component below.
1. Installing Elasticsearch
Extract the Elasticsearch tarball:
tar -zxvf elasticsearch-7.7.1.tar.gz
Go into the config directory and edit the main config file:
/elk/elasticsearch-7.7.1/config$ vim elasticsearch.yml
Make the changes shown below. When you configure the data path, don't forget to create the data directory first.
# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
# Before you set out to tweak and tune the configuration, make sure you
# understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: elk-test
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /home/hcy/elk/elasticsearch-7.7.1/data
#
# Path to log files:
#
path.logs: /home/hcy/elk/elasticsearch-7.7.1/logs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 192.168.1.129
#
# Set a custom port for HTTP:
#
http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.seed_hosts: ["host1", "host2"]
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
#cluster.initial_master_nodes: ["node-1", "node-2"]
cluster.initial_master_nodes: ["node-1"]
# For more information, consult the discovery and cluster formation module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true
Start Elasticsearch from the bin directory:
hcy@ubuntu:~/elk/elasticsearch-7.7.1/bin$ ./elasticsearch &
Then open http://192.168.1.129:9200 in a browser.
If you get a response, Elasticsearch is up.
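The response from that URL is a small JSON document; the snippet below is a sketch that pulls the version number out of an illustrative copy of such a response with plain shell tools (in a live check you would pipe `curl -s http://192.168.1.129:9200` into the same filter):

```shell
# Illustrative copy of the JSON a healthy node returns on GET /
response='{
  "name" : "node-1",
  "cluster_name" : "elk-test",
  "version" : { "number" : "7.7.1" },
  "tagline" : "You Know, for Search"
}'
# Extract the version number field
version=$(echo "$response" | grep '"number"' | sed 's/.*"number" *: *"\([^"]*\)".*/\1/')
echo "$version"
```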
That completes the Elasticsearch installation.
Common problems and fixes:
1. Error:
ERROR: bootstrap checks failed max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
First switch to root and edit the sysctl config:
root@ubuntu:/home/hcy# vim /etc/sysctl.conf
Add this line:
vm.max_map_count=655360
Then reload the settings:
root@ubuntu:/home/hcy# sysctl -p
vm.max_map_count = 655360
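You can confirm the kernel picked up a large enough value with a quick check (262144 is the minimum the Elasticsearch bootstrap check demands; a small sketch assuming a Linux /proc filesystem):

```shell
# Minimum value the Elasticsearch bootstrap check requires
required=262144
# Current kernel value (same number that `sysctl vm.max_map_count` reports)
current=$(cat /proc/sys/vm/max_map_count)
if [ "$current" -ge "$required" ]; then
  echo "vm.max_map_count=$current: OK"
else
  echo "vm.max_map_count=$current: too low, raise it to at least $required"
fi
```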
Then switch back to your regular user (Elasticsearch refuses to run as root).
2. Error:
ERROR: [1] bootstrap checks failed [1]: the default discovery settings are unsuitable for production use; at least one of [discovery.seed_hosts, discovery.seed_providers, cluster.initial_master_nodes] must be configured
Fix: uncomment that line in the config file and change it to:
cluster.initial_master_nodes: ["node-1"]
2. Installing Logstash
1. Extract the Logstash tarball:
tar -zxvf logstash-7.7.1.tar.gz
2. Run the most basic Logstash pipeline:
bin/logstash -e 'input { stdin { } } output { stdout {} }'
The -e flag lets you put the configuration directly on the command line instead of in a file. This pipeline reads events from standard input (stdin) and writes structured events to standard output (stdout).
Once the pipeline starts, type hello world in the console and you will see the corresponding structured output.
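The same pipeline can also live in a config file (the filename test.conf below is just an example) and be started with bin/logstash -f test.conf; rubydebug is the codec the stdout output uses to pretty-print each event:

```
input {
  stdin { }
}
output {
  stdout { codec => rubydebug }
}
```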
3. Install the logstash-input-beats plugin:
hcy@ubuntu:~/elk/logstash-7.7.1$ ./bin/logstash-plugin install logstash-input-beats
Validating logstash-input-beats
Installing logstash-input-beats
Installation successful
4. Create a pipeline config file logstash.conf that listens on port 5044 for Beats connections and writes into an Elasticsearch index. The configuration:
input {
  beats {
    port => 5044
  }
}
output {
  elasticsearch {
    hosts => "192.168.1.129:9200"
    manage_template => false
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    # document_type is deprecated: Elasticsearch 7.x no longer has mapping types
    #document_type => "%{[@metadata][type]}"
  }
}
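The index option uses Logstash's sprintf format: %{[@metadata][beat]} and %{[@metadata][version]} come from metadata fields that Filebeat attaches to each event, and %{+YYYY.MM.dd} is the event's timestamp formatted as a date. A sketch of what the name expands to for a Filebeat 7.7.1 event (the field values are assumed):

```shell
beat="filebeat"    # [@metadata][beat] as set by Filebeat (assumed)
version="7.7.1"    # [@metadata][version] (assumed)
# %{+YYYY.MM.dd} formats the event timestamp; here we stand in today's date
index="${beat}-${version}-$(date +%Y.%m.%d)"
echo "$index"
```

So a day's worth of Filebeat events lands in one dated index, which makes old data easy to expire by deleting whole indices.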
5. Start the Logstash service:
./bin/logstash -f config/logstash.conf &
You should see Logstash's startup logs in the console.
3. Installing Kibana
3.1 Extract the Kibana tarball:
tar -zxvf kibana-7.7.1-linux-x86_64.tar.gz
3.2 Edit the Kibana configuration
Go into the config directory and edit kibana.yml:
hcy@ubuntu:~/elk/kibana-7.7.1-linux-x86_64/config$ vim kibana.yml
# Address Kibana binds to
server.host: "192.168.1.129"
# Elasticsearch address (elasticsearch.url is the old pre-7.x setting name)
# elasticsearch.url: "http://192.168.1.129:9200"
elasticsearch.hosts: ["http://192.168.1.129:9200"]
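A couple of other kibana.yml settings you may want to adjust while you are in the file (both optional; the values shown are examples, not requirements):

```yaml
# Port Kibana listens on (5601 is the default)
#server.port: 5601
# A human-readable name for this Kibana instance (example value)
#server.name: "elk-test-kibana"
```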
3.3 Start the Kibana service:
./bin/kibana &
Once the startup output settles, open 192.168.1.129:5601 in a browser and you should see the Kibana UI.
You can also load the bundled sample data sets to explore the interface; there is plenty to look at.
4. Installing Filebeat
1. Extract the Filebeat tarball:
tar -zxvf filebeat-7.7.1-linux-x86_64.tar.gz
2. Configure Filebeat
We ship logs to Logstash here, so fill in the Logstash output and comment out the Elasticsearch output:
hcy@ubuntu:~/elk/filebeat-7.7.1-linux-x86_64$ vim filebeat.yml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /opt/apps/elk/*.log
#output.elasticsearch:
#  hosts: ["localhost:9200"]
output.logstash:
  hosts: ["192.168.1.129:5044"]
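Filebeat expands the glob in paths itself, but you can preview which files it will pick up by running the same glob in the shell (the directory below is a throwaway example, not the real log directory):

```shell
# Create a throwaway directory with two fake log files and one non-log file
demo=/tmp/filebeat-glob-demo
mkdir -p "$demo"
touch "$demo/app.log" "$demo/error.log" "$demo/notes.txt"
# Only the *.log files match, just as Filebeat's paths glob would
ls "$demo"/*.log
```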
3. Start Filebeat:
./filebeat -e -c filebeat.yml -d "publish" &
As long as it runs without errors, events from the watched log files will flow through Logstash into Elasticsearch.
That completes our ELK setup!