A First Look at Log Management with Filebeat + Elasticsearch + Kibana

1. Deploy Elasticsearch in a container

docker run -d -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" docker.elastic.co/elasticsearch/elasticsearch:6.4.2

Check:

curl http://127.0.0.1:9200/_cat/health
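For scripting this check, the cluster status can be pulled out of the _cat/health line; a minimal sketch (the helper name is mine, host and port match the run command above):

```shell
# _cat/health prints a single line; the 4th whitespace-separated field
# is the cluster status (green / yellow / red)
health_status() {
  awk '{print $4}'
}

# usage (Elasticsearch on 127.0.0.1:9200 as started above):
#   curl -s http://127.0.0.1:9200/_cat/health | health_status
```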

2. Deploy Kibana in a container

docker run -d --name kibana -e ELASTICSEARCH_URL="http://your_ip:9200"  -p 5601:5601 docker.elastic.co/kibana/kibana:6.4.2

Check:
Open http://localhost:5601 in a browser.
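Besides the browser, Kibana 6.x also exposes a status API that can be checked from the shell; a small jq-free sketch (the helper name is mine):

```shell
# Pull the overall state out of Kibana's /api/status JSON response
kibana_state() {
  grep -o '"state":"[a-z]*"' | head -n 1
}

# usage (Kibana on localhost:5601 as started above):
#   curl -s http://localhost:5601/api/status | kibana_state
```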

3. Run Filebeat from the binary package

Download the binary package:

curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.4.2-linux-x86_64.tar.gz
tar xzvf filebeat-6.4.2-linux-x86_64.tar.gz

Edit filebeat.yml

A few things to watch: enabled must be set to true, and the entries under paths must be readable by the user running Filebeat.

- type: log

  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /var/lib/docker/containers/*/*.log
    #- c:\programdata\elasticsearch\logs\*

# Elasticsearch output configuration
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["localhost:9200"]
Filebeat (run as root here) refuses to load a config file owned by a different user, so fix the ownership, then start it in the background:

sudo chown root filebeat.yml
sudo nohup ./filebeat -e -c filebeat.yml -d "publish" &

During testing, Kibana kept failing to create an index pattern. Searching around turned up this explanation:

but no index will be created in ES until you load data from a source (like Logstash or Beats) or until you create it using the API yourself.
You can check what indices you have in your ES by running a "GET _cat/indices" on localhost:9200 (or your ES host and port).
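The index check suggested in that quote can be wrapped in a small helper (the name is mine) that looks for the filebeat-* indices Filebeat creates once events start arriving:

```shell
# Succeeds when _cat/indices output (on stdin) contains a filebeat index
has_filebeat_index() {
  grep -q 'filebeat-'
}

# usage (Elasticsearch on localhost:9200):
#   curl -s http://localhost:9200/_cat/indices | has_filebeat_index \
#     && echo "filebeat index present" || echo "no data has arrived yet"
```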

The root cause is that Filebeat never had any data to send: in this containerized setup it lacks read permission on the container log directory. I'll dig into it when I have time.
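To confirm the permission theory, the log glob from filebeat.yml can be walked to list the files the current user cannot read; a troubleshooting sketch (the function name is mine):

```shell
# Print every container log file under the given directory that the
# current user cannot read (defaults to the path used in filebeat.yml)
unreadable_logs() {
  dir=${1:-/var/lib/docker/containers}
  for f in "$dir"/*/*.log; do
    [ -e "$f" ] || continue    # glob matched nothing
    [ -r "$f" ] || echo "$f"
  done
}

# usage: run as the same user that runs Filebeat
#   unreadable_logs
```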
