Handling problems you may hit when installing and using elasticsearch-7.1.1 on Linux

1. Errors when running ES 7.1.1 on RHEL 6.2

ERROR: [6] bootstrap checks failed
[1]: max file descriptors [1024] for elasticsearch process is too low, increase to at least [65535]
[2]: memory locking requested for elasticsearch process but memory is not locked
[3]: max number of threads [1024] for user [es] is too low, increase to at least [4096]
[4]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
[5]: system call filters failed to install; check the logs and fix your configuration or disable system call filters at your own risk
[6]: the default discovery settings are unsuitable for production use; at least one of [discovery.seed_hosts, discovery.seed_providers, cluster.initial_master_nodes] must be configured

Solution:


vim /etc/security/limits.conf
*                soft   nofile           65536
*                hard   nofile           65536
es               soft   memlock          unlimited
es               hard   memlock          unlimited

vim /etc/security/limits.d/90-nproc.conf
*          soft    nproc     4096
root       soft    nproc     unlimited
Append the following line at the end of /etc/sysctl.conf:
vm.max_map_count=262144
Then run sysctl -p to apply it.

RHEL 6's kernel is too old for system call filters (error [5]), so disable them:
vim /opt/elasticsearch-7.1.1/config/elasticsearch.yml
bootstrap.system_call_filter: false
Error [6] is the discovery configuration issue covered in the next two sections.
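Once the limits and sysctl changes are in place (and you have logged back in as the es user), they can be sanity-checked against the thresholds from the bootstrap-check messages above. A minimal sketch; the helper function is my own, not part of ES:

```shell
#!/bin/sh
# Compare a current limit against the minimum ES 7 demands in its bootstrap checks.
check_min() {  # usage: check_min <name> <current> <required>
  name=$1; cur=$2; req=$3
  if [ "$cur" = "unlimited" ] || [ "$cur" -ge "$req" ] 2>/dev/null; then
    echo "$name OK ($cur >= $req)"
  else
    echo "$name TOO LOW ($cur < $req)"
  fi
}

# Thresholds taken from bootstrap check errors [1], [3], [4] above.
check_min "max file descriptors" "$(ulimit -n)" 65535
check_min "max user processes"   "$(ulimit -u)" 4096
check_min "vm.max_map_count"     "$(sysctl -n vm.max_map_count 2>/dev/null || echo 0)" 262144
```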

2. Running ES 7 with Docker on CentOS 7
docker pull elasticsearch:7.1.1
docker run -itd -p 9200:9200 -p 9300:9300 --name es1 elasticsearch:7.1.1
This errors with: "the default discovery settings are unsuitable for production use; at least one of [discovery.seed_hosts, discovery.seed_providers, cluster.initial_master_nodes] must be configured"
The fix is to run in single-node mode:
docker run -itd -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" --name es1 elasticsearch:7.1.1
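For repeatability, the same single-node run can be written as a docker-compose file. A sketch equivalent to the docker run command above (the compose file itself is mine, not from the original post):

```yaml
version: "3"
services:
  es1:
    image: elasticsearch:7.1.1
    container_name: es1
    environment:
      - discovery.type=single-node
    ports:
      - "9200:9200"
      - "9300:9300"
```

Start it with docker-compose up -d.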

3. When setting up an ES cluster, misconfigured cluster bootstrap settings produce errors such as:


[2019-06-17T21:04:33,627][INFO ][o.e.c.c.ClusterBootstrapService] [node-3] skipping cluster bootstrapping as local node does not match bootstrap requirements: [node-1, node-2]
[2019-06-17T21:04:43,631][WARN ][o.e.c.c.ClusterFormationFailureHelper] [node-3] master not discovered yet, this node has not previously joined a bootstrapped (v7+) cluster, and this node must discover master-eligible nodes [node-1, node-2] to bootstrap a cluster: have discovered []; discovery will continue using [10.127.158.45:9300, 10.128.126.189:9300] from hosts providers and [{node-3}{1n0BbUZAQv-BCFFqfVDKAg}{q3dOwtwKSPGT7mLnTqrlGA}{10.127.158.47}{10.127.158.47:9300}{ml.machine_memory=270443114496, xpack.installed=true, ml.max_open_jobs=20}] from last-known cluster state; node term 0, last-accepted version 0 in term 0

When configuring the cluster, just follow the options already present in the default config file:

[root@node189 config]# egrep -v "^#" elasticsearch.yml
cluster.name: logs-es
node.name: node-1
bootstrap.memory_lock: true

network.host: 10.128.126.189
discovery.seed_hosts: ["10.127.158.45", "10.128.126.189", "10.127.158.47"]
cluster.initial_master_nodes: ["node-1","node-2"]


bootstrap.system_call_filter: false
http.cors.allow-origin: "*" 
http.cors.enabled: true
discovery.zen.ping_timeout: 30s
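The other nodes use the same file with only node.name and network.host changed; the discovery settings must be identical on every node. A sketch for node-2, assuming (from the discovery log above) that it runs on 10.127.158.45:

```yaml
cluster.name: logs-es
node.name: node-2
bootstrap.memory_lock: true

network.host: 10.127.158.45
discovery.seed_hosts: ["10.127.158.45", "10.128.126.189", "10.127.158.47"]
cluster.initial_master_nodes: ["node-1", "node-2"]

bootstrap.system_call_filter: false
http.cors.allow-origin: "*"
http.cors.enabled: true
discovery.zen.ping_timeout: 30s
```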

4. Configuring Filebeat 7.1.1 to ship system log data directly into ES 7.1.1

filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/*.log
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: true
setup.template.settings:
  index.number_of_shards: 1
setup.kibana:
setup.template.name: "filebeat"
setup.template.pattern: "filebeat-*"
output.elasticsearch:
  hosts: ["10.127.158.47:9200"]
  indices:
    - index: "filebeat-10.127.158.47syslogs-%{+yyyy.MM.dd}"

processors:
  - add_locale: ~

However, the timestamps of the data Filebeat ships end up 8 hours ahead once they reach ES.
For example, in the Filebeat logs:

2019-06-20T16:09:21.457+0800	INFO	[monitoring]	log/log.go:144	Non-zero metrics in the last 30s	{"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":30,"time":{"ms":3}},"total":{"ticks":150,"time":{"ms":9},"value":150},"user":{"ticks":120,"time":{"ms":6}}},"handles":{"limit":{"hard":999999,"soft":999999},"open":9},"info":{"ephemeral_id":"9d9c7b26-ee8a-4622-a955-234dd74b1582","uptime":{"ms":180023}},"memstats":{"gc_next":7123280,"memory_alloc":4971472,"memory_total":15742824}},"filebeat":{"harvester":{"open_files":1,"running":1}},"libbeat":{"config":{"module":{"running":0},"reloads":3},"output":{"read":{"bytes":2355},"write":{"bytes":847}},"pipeline":{"clients":3,"events":{"active":25,"retry":21}}},"registrar":{"states":{"current":19}},"system":{"load":{"1":2.44,"15":2.43,"5":2.39,"norm":{"1":0.0381,"15":0.038,"5":0.0373}}}}}}
2019-06-20T16:09:51.457+0800	INFO	[monitoring]	log/log.go:144	Non-zero metrics in the last 30s	{"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":30,"time":{"ms":3}},"total":{"ticks":160,"time":{"ms":9},"value":160},"user":{"ticks":130,"time":{"ms":6}}},"handles":{"limit":{"hard":999999,"soft":999999},"open":9},"info":{"ephemeral_id":"9d9c7b26-ee8a-4622-a955-234dd74b1582","uptime":{"ms":210023}},"memstats":{"gc_next":7123280,"memory_alloc":5686592,"memory_total":16457944}},"filebeat":{"events":{"active":3,"added":3},"harvester":{"open_files":1,"running":1}},"libbeat":{"config":{"module":{"running":0},"reloads":3},"pipeline":{"clients":3,"events":{"active":28,"published":3,"total":3}}},"registrar":{"states":{"current":19}},"system":{"load":{"1":2.48,"15":2.43,"5":2.41,"norm":{"1":0.0388,"15":0.038,"5":0.0377}}}}}}

Every timestamp carries a +0800 offset...
Following the steps in https://wyp0596.github.io/2018/04/25/Common/elk_tz/, I found that once the pipeline is deleted it is not regenerated automatically, and ES no longer receives the data Filebeat sends.
Solution:
Use Logstash as an intermediary: data -> Filebeat -> Logstash -> ES.
The Filebeat configuration is:

filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/secure
  include_lines: [".*Failed.*",".*Accepted.*"]
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 1
setup.kibana:
output.logstash:
  hosts: ["localhost:5044"]
processors:
 - add_locale: ~

The Logstash configuration is:

# Sample Logstash configuration for creating a simple
# Beats -> Logstash -> Elasticsearch pipeline.

input {
  beats {
    port => 5044
  }
}

output {
  elasticsearch {
    hosts => ["http://10.127.158.47:9200"]
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-10.128.126.189securelog-%{+YYYY.MM.dd}"
    #user => "elastic"
    #password => "changeme"
  }
}

At this point the Filebeat logs still show the timestamps with 8 hours added, but the timestamps shown for the logs in Kibana are the current time, and there is no need to modify the date item in Filebeat's pipeline configuration file as described in https://wyp0596.github.io/2018/04/25/Common/elk_tz/.
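If you ever do need to shift a timestamp yourself instead of relying on this Filebeat -> Logstash path, Logstash's date filter can re-parse a field with an explicit timezone. This is a hedged sketch only; the source field name and its format are assumptions, not taken from the configs above:

```
filter {
  date {
    # Parse the assumed "timestamp" field as Asia/Shanghai local time
    # and write the result into @timestamp (stored as UTC).
    match    => ["timestamp", "yyyy-MM-dd HH:mm:ss"]
    timezone => "Asia/Shanghai"
    target   => "@timestamp"
  }
}
```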
5. Configuring Filebeat to collect ES log data and send it to Logstash
Edit elasticsearch.yml under Filebeat's modules.d directory:

# Module: elasticsearch
# Docs: https://www.elastic.co/guide/en/beats/filebeat/7.1/filebeat-module-elasticsearch.html

- module: elasticsearch
  # Server log
  server:
    enabled: true

    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    var.paths: ["/home/elasticsearch_work/logs/*.log"]

    # Convert the timestamp to UTC. Requires Elasticsearch >= 6.1.
    #var.convert_timezone: false

  gc:
    enabled: true
    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    #var.paths:

  audit:
    enabled: true
    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    #var.paths:

    # Convert the timestamp to UTC. Requires Elasticsearch >= 6.1.
    #var.convert_timezone: false

  slowlog:
    enabled: true
    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    var.paths: ["/home/elasticsearch_work/logs/*_index_search_slowlog.log","/home/elasticsearch_work/logs/*_index_indexing_slowlog.log"]

    # Convert the timestamp to UTC. Requires Elasticsearch >= 6.1.
    #var.convert_timezone: false

  deprecation:
    enabled: true
    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    #var.paths:

    # Convert the timestamp to UTC. Requires Elasticsearch >= 6.1.
    #var.convert_timezone: false
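Note the module also has to be enabled before this config is read. filebeat modules enable elasticsearch does that by dropping the .disabled suffix from the file under modules.d; a sketch that simulates the rename by hand in a throwaway directory (paths are illustrative):

```shell
#!/bin/sh
# Simulate what `filebeat modules enable elasticsearch` does on disk:
# rename modules.d/elasticsearch.yml.disabled -> modules.d/elasticsearch.yml
MODULES_D=$(mktemp -d)/modules.d   # stand-in for the real filebeat modules.d dir
mkdir -p "$MODULES_D"
touch "$MODULES_D/elasticsearch.yml.disabled"

enable_module() {  # usage: enable_module <module-name>
  f="$MODULES_D/$1.yml"
  [ -f "$f.disabled" ] && mv "$f.disabled" "$f"
}

enable_module elasticsearch
ls "$MODULES_D"
```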


6. Configuring a username and password for ES
1. Stop all ES nodes, then run on one node:

[es@node189 elasticsearch-7.1.1]$ ./bin/elasticsearch-certutil cert -out config/elastic-certificates.p12 -pass ""
This tool assists you in the generation of X.509 certificates and certificate
signing requests for use with SSL/TLS in the Elastic stack.

The 'cert' mode generates X.509 certificate and private keys.
    * By default, this generates a single certificate and key for use
       on a single instance.
    * The '-multiple' option will prompt you to enter details for multiple
       instances and will generate a certificate and key for each one
    * The '-in' option allows for the certificate generation to be automated by describing
       the details of each instance in a YAML file

    * An instance is any piece of the Elastic Stack that requires a SSL certificate.
      Depending on your configuration, Elasticsearch, Logstash, Kibana, and Beats
      may all require a certificate and private key.
    * The minimum required value for each instance is a name. This can simply be the
      hostname, which will be used as the Common Name of the certificate. A full
      distinguished name may also be used.
    * A filename value may be required for each instance. This is necessary when the
      name would result in an invalid file or directory name. The name provided here
      is used as the directory name (within the zip) and the prefix for the key and
      certificate files. The filename is required if you are prompted and the name
      is not displayed in the prompt.
    * IP addresses and DNS names are optional. Multiple values can be specified as a
      comma separated string. If no IP addresses or DNS names are provided, you may
      disable hostname verification in your SSL configuration.

    * All certificates generated by this tool will be signed by a certificate authority (CA).
    * The tool can automatically generate a new CA for you, or you can provide your own with the
         -ca or -ca-cert command line options.

By default the 'cert' mode produces a single PKCS#12 output file which holds:
    * The instance certificate
    * The private key for the instance certificate
    * The CA certificate

If you specify any of the following options:
    * -pem (PEM formatted output)
    * -keep-ca-key (retain generated CA key)
    * -multiple (generate multiple certificates)
    * -in (generate certificates from an input file)
then the output will be be a zip file containing individual certificate/key files


Certificates written to /opt/elasticsearch-7.1.1/config/elastic-certificates.p12

This file should be properly secured as it contains the private key for 
your instance.

This file is a self contained file and can be copied and used 'as is'
For each Elastic product that you wish to configure, you should copy
this '.p12' file to the relevant configuration directory
and then follow the SSL configuration instructions in the product guide.

Copy the generated certificate elastic-certificates.p12 to the config directory on the other nodes.
2. On every node, vim config/elasticsearch.yml and add:

xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: elastic-certificates.p12

3. Start the ES nodes so they form the cluster, then run:

[es@node189 elasticsearch-7.1.1]$ ./bin/elasticsearch-setup-passwords  interactive
Initiating the setup of passwords for reserved users elastic,apm_system,kibana,logstash_system,beats_system,remote_monitoring_user.
You will be prompted to enter passwords as the process progresses.
Please confirm that you would like to continue [y/N]y


Enter password for [elastic]: # all set to "111111"; passwords must be at least 6 characters
Reenter password for [elastic]: 
Enter password for [apm_system]: 
Reenter password for [apm_system]: 
Enter password for [kibana]: 
Reenter password for [kibana]: 
Enter password for [logstash_system]: 
Reenter password for [logstash_system]: 
Enter password for [beats_system]: 
Reenter password for [beats_system]: 
Enter password for [remote_monitoring_user]: 
Reenter password for [remote_monitoring_user]: 
Changed password for user [apm_system]
Changed password for user [kibana]
Changed password for user [logstash_system]
Changed password for user [beats_system]
Changed password for user [remote_monitoring_user]
Changed password for user [elastic]

Run this command on any one ES node to set the passwords; the other ES nodes sync them automatically.
Alternatively, run bin/elasticsearch-setup-passwords auto, which generates random passwords for the built-in stack users.
4. Test logging in:

[es@Antiy45 elasticsearch-7.1.1]$ curl -u "elastic:111111" 10.127.158.45:9200/_cat/health?v
epoch      timestamp cluster status node.total node.data shards pri relo init unassign pending_tasks max_task_wait_time active_shards_percent
1561354439 05:33:59  logs-es green           3         3     42  21    0    0        0             0                  -                100.0%
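When scripting against this endpoint, the status column is usually what you want. A small sketch that extracts it, using the sample output line above in place of a live curl:

```shell
#!/bin/sh
# Pull the cluster status (4th column) out of a `_cat/health?v` data line.
# The sample stands in for: curl -s -u "elastic:111111" 10.127.158.45:9200/_cat/health
sample='1561354439 05:33:59  logs-es green 3 3 42 21 0 0 0 0 - 100.0%'
status=$(echo "$sample" | awk '{print $4}')
echo "$status"   # green
```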

5. Configure Kibana: open the config/kibana.yml file and find lines like the following:
#elasticsearch.username: "user"
#elasticsearch.password: "pass"
Uncomment them and set the username to kibana and the password to 111111, then start Kibana. Note that when logging into the Kibana web page from a browser, use the elastic account, not kibana.
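With the comments removed and this setup's values filled in, the relevant kibana.yml lines look like this (the password is the one chosen above):

```yaml
elasticsearch.username: "kibana"
elasticsearch.password: "111111"
```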
6. Configure Logstash: if the output section of the Logstash config uses logstash_system as the user, it fails with [2019-06-24T13:49:25,827][ERROR][logstash.outputs.elasticsearch] Encountered a retryable error. Will Retry with exponential backoff {:code=>403, :url=>"http://10.127.158.47:9200/_bulk"}
You must use elastic.
The default is:

output {
  elasticsearch {
    hosts => ["http://10.127.158.47:9200"]
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-10.127.158.47syslog-%{+YYYY.MM.dd}"   
    #user => "elastic"
    #password => "changeme"
  }
}

Change it to:

output {
  elasticsearch {
    hosts => ["http://10.127.158.47:9200"]
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-10.127.158.47syslog-%{+YYYY.MM.dd}"       
    user => "elastic"
    password => "111111"
  }
}

7. Installing the cerebro plugin
After getting ES installed and running on RHEL 6.2, download cerebro-0.8.4.tgz, extract it, and edit the ES url, login username, and password at the end of the application.conf file in the extracted conf directory. After that, running
./bin/cerebro --version fails with "bad root path".
This problem does not need to be fixed; follow the readme and run

bin/cerebro -Dhttp.port=1234 -Dhttp.address=10.128.126.189

Visiting port 1234 directly shows the plugin started and is running successfully.
This problem was not observed on CentOS 7.6.
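For reference, the part of application.conf to edit is the hosts list at the end of the file. A sketch from memory of cerebro's sample config, filled in with this cluster's values; check the bundled file for the exact shape:

```
hosts = [
  {
    host = "http://10.127.158.47:9200"
    name = "logs-es"
    auth = {
      username = "elastic"
      password = "111111"
    }
  }
]
```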
