Troubleshooting problems you may hit when installing and using elasticsearch-7.1.1 on Linux

1. Errors when running ES 7.1.1 on RHEL 6.2

ERROR: [6] bootstrap checks failed
[1]: max file descriptors [1024] for elasticsearch process is too low, increase to at least [65535]
[2]: memory locking requested for elasticsearch process but memory is not locked
[3]: max number of threads [1024] for user [es] is too low, increase to at least [4096]
[4]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
[5]: system call filters failed to install; check the logs and fix your configuration or disable system call filters at your own risk
[6]: the default discovery settings are unsuitable for production use; at least one of [discovery.seed_hosts, discovery.seed_providers, cluster.initial_master_nodes] must be configured

Solution:


vim /etc/security/limits.conf
*                soft    nofile           65536
*                hard    nofile           65536
es               soft    memlock          unlimited
es               hard    memlock          unlimited

vim /etc/security/limits.d/90-nproc.conf
*                soft    nproc            4096
root             soft    nproc            unlimited
(nproc must be at least 4096 for the es user to satisfy bootstrap check [3])

Append the following line at the end of /etc/sysctl.conf:
vm.max_map_count=262144
then run sysctl -p

vim /opt/elasticsearch-7.1.1/config/elasticsearch.yml
bootstrap.system_call_filter: false
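
A quick sanity check after logging in again as the es user (the limits only apply to new sessions); these verification commands are an addition here, not part of the original fix:

ulimit -n                  # expect 65536
ulimit -u                  # expect 4096
ulimit -l                  # expect unlimited
sysctl vm.max_map_count    # expect vm.max_map_count = 262144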

2. Running ES 7 with Docker on CentOS 7
docker pull elasticsearch:7.1.1
docker run -itd -p 9200:9200 -p 9300:9300 --name es1 elasticsearch:7.1.1
This fails with the error: "the default discovery settings are unsuitable for production use; at least one of [discovery.seed_hosts, discovery.seed_providers, cluster.initial_master_nodes] must be configured"
The fix is to run it in single-node mode:
docker run -itd -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" --name es1 elasticsearch:7.1.1
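
To confirm the container is up (assumed verification commands, not part of the original steps):

docker logs -f es1              # wait until the "started" message appears
curl -s http://localhost:9200   # should return the cluster name and version JSON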

3. When building an ES cluster, if the names listed in cluster.initial_master_nodes do not match the nodes' actual node.name values, errors like the following appear:


[2019-06-17T21:04:33,627][INFO ][o.e.c.c.ClusterBootstrapService] [node-3] skipping cluster bootstrapping as local node does not match bootstrap requirements: [node-1, node-2]
[2019-06-17T21:04:43,631][WARN ][o.e.c.c.ClusterFormationFailureHelper] [node-3] master not discovered yet, this node has not previously joined a bootstrapped (v7+) cluster, and this node must discover master-eligible nodes [node-1, node-2] to bootstrap a cluster: have discovered []; discovery will continue using [10.127.158.45:9300, 10.128.126.189:9300] from hosts providers and [{node-3}{1n0BbUZAQv-BCFFqfVDKAg}{q3dOwtwKSPGT7mLnTqrlGA}{10.127.158.47}{10.127.158.47:9300}{ml.machine_memory=270443114496, xpack.installed=true, ml.max_open_jobs=20}] from last-known cluster state; node term 0, last-accepted version 0 in term 0

When configuring the cluster, just fill in the options that are already present (commented out) in the default configuration file:

[root@node189 config]# egrep -v "^#" elasticsearch.yml
cluster.name: logs-es
node.name: node-1
bootstrap.memory_lock: true

network.host: 10.128.126.189
discovery.seed_hosts: ["10.127.158.45", "10.128.126.189", "10.127.158.47"]
cluster.initial_master_nodes: ["node-1","node-2"]


bootstrap.system_call_filter: false
http.cors.allow-origin: "*" 
http.cors.enabled: true
discovery.zen.ping_timeout: 30s
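
Once all nodes are started with this configuration, cluster formation can be verified (an assumed check, run from any node):

curl 10.128.126.189:9200/_cat/nodes?v     # should list node-1, node-2 and node-3
curl 10.128.126.189:9200/_cat/health?v    # status should be green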

4. Configuration for shipping system log data from Filebeat 7.1.1 directly into ES 7.1.1

filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/*.log
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: true
setup.template.settings:
  index.number_of_shards: 1
setup.kibana:
setup.template.name: "filebeat"
setup.template.pattern: "filebeat-*"
output.elasticsearch:
  hosts: ["10.127.158.47:9200"]
  indices:
    - index: "filebeat-10.127.158.47syslogs-%{+yyyy.MM.dd}"

processors:
  - add_locale: ~
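
With this config in place, Filebeat can be started in the foreground and the resulting index checked (assumed verification commands):

./filebeat -e -c filebeat.yml
curl "10.127.158.47:9200/_cat/indices/filebeat-*?v"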

However, the timestamp of the data shipped by Filebeat has 8 hours added to it once it reaches ES.
This can be seen in the Filebeat log:

2019-06-20T16:09:21.457+0800	INFO	[monitoring]	log/log.go:144	Non-zero metrics in the last 30s	{"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":30,"time":{"ms":3}},"total":{"ticks":150,"time":{"ms":9},"value":150},"user":{"ticks":120,"time":{"ms":6}}},"handles":{"limit":{"hard":999999,"soft":999999},"open":9},"info":{"ephemeral_id":"9d9c7b26-ee8a-4622-a955-234dd74b1582","uptime":{"ms":180023}},"memstats":{"gc_next":7123280,"memory_alloc":4971472,"memory_total":15742824}},"filebeat":{"harvester":{"open_files":1,"running":1}},"libbeat":{"config":{"module":{"running":0},"reloads":3},"output":{"read":{"bytes":2355},"write":{"bytes":847}},"pipeline":{"clients":3,"events":{"active":25,"retry":21}}},"registrar":{"states":{"current":19}},"system":{"load":{"1":2.44,"15":2.43,"5":2.39,"norm":{"1":0.0381,"15":0.038,"5":0.0373}}}}}}
2019-06-20T16:09:51.457+0800	INFO	[monitoring]	log/log.go:144	Non-zero metrics in the last 30s	{"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":30,"time":{"ms":3}},"total":{"ticks":160,"time":{"ms":9},"value":160},"user":{"ticks":130,"time":{"ms":6}}},"handles":{"limit":{"hard":999999,"soft":999999},"open":9},"info":{"ephemeral_id":"9d9c7b26-ee8a-4622-a955-234dd74b1582","uptime":{"ms":210023}},"memstats":{"gc_next":7123280,"memory_alloc":5686592,"memory_total":16457944}},"filebeat":{"events":{"active":3,"added":3},"harvester":{"open_files":1,"running":1}},"libbeat":{"config":{"module":{"running":0},"reloads":3},"pipeline":{"clients":3,"events":{"active":28,"published":3,"total":3}}},"registrar":{"states":{"current":19}},"system":{"load":{"1":2.48,"15":2.43,"5":2.41,"norm":{"1":0.0388,"15":0.038,"5":0.0377}}}}}}

All the times carry a +0800 offset...
I tried the steps from https://wyp0596.github.io/2018/04/25/Common/elk_tz/, but found that once the Filebeat ingest pipeline was deleted it was not regenerated automatically, and ES no longer received any data from Filebeat.
Workaround:
Use Logstash as an intermediary: data -> Filebeat -> Logstash -> ES.
The Filebeat configuration is:

filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/secure
  include_lines: [".*Failed.*",".*Accepted.*"]
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 1
setup.kibana:
output.logstash:
  hosts: ["localhost:5044"]
processors:
  - add_locale: ~

The Logstash configuration is:

# Sample Logstash configuration for creating a simple
# Beats -> Logstash -> Elasticsearch pipeline.

input {
  beats {
    port => 5044
  }
}

output {
  elasticsearch {
    hosts => ["http://10.127.158.47:9200"]
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-10.128.126.189securelog-%{+YYYY.MM.dd}"
    #user => "elastic"
    #password => "changeme"
  }
}
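
Logstash is then started with this pipeline file (the file name beats-to-es.conf is only a placeholder for wherever the config above is saved):

bin/logstash -f config/beats-to-es.conf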

With this setup the Filebeat log still shows timestamps with 8 hours added, but the timestamps of the log entries shown in Kibana are the current time, and there is no need to modify the date item in Filebeat's pipeline configuration file as described in https://wyp0596.github.io/2018/04/25/Common/elk_tz/.

5. Configuration for collecting ES log data with Filebeat and sending it to Logstash
Edit elasticsearch.yml under the modules.d directory of the Filebeat installation:

# Module: elasticsearch
# Docs: https://www.elastic.co/guide/en/beats/filebeat/7.1/filebeat-module-elasticsearch.html

- module: elasticsearch
  # Server log
  server:
    enabled: true

    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    var.paths: ["/home/elasticsearch_work/logs/*.log"]

    # Convert the timestamp to UTC. Requires Elasticsearch >= 6.1.
    #var.convert_timezone: false

  gc:
    enabled: true
    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    #var.paths:

  audit:
    enabled: true
    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    #var.paths:

    # Convert the timestamp to UTC. Requires Elasticsearch >= 6.1.
    #var.convert_timezone: false

  slowlog:
    enabled: true
    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    var.paths: ["/home/elasticsearch_work/logs/*_index_search_slowlog.log","/home/elasticsearch_work/logs/*_index_indexing_slowlog.log"]

    # Convert the timestamp to UTC. Requires Elasticsearch >= 6.1.
    #var.convert_timezone: false

  deprecation:
    enabled: true
    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    #var.paths:

    # Convert the timestamp to UTC. Requires Elasticsearch >= 6.1.
    #var.convert_timezone: false
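
If the module file is still named elasticsearch.yml.disabled, enable it first with the standard Filebeat module commands (an assumed step, run from the Filebeat directory):

./filebeat modules enable elasticsearch
./filebeat modules list    # "elasticsearch" should now appear under Enabled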


6. Configuring a username and password for ES
1. Stop all the ES nodes, then run on one node:

[es@node189 elasticsearch-7.1.1]$ ./bin/elasticsearch-certutil cert -out config/elastic-certificates.p12 -pass ""
This tool assists you in the generation of X.509 certificates and certificate
signing requests for use with SSL/TLS in the Elastic stack.

The 'cert' mode generates X.509 certificate and private keys.
    * By default, this generates a single certificate and key for use
       on a single instance.
    * The '-multiple' option will prompt you to enter details for multiple
       instances and will generate a certificate and key for each one
    * The '-in' option allows for the certificate generation to be automated by describing
       the details of each instance in a YAML file

    * An instance is any piece of the Elastic Stack that requires a SSL certificate.
      Depending on your configuration, Elasticsearch, Logstash, Kibana, and Beats
      may all require a certificate and private key.
    * The minimum required value for each instance is a name. This can simply be the
      hostname, which will be used as the Common Name of the certificate. A full
      distinguished name may also be used.
    * A filename value may be required for each instance. This is necessary when the
      name would result in an invalid file or directory name. The name provided here
      is used as the directory name (within the zip) and the prefix for the key and
      certificate files. The filename is required if you are prompted and the name
      is not displayed in the prompt.
    * IP addresses and DNS names are optional. Multiple values can be specified as a
      comma separated string. If no IP addresses or DNS names are provided, you may
      disable hostname verification in your SSL configuration.

    * All certificates generated by this tool will be signed by a certificate authority (CA).
    * The tool can automatically generate a new CA for you, or you can provide your own with the
         -ca or -ca-cert command line options.

By default the 'cert' mode produces a single PKCS#12 output file which holds:
    * The instance certificate
    * The private key for the instance certificate
    * The CA certificate

If you specify any of the following options:
    * -pem (PEM formatted output)
    * -keep-ca-key (retain generated CA key)
    * -multiple (generate multiple certificates)
    * -in (generate certificates from an input file)
then the output will be be a zip file containing individual certificate/key files


Certificates written to /opt/elasticsearch-7.1.1/config/elastic-certificates.p12

This file should be properly secured as it contains the private key for 
your instance.

This file is a self contained file and can be copied and used 'as is'
For each Elastic product that you wish to configure, you should copy
this '.p12' file to the relevant configuration directory
and then follow the SSL configuration instructions in the product guide.

Copy the generated certificate elastic-certificates.p12 to the config directory of every other node.
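For example, with scp (assuming the same install path and es user on the other two nodes):

scp config/elastic-certificates.p12 es@10.127.158.45:/opt/elasticsearch-7.1.1/config/
scp config/elastic-certificates.p12 es@10.127.158.47:/opt/elasticsearch-7.1.1/config/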
2. On every node, edit config/elasticsearch.yml and add:

xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: elastic-certificates.p12

3. Start all the ES nodes so the cluster forms, then run:

[es@node189 elasticsearch-7.1.1]$ ./bin/elasticsearch-setup-passwords  interactive
Initiating the setup of passwords for reserved users elastic,apm_system,kibana,logstash_system,beats_system,remote_monitoring_user.
You will be prompted to enter passwords as the process progresses.
Please confirm that you would like to continue [y/N]y


Enter password for [elastic]:    # set all of them to 111111; passwords must be at least 6 characters
Reenter password for [elastic]: 
Enter password for [apm_system]: 
Reenter password for [apm_system]: 
Enter password for [kibana]: 
Reenter password for [kibana]: 
Enter password for [logstash_system]: 
Reenter password for [logstash_system]: 
Enter password for [beats_system]: 
Reenter password for [beats_system]: 
Enter password for [remote_monitoring_user]: 
Reenter password for [remote_monitoring_user]: 
Changed password for user [apm_system]
Changed password for user [kibana]
Changed password for user [logstash_system]
Changed password for user [beats_system]
Changed password for user [remote_monitoring_user]
Changed password for user [elastic]

Run this command on any one ES node to set the passwords; the other ES nodes synchronize automatically.
You can also run bin/elasticsearch-setup-passwords auto, which generates random passwords for the built-in stack users.
4. Test logging in:

[es@Antiy45 elasticsearch-7.1.1]$ curl -u "elastic:111111" 10.127.158.45:9200/_cat/health?v
epoch      timestamp cluster status node.total node.data shards pri relo init unassign pending_tasks max_task_wait_time active_shards_percent
1561354439 05:33:59  logs-es green           3         3     42  21    0    0        0             0                  -                100.0%

5. Configure Kibana. Open config/kibana.yml and find the lines that look like:
#elasticsearch.username: "user"
#elasticsearch.password: "pass"
Uncomment them and set the username to kibana and the password to 111111, then start Kibana. Note that when logging in to the Kibana web UI in a browser, the account to use is elastic, not kibana.
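After the edit, the two lines in kibana.yml should read (using the credentials set above):

elasticsearch.username: "kibana"
elasticsearch.password: "111111"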
6. Configure Logstash. If the output section of the Logstash configuration uses the logstash_system user, it fails with: [2019-06-24T13:49:25,827][ERROR][logstash.outputs.elasticsearch] Encountered a retryable error. Will Retry with exponential backoff {:code=>403, :url=>"http://10.127.158.47:9200/_bulk"}
You must use the elastic user instead.
The default is:

output {
  elasticsearch {
    hosts => ["http://10.127.158.47:9200"]
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-10.127.158.47syslog-%{+YYYY.MM.dd}"   
    #user => "elastic"
    #password => "changeme"
  }
}

Change it to:

output {
  elasticsearch {
    hosts => ["http://10.127.158.47:9200"]
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-10.127.158.47syslog-%{+YYYY.MM.dd}"       
    user => "elastic"
    password => "111111"
  }
}

7. Installing the cerebro plugin
After ES was installed and running successfully on RHEL 6.2, download cerebro-0.8.4.tgz, extract it, and edit the ES URL, login username and password at the end of the application.conf file in the extracted conf directory. Running
./bin/cerebro --version
afterwards fails with a "bad root path" error.
This error does not need to be fixed. Just follow the README and run

bin/cerebro -Dhttp.port=1234 -Dhttp.address=10.128.126.189

Visiting port 1234 directly shows that the plugin has started and is running successfully.
This issue was not observed on CentOS 7.6.
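
For reference, the connection block edited at the end of conf/application.conf looks roughly like this (a sketch of cerebro's HOCON hosts list; the host and credentials are the ones used above, adjust as needed):

hosts = [
  {
    host = "http://10.127.158.47:9200"
    name = "logs-es"
    auth = {
      username = "elastic"
      password = "111111"
    }
  }
]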
