Deploying an elasticsearch 7.6.2 cluster + kibana 7.6.2 + es-head + Chinese analysis on docker swarm


The previous article: Installing and configuring elasticsearch 7.6.2 + kibana 7.6.2 + Chinese analysis 7.6.2 on Linux CentOS 7.

This post walks through my approach to deploying an es cluster on a docker swarm. It is the result of my own experimentation; if you have a better solution, please leave a comment.

Contents

Deploying an elasticsearch 7.6.2 cluster + kibana 7.6.2 + es-head + Chinese analysis on docker swarm
Environment
Preparing the images
Configuring elasticsearch
Create the directories
Add the elasticsearch configuration files
Add the Dockerfile
Add the Chinese (IK) analyzer
Starting elasticsearch
Build the elasticsearch image
Write the compose files
Start elasticsearch
Starting kibana
Create the kibana configuration file
Start kibana
Starting elasticsearch-head
1. Run head standalone
2. Start in cluster mode


Environment:

  • Three servers, referred to below simply as 68, 69 and 70
10.**.**.68 CentOS 7.4 (64-bit)
10.**.**.69 CentOS 7.4 (64-bit)
10.**.**.70 CentOS 7.4 (64-bit)
  • A docker swarm cluster already initialized across the three nodes

 

[root@elcndc2zhda01 ~]# docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
skq2d634av8scq5uumciwuedf *   elcndc2zhda01       Ready               Active              Leader              19.03.8
1j1i345greywliwuyuwro01if     elcndc2zhda02       Ready               Active              Reachable           19.03.8
xqxl9gs9igb7jyo3cohsarehi     elcndc2zhda03       Ready               Active              Reachable           19.03.8

Note: for convenience, my three cluster nodes share an NAS-mounted directory at /data. If you do not have a shared directory, some files (for example the kibana files mentioned below) have to be created on all three servers. In other words, anything placed under /data below is shared by my three servers, so I only operate on one of them; without a shared directory, repeat the step on every server.

Preparing the images

  • Pull the official images on each of the three servers: elasticsearch:7.6.2, kibana:7.6.2 (must match the es version) and elasticsearch-head:5
docker pull docker.elastic.co/elasticsearch/elasticsearch:7.6.2
docker pull docker.elastic.co/kibana/kibana:7.6.2
docker pull mobz/elasticsearch-head:5
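
To confirm the downloads before going further, a simple listing on each host should show all three images:

docker images | grep -E 'elasticsearch|kibana'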

With the images in place, move on to configuring es.

Configuring elasticsearch

  • Create the directories

Under /data, create the configuration directories for the three servers:

mkdir /data/erms/es/{kinana,node1,node2,node3} -p

The three node directories are one per server; without a shared directory, simply create one node directory on each server by hand.

The result:

[root@elcndc2zhda01 es]# cd /data/erms/es
[root@elcndc2zhda01 es]# ls
kinana  node1  node2  node3
[root@elcndc2zhda01 es]# 
  • Add the elasticsearch configuration files

In the node1 directory, create an elasticsearch.yml configuration file with the content below.

Replace the discovery.seed_hosts entries with the IPs of your own cluster.

# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
 cluster.name: es-cluster
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
 node.name: node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
#path.data: /path/to/data
#
# Path to log files:
#
#path.logs: /path/to/logs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
 network.host: 0.0.0.0
#
# Set a custom port for HTTP:
#
 http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
 discovery.seed_hosts: ["10.**.**.68:10010","10.**.**.69:10010","10.**.**.70:10010"]
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
 cluster.initial_master_nodes: ["node-1","node-2","node-3"]
 
# bootstrap.memory_lock: true
#
# For more information, consult the discovery and cluster formation module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true
 http.cors.enabled: true
 http.cors.allow-origin: '*'
 http.cors.allow-headers: Authorization,X-Requested-With,Content-Length,Content-Type

In the node2 directory, create an elasticsearch.yml configuration file with the content below.

Replace the discovery.seed_hosts entries with the IPs of your own cluster.

# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
 cluster.name: es-cluster
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
 node.name: node-2
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
#path.data: /path/to/data
#
# Path to log files:
#
#path.logs: /path/to/logs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
 network.host: 0.0.0.0
#
# Set a custom port for HTTP:
#
 http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
 discovery.seed_hosts: ["10.**.**.68:10010","10.**.**.69:10010","10.**.**.70:10010"]
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
 cluster.initial_master_nodes: ["node-1","node-2","node-3"]
 
# bootstrap.memory_lock: true
#
# For more information, consult the discovery and cluster formation module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true
 http.cors.enabled: true
 http.cors.allow-origin: '*'
 http.cors.allow-headers: Authorization,X-Requested-With,Content-Length,Content-Type

 

In the node3 directory, create an elasticsearch.yml configuration file with the content below.

Replace the discovery.seed_hosts entries with the IPs of your own cluster.

# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
 cluster.name: es-cluster
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
 node.name: node-3
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
#path.data: /path/to/data
#
# Path to log files:
#
#path.logs: /path/to/logs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
 network.host: 0.0.0.0
#
# Set a custom port for HTTP:
#
 http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
 discovery.seed_hosts: ["10.**.**.68:10010","10.**.**.69:10010","10.**.**.70:10010"]
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
 cluster.initial_master_nodes: ["node-1","node-2","node-3"]
 
# bootstrap.memory_lock: true
#
# For more information, consult the discovery and cluster formation module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true
 http.cors.enabled: true
 http.cors.allow-origin: '*'
 http.cors.allow-headers: Authorization,X-Requested-With,Content-Length,Content-Type
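
One host-level prerequisite is worth calling out before building and starting anything: Elasticsearch in Docker normally requires the host's vm.max_map_count to be at least 262144, otherwise the nodes fail their bootstrap checks. If your three hosts are not already set up this way, something along these lines on each of them should do it:

# raise the mmap limit required by Elasticsearch's bootstrap checks
sysctl -w vm.max_map_count=262144
# persist the setting across reboots
echo 'vm.max_map_count=262144' >> /etc/sysctl.conf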

 

  • Add the Dockerfile

Add a Dockerfile with the following content to each of the node1, node2 and node3 directories:

The ADD lines at the end install the Chinese (IK) analyzer; delete them if you do not need it.

FROM docker.elastic.co/elasticsearch/elasticsearch:7.6.2
COPY --chown=elasticsearch:elasticsearch elasticsearch.yml /usr/share/elasticsearch/config/
ADD ik /usr/share/elasticsearch/plugins/ik
ADD ik/config /data/erms/es/node1/ik/config
  • Add the Chinese (IK) analyzer

Download the analyzer from https://github.com/medcl/elasticsearch-analysis-ik/releases

Pick the release matching your es version; here that is 7.6.2: https://github.com/medcl/elasticsearch-analysis-ik/releases/tag/v7.6.2

 

In each of the three node directories, create an ik directory, upload elasticsearch-analysis-ik-7.6.2.zip into it, unzip it, and then delete the zip. The commands, in order:

[root@elcndc2zhda01 node1]# mkdir ik
[root@elcndc2zhda01 node1]# cd ik
[root@elcndc2zhda01 ik]# unzip elasticsearch-analysis-ik-7.6.2.zip 
Archive:  elasticsearch-analysis-ik-7.6.2.zip
   creating: config/
  inflating: config/main.dic         
  inflating: config/quantifier.dic   
  inflating: config/extra_single_word_full.dic  
  inflating: config/IKAnalyzer.cfg.xml  
  inflating: config/surname.dic      
  inflating: config/suffix.dic       
  inflating: config/stopword.dic     
  inflating: config/extra_main.dic   
  inflating: config/extra_stopword.dic  
  inflating: config/preposition.dic  
  inflating: config/extra_single_word_low_freq.dic  
  inflating: config/extra_single_word.dic  
  inflating: elasticsearch-analysis-ik-7.6.2.jar  
  inflating: httpclient-4.5.2.jar    
  inflating: httpcore-4.4.4.jar      
  inflating: commons-logging-1.2.jar  
  inflating: commons-codec-1.9.jar   
  inflating: plugin-descriptor.properties  
  inflating: plugin-security.policy  
[root@elcndc2zhda01 ik]# rm -rf elasticsearch-analysis-ik-7.6.2.zip 
[root@elcndc2zhda01 ik]# ls
commons-codec-1.9.jar    config                               httpclient-4.5.2.jar  plugin-descriptor.properties
commons-logging-1.2.jar  elasticsearch-analysis-ik-7.6.2.jar  httpcore-4.4.4.jar    plugin-security.policy
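
At this point each node directory holds everything the image build in the next step needs; node1, for example, should look roughly like this:

[root@elcndc2zhda01 es]# ls /data/erms/es/node1
Dockerfile  elasticsearch.yml  ik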

Starting elasticsearch

  • Build the elasticsearch image

The image has to be built on each of the three servers. Because the node configurations differ slightly between master and data roles, each node gets its own config file, which is why it is done per server. If you know a better way, please leave a comment; this approach has nevertheless worked well for my cluster.

On server 68:

cd /data/erms/es/node1
docker build -t erms/es .

On server 69:

cd /data/erms/es/node2
docker build -t erms/es .

On server 70:

cd /data/erms/es/node3
docker build -t erms/es .
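
Before deploying, it does not hurt to confirm that the IK plugin actually made it into the freshly built image. Two quick, read-only checks (the exact plugin name printed by elasticsearch-plugin may differ):

docker run --rm erms/es ls /usr/share/elasticsearch/plugins
docker run --rm erms/es /usr/share/elasticsearch/bin/elasticsearch-plugin list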
  • Write the compose files

The docker compose files for the three servers are as follows.

Note: the es index data is not persisted in a manually created directory; it lives in docker volumes that docker manages itself.

The externally published port is 10010.

docker-compose-esnode1.yml

version: '3.3'

services:
  ermsesnode1:
      hostname: ermsesnode1
      image: erms/es
      labels:
        "type": "2"
      networks:
        - edoc2_default
      ports:
      - target: 9200
        published: 10010
        mode: host
      environment:
      - node.name=ermsesnode1
      - discovery.seed_hosts=ermsesnode2,ermsesnode3
      - cluster.initial_master_nodes=ermsesnode1,ermsesnode2,ermsesnode3
      - cluster.name=es-cluster
      #- bootstrap.memory_lock=false
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      volumes:
      - esdata2:/usr/share/elasticsearch/data
      deploy:
        #placement:
        #  constraints:
        #   - node.labels.nodetype == InDrive
        replicas: 1
        restart_policy:
          condition: on-failure
volumes:
  esdata2:
    driver: local
networks:
  edoc2_default:
     external:
       name: macrowing

docker-compose-esnode2.yml

version: '3.3'

services:
  ermsesnode2:
      hostname: ermsesnode2
      image: erms/es
      labels:
        "type": "2"
      networks:
        - edoc2_default
      ports:
      - target: 9200
        published: 10010
        mode: host
      environment:
      - node.name=ermsesnode2
      - discovery.seed_hosts=ermsesnode1,ermsesnode3
      - cluster.initial_master_nodes=ermsesnode1,ermsesnode2,ermsesnode3
      - cluster.name=es-cluster
      #- bootstrap.memory_lock=false
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      volumes:
      - esdata2:/usr/share/elasticsearch/data
      deploy:
        #placement:
        #  constraints:
        #   - node.labels.nodetype == InDrive
        replicas: 1
        restart_policy:
          condition: on-failure
volumes:
  esdata2:
    driver: local
networks:
  edoc2_default:
     external:
       name: macrowing

 

docker-compose-esnode3.yml

version: '3.3'

services:
  ermsesnode3:
      hostname: ermsesnode3
      image: erms/es
      labels:
        "type": "2"
      networks:
        - edoc2_default
      ports:
      - target: 9200
        published: 10010
        mode: host
      environment:
      - node.name=ermsesnode3
      - discovery.seed_hosts=ermsesnode2,ermsesnode1
      - cluster.initial_master_nodes=ermsesnode1,ermsesnode2,ermsesnode3
      - cluster.name=es-cluster
      #- bootstrap.memory_lock=false
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      volumes:
      - esdata2:/usr/share/elasticsearch/data
      deploy:
        #placement:
        #  constraints:
        #   - node.labels.nodetype == InDrive
        replicas: 1
        restart_policy:
          condition: on-failure
volumes:
  esdata2:
    driver: local 
networks:
  edoc2_default:
     external:
       name: macrowing
  • Start elasticsearch

Run the three compose files one after another to start the three es nodes. (I ran them all on the leader (manager) node; running one from each server works just as well.)

[root@elcndc2zhda01 ik]# docker stack deploy -c /data/erms/compose/docker-compose-esnode1.yml erms
Creating service erms_ermsesnode1
[root@elcndc2zhda01 ik]# docker stack deploy -c /data/erms/compose/docker-compose-esnode2.yml erms
Creating service erms_ermsesnode2
[root@elcndc2zhda01 ik]# docker stack deploy -c /data/erms/compose/docker-compose-esnode3.yml erms
Creating service erms_ermsesnode3
[root@elcndc2zhda01 ik]# docker service ls
ID                  NAME                   MODE                REPLICAS            IMAGE                                                       PORTS
cmkfhkxjya79        erms_ermsesnode1       replicated          1/1                 erms/es:latest                                              
vw0kde788sx4        erms_ermsesnode2       replicated          1/1                 erms/es:latest                                              
vrt2jf9qf6d7        erms_ermsesnode3       replicated          1/1                 erms/es:latest  
[root@elcndc2zhda01 ik]# 
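
If you want to see which swarm node each replica landed on without opening portainer, docker service ps prints a NODE column:

docker service ps erms_ermsesnode1 erms_ermsesnode2 erms_ermsesnode3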

Most clusters also run portainer, and checking there confirms that the three es nodes are each running on a different node of the swarm cluster.

Open a browser against any one of the servers at IP:10010; if es responds, it has started successfully.
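
The same check works from the shell. Using the masked IPs from the rest of this post, something like the following should report a green or yellow cluster with three nodes, and confirm that the IK analyzer installed earlier responds:

curl 'http://10.**.**.68:10010/_cluster/health?pretty'
curl 'http://10.**.**.68:10010/_cat/nodes?v'
# quick smoke test of the ik analyzer
curl -H 'Content-Type: application/json' -X POST 'http://10.**.**.68:10010/_analyze?pretty' -d '{"analyzer":"ik_max_word","text":"中文分詞測試"}'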

Starting kibana

  • Create the kibana configuration file

Change into the kibana directory created at the beginning:

cd /data/erms/es/kinana/

Create the configuration file kibana.yml with the following content:

#
# ** THIS IS AN AUTO-GENERATED FILE **
#
# Default Kibana configuration for docker target
server.name: kibana
server.host: "0"
# put the addresses of your es nodes here
elasticsearch.hosts: [ "http://10.**.**.68:10010","http://10.**.**.69:10010","http://10.**.**.70:10010" ]
xpack.monitoring.ui.container.elasticsearch.enabled: true
i18n.locale: zh-CN

Create the compose file kinana-compse.yml. A single kibana node is enough here, published on port 10009. Content:

version: '3.3'
services:
   kibana:
    image: docker.elastic.co/kibana/kibana:7.6.2
    container_name: kibana
    ports:
      - 10009:5601
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/erms/es/kinana/kibana.yml:/usr/share/kibana/config/kibana.yml:rw
networks:
  edoc2_default:
     external:
       name: macrowing
  • Start kibana

[root@elcndc2zhda01 kinana]# docker stack deploy -c /data/erms/es/kinana/kinana-compse.yml erms
[root@elcndc2zhda01 kinana]# docker service ls
ID                  NAME                   MODE                REPLICAS            IMAGE                                                       PORTS
cmkfhkxjya79        erms_ermsesnode1       replicated          1/1                 erms/es:latest                                              
vw0kde788sx4        erms_ermsesnode2       replicated          1/1                 erms/es:latest                                              
vrt2jf9qf6d7        erms_ermsesnode3       replicated          1/1                 erms/es:latest                                              
2gi11fjxl67k        erms_kibana            replicated          1/1                 docker.elastic.co/kibana/kibana:7.6.2                       *:10009->5601/tcp

portainer:

Connect to kibana via any cluster IP on port 10009. On first entry you are asked to make a choice; picking the second option takes you to the UI, and clicking the wrench icon opens the Dev Tools console.
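
A shell-side check works here too; kibana's status API answers on the published port (masked IPs as above):

curl 'http://10.**.**.68:10009/api/status'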

Kibana also provides monitoring and other useful features. If you also want es-head, read on.

Starting elasticsearch-head

1. Run head standalone

Start it directly with docker. For example, on server 69, publishing port 10001:

docker run -d -p 10001:9100 docker.io/mobz/elasticsearch-head:5

Once it is running, browse to server 69's IP:10001 to reach head, enter the es cluster address and click Connect; you can then see cluster health, the indices, and information related to es and kibana.

2. Start in cluster mode

Create eshead-compse.yml:

version: '3.3'
services:
  elasticsearch-head:
    image: mobz/elasticsearch-head:5
    container_name: elasticsearch-head       
    ports:
       - "10001:9100"           
    links:
       - ermsesnode1

Deploy it into the cluster:

docker stack deploy -c /data/erms/compose/eshead-compse.yml erms

Check portainer:

With that, head is also managed as part of the cluster, which simplifies maintenance, and it is reachable via any cluster IP (or via the cluster's domain name).

Usage is the same as with the standalone start.

 

A few more articles on elasticsearch:

Java + Spring Boot + elasticsearch 7.6.2: grouped queries (flat and hierarchical), multi-field concatenation with scripts, and greater-than/less-than condition queries

Installing and configuring elasticsearch 7.6.2 + kibana 7.6.2 + Chinese analysis 7.6.2 on Linux CentOS 7

 
