Quickly deploy an elasticsearch 8.x cluster + kibana with docker-compose

Welcome to my GitHub

All of Xinchen's original works (with companion source code) are categorized and indexed here: https://github.com/zq2599/blog_demos

Overview

  • The previous post 《Docker下elasticsearch8部署、擴容、基本操作實戰(含kibana)》 covered deploying es and kibana quickly with docker, but that process still involves quite a few manual steps. Can it be made simpler? After all, most of the time we just want to use the stack and would rather not spend much time on deployment

  • With docker-compose, installing the es cluster + kibana can be simplified further; the trimmed-down steps are shown in the figure below, reduced about as far as they can go...
    (flowchart: the simplified deployment steps)

  • This article follows that flow twice: the first run deploys the secure version with certificates and passwords, and the second deploys a version without any security checks that can be used immediately after installation

  • Please note the scenarios in which deploying ElasticSearch with docker is appropriate: I only use it during development; whether this approach suits production is debatable, so weigh it carefully before using it there

  • This article consists of the following parts

  1. The environment used in this walkthrough, for your reference
  2. Extra steps Linux users need to take
  3. Writing the configuration files
  4. Startup
  5. Verification

Environment

  • The environment used in this walkthrough is listed below, for reference
  1. Operating system: macOS Monterey (MacBook Pro with M1 Pro chip, 16 GB RAM)
  2. Docker: Docker Desktop 4.7.1 (77678)
  3. ElasticSearch: 8.2.2
  4. Kibana: 8.2.2

A note for Linux users

  • If your environment is Linux, be sure to do the following, otherwise es may fail to start (the equivalent shell commands are shown after this list)
  1. Open the file /etc/sysctl.conf in an editor

  2. Append the line vm.max_map_count = 262144; if the setting already exists, modify it instead; the value must not be lower than 262144

  3. Save the file, then run sudo sysctl -p to apply the change immediately
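
  • For reference, here is one way to do the same thing entirely from the shell (a sketch that assumes appending to /etc/sysctl.conf is acceptable on your machine)
# check the current value
sysctl vm.max_map_count
# raise it for the running system
sudo sysctl -w vm.max_map_count=262144
# persist the setting across reboots, then reload
echo 'vm.max_map_count = 262144' | sudo tee -a /etc/sysctl.conf
sudo sysctl -p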

Writing the configuration files

  • To restate the goal: use docker-compose to quickly deploy an es cluster + kibana, with security enabled (self-signed certificates plus username/password)

  • Find a clean directory and create a file named .env with the following content. This is the configuration file consumed by docker-compose, and every item carries a detailed comment

# password of the elastic account (at least six characters)
ELASTIC_PASSWORD=123456

# password of the kibana_system account (at least six characters); this account is only used for kibana's internal setup and cannot be used to query es
KIBANA_PASSWORD=abcdef

# version of es and kibana
STACK_VERSION=8.2.2

# cluster name
CLUSTER_NAME=docker-cluster

# x-pack license setting; basic is used here; if you choose trial, it expires after 30 days
LICENSE=basic
#LICENSE=trial

# port on the host that es is mapped to
ES_PORT=9200

# port on the host that kibana is mapped to
KIBANA_PORT=5601

# memory limit of each es container; adjust it to your hardware
MEM_LIMIT=1073741824

# project namespace; it becomes the prefix of the container names
COMPOSE_PROJECT_NAME=demo
  • Next is the docker-compose.yaml file, which uses the .env file created above. It defines five containers in total: a setup container, three es nodes that form the cluster, and one kibana (for what it's worth, this is the official script, so you can use it with confidence)
version: "2.2"

services:
  setup:
    image: elasticsearch:${STACK_VERSION}
    volumes:
      - certs:/usr/share/elasticsearch/config/certs
    user: "0"
    command: >
      bash -c '
        if [ x${ELASTIC_PASSWORD} == x ]; then
          echo "Set the ELASTIC_PASSWORD environment variable in the .env file";
          exit 1;
        elif [ x${KIBANA_PASSWORD} == x ]; then
          echo "Set the KIBANA_PASSWORD environment variable in the .env file";
          exit 1;
        fi;
        if [ ! -f config/certs/ca.zip ]; then
          echo "Creating CA";
          bin/elasticsearch-certutil ca --silent --pem -out config/certs/ca.zip;
          unzip config/certs/ca.zip -d config/certs;
        fi;
        if [ ! -f config/certs/certs.zip ]; then
          echo "Creating certs";
          echo -ne \
          "instances:\n"\
          "  - name: es01\n"\
          "    dns:\n"\
          "      - es01\n"\
          "      - localhost\n"\
          "    ip:\n"\
          "      - 127.0.0.1\n"\
          "  - name: es02\n"\
          "    dns:\n"\
          "      - es02\n"\
          "      - localhost\n"\
          "    ip:\n"\
          "      - 127.0.0.1\n"\
          "  - name: es03\n"\
          "    dns:\n"\
          "      - es03\n"\
          "      - localhost\n"\
          "    ip:\n"\
          "      - 127.0.0.1\n"\
          > config/certs/instances.yml;
          bin/elasticsearch-certutil cert --silent --pem -out config/certs/certs.zip --in config/certs/instances.yml --ca-cert config/certs/ca/ca.crt --ca-key config/certs/ca/ca.key;
          unzip config/certs/certs.zip -d config/certs;
        fi;
        echo "Setting file permissions"
        chown -R root:root config/certs;
        find . -type d -exec chmod 750 \{\} \;;
        find . -type f -exec chmod 640 \{\} \;;
        echo "Waiting for Elasticsearch availability";
        until curl -s --cacert config/certs/ca/ca.crt https://es01:9200 | grep -q "missing authentication credentials"; do sleep 30; done;
        echo "Setting kibana_system password";
        until curl -s -X POST --cacert config/certs/ca/ca.crt -u elastic:${ELASTIC_PASSWORD} -H "Content-Type: application/json" https://es01:9200/_security/user/kibana_system/_password -d "{\"password\":\"${KIBANA_PASSWORD}\"}" | grep -q "^{}"; do sleep 10; done;
        echo "All done!";
      '
    healthcheck:
      test: ["CMD-SHELL", "[ -f config/certs/es01/es01.crt ]"]
      interval: 1s
      timeout: 5s
      retries: 120

  es01:
    depends_on:
      setup:
        condition: service_healthy
    image: elasticsearch:${STACK_VERSION}
    volumes:
      - certs:/usr/share/elasticsearch/config/certs
      - esdata01:/usr/share/elasticsearch/data
    ports:
      - ${ES_PORT}:9200
    environment:
      - node.name=es01
      - cluster.name=${CLUSTER_NAME}
      - cluster.initial_master_nodes=es01,es02,es03
      - discovery.seed_hosts=es02,es03
      - ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
      - bootstrap.memory_lock=true
      - xpack.security.enabled=true
      - xpack.security.http.ssl.enabled=true
      - xpack.security.http.ssl.key=certs/es01/es01.key
      - xpack.security.http.ssl.certificate=certs/es01/es01.crt
      - xpack.security.http.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.http.ssl.verification_mode=certificate
      - xpack.security.transport.ssl.enabled=true
      - xpack.security.transport.ssl.key=certs/es01/es01.key
      - xpack.security.transport.ssl.certificate=certs/es01/es01.crt
      - xpack.security.transport.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.transport.ssl.verification_mode=certificate
      - xpack.license.self_generated.type=${LICENSE}
    mem_limit: ${MEM_LIMIT}
    ulimits:
      memlock:
        soft: -1
        hard: -1
    healthcheck:
      test:
        [
          "CMD-SHELL",
          "curl -s --cacert config/certs/ca/ca.crt https://localhost:9200 | grep -q 'missing authentication credentials'",
        ]
      interval: 10s
      timeout: 10s
      retries: 120

  es02:
    depends_on:
      - es01
    image: elasticsearch:${STACK_VERSION}
    volumes:
      - certs:/usr/share/elasticsearch/config/certs
      - esdata02:/usr/share/elasticsearch/data
    environment:
      - node.name=es02
      - cluster.name=${CLUSTER_NAME}
      - cluster.initial_master_nodes=es01,es02,es03
      - discovery.seed_hosts=es01,es03
      - bootstrap.memory_lock=true
      - xpack.security.enabled=true
      - xpack.security.http.ssl.enabled=true
      - xpack.security.http.ssl.key=certs/es02/es02.key
      - xpack.security.http.ssl.certificate=certs/es02/es02.crt
      - xpack.security.http.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.http.ssl.verification_mode=certificate
      - xpack.security.transport.ssl.enabled=true
      - xpack.security.transport.ssl.key=certs/es02/es02.key
      - xpack.security.transport.ssl.certificate=certs/es02/es02.crt
      - xpack.security.transport.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.transport.ssl.verification_mode=certificate
      - xpack.license.self_generated.type=${LICENSE}
    mem_limit: ${MEM_LIMIT}
    ulimits:
      memlock:
        soft: -1
        hard: -1
    healthcheck:
      test:
        [
          "CMD-SHELL",
          "curl -s --cacert config/certs/ca/ca.crt https://localhost:9200 | grep -q 'missing authentication credentials'",
        ]
      interval: 10s
      timeout: 10s
      retries: 120

  es03:
    depends_on:
      - es02
    image: elasticsearch:${STACK_VERSION}
    volumes:
      - certs:/usr/share/elasticsearch/config/certs
      - esdata03:/usr/share/elasticsearch/data
    environment:
      - node.name=es03
      - cluster.name=${CLUSTER_NAME}
      - cluster.initial_master_nodes=es01,es02,es03
      - discovery.seed_hosts=es01,es02
      - bootstrap.memory_lock=true
      - xpack.security.enabled=true
      - xpack.security.http.ssl.enabled=true
      - xpack.security.http.ssl.key=certs/es03/es03.key
      - xpack.security.http.ssl.certificate=certs/es03/es03.crt
      - xpack.security.http.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.http.ssl.verification_mode=certificate
      - xpack.security.transport.ssl.enabled=true
      - xpack.security.transport.ssl.key=certs/es03/es03.key
      - xpack.security.transport.ssl.certificate=certs/es03/es03.crt
      - xpack.security.transport.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.transport.ssl.verification_mode=certificate
      - xpack.license.self_generated.type=${LICENSE}
    mem_limit: ${MEM_LIMIT}
    ulimits:
      memlock:
        soft: -1
        hard: -1
    healthcheck:
      test:
        [
          "CMD-SHELL",
          "curl -s --cacert config/certs/ca/ca.crt https://localhost:9200 | grep -q 'missing authentication credentials'",
        ]
      interval: 10s
      timeout: 10s
      retries: 120

  kibana:
    depends_on:
      es01:
        condition: service_healthy
      es02:
        condition: service_healthy
      es03:
        condition: service_healthy
    image: kibana:${STACK_VERSION}
    volumes:
      - certs:/usr/share/kibana/config/certs
      - kibanadata:/usr/share/kibana/data
    ports:
      - ${KIBANA_PORT}:5601
    environment:
      - SERVERNAME=kibana
      - ELASTICSEARCH_HOSTS=https://es01:9200
      - ELASTICSEARCH_USERNAME=kibana_system
      - ELASTICSEARCH_PASSWORD=${KIBANA_PASSWORD}
      - ELASTICSEARCH_SSL_CERTIFICATEAUTHORITIES=config/certs/ca/ca.crt
    mem_limit: ${MEM_LIMIT}
    healthcheck:
      test:
        [
          "CMD-SHELL",
          "curl -s -I http://localhost:5601 | grep -q 'HTTP/1.1 302 Found'",
        ]
      interval: 10s
      timeout: 10s
      retries: 120

volumes:
  certs:
    driver: local
  esdata01:
    driver: local
  esdata02:
    driver: local
  esdata03:
    driver: local
  kibanadata:
    driver: local
  • Note: the .env and docker-compose.yaml files must be in the same directory
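  • Optionally, before starting, you can sanity-check that docker-compose picks up the .env values: the command below prints the merged configuration with all variables substituted, so you can confirm the versions, ports, and passwords before anything runs
docker-compose config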

Starting the application

  • In the directory containing docker-compose.yaml, run docker-compose up -d to start all the containers
❯ docker-compose up -d
Creating network "demo_default" with the default driver
Pulling setup (elasticsearch:8.2.2)...
8.2.2: Pulling from library/elasticsearch
Digest: sha256:8c666cb1e76650306655b67644a01663f9c7a5422b2c51dd570524267f11ce3d
Status: Downloaded newer image for elasticsearch:8.2.2
Pulling kibana (kibana:8.2.2)...
8.2.2: Pulling from library/kibana
Digest: sha256:cf34801f36a2e79c834b3cdeb0a3463ff34b8d8588c3ccdd47212c4e0753f8a5
Status: Downloaded newer image for kibana:8.2.2
Creating demo_setup_1 ... done
Creating demo_es01_1  ... done
Creating demo_es02_1  ... done
Creating demo_es03_1  ... done
Creating demo_kibana_1 ... done
  • Check the container status: demo_setup_1, which handled the setup, has exited; the others are running normally
❯ docker ps -a
CONTAINER ID   IMAGE                 COMMAND                  CREATED          STATUS                      PORTS                              NAMES
c8ce010cddfc   kibana:8.2.2          "/bin/tini -- /usr/l…"   20 minutes ago   Up 20 minutes (healthy)     0.0.0.0:5601->5601/tcp             demo_kibana_1
78662d44ae31   elasticsearch:8.2.2   "/bin/tini -- /usr/l…"   21 minutes ago   Up 21 minutes (healthy)     9200/tcp, 9300/tcp                 demo_es03_1
7e96273872cb   elasticsearch:8.2.2   "/bin/tini -- /usr/l…"   21 minutes ago   Up 21 minutes (healthy)     9200/tcp, 9300/tcp                 demo_es02_1
8b8be1d645ba   elasticsearch:8.2.2   "/bin/tini -- /usr/l…"   21 minutes ago   Up 21 minutes (healthy)     0.0.0.0:9200->9200/tcp, 9300/tcp   demo_es01_1
c48ffb724ca2   elasticsearch:8.2.2   "/bin/tini -- /usr/l…"   21 minutes ago   Exited (0) 20 minutes ago                                      demo_setup_1
  • The logs of demo_setup_1 show that startup went smoothly
❯ docker logs demo_setup_1
Setting file permissions
Waiting for Elasticsearch availability
Setting kibana_system password
All done!
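  • As an extra check (not from the original steps, just a sketch), you can ask one node for the cluster health from inside its container; the container name comes from the docker ps output above and the password from .env. A healthy cluster reports "number_of_nodes" : 3
docker exec demo_es01_1 curl -s --cacert config/certs/ca/ca.crt -u elastic:123456 "https://localhost:9200/_cluster/health?pretty"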
  • If you want to send requests to ES with curl from the host, first copy the crt file out of the container (a usage sketch follows the command below)
docker cp demo_es01_1:/usr/share/elasticsearch/config/certs/es01/es01.crt .
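  • For example (a sketch, not from the original post), you can also copy out the CA certificate and query the cluster from the host with basic auth, using the elastic password from .env
docker cp demo_es01_1:/usr/share/elasticsearch/config/certs/ca/ca.crt .
curl --cacert ca.crt -u elastic:123456 "https://localhost:9200/_cat/nodes?v"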

Verification

  • Now let's verify that the es cluster and kibana work properly

  • Open https://localhost:9200/ in the browser (note: https). You will see the following warning page
    (screenshot: browser certificate warning)

  • On that page, type thisisunsafe and press Enter. You will then be prompted for a username and password; per the earlier configuration, the account is elastic and the password is 123456

    (screenshot: basic-auth login prompt)

  • The browser then shows the following, which proves that es responded successfully
    (screenshot: es JSON response)

  • If the eshead plugin is installed in chrome, you can now inspect the cluster (note: in the plugin's address bar, use https rather than http). As shown below, there are three nodes, and es02 carries a star icon, marking it as the master node
    (screenshot: eshead cluster view)

  • So the es cluster is deployed and running normally; next, check whether kibana is usable

  • Open http://localhost:5601/, account elastic, password 123456
    (screenshot: kibana login page)

  • Click the position marked by the red box below to open the console page
    (screenshot: kibana Dev Tools entry)

  • As shown below, enter the index-creation command on the left, then click the button in the red box; the execution result appears on the right
    (screenshot: creating an index in the console)

  • Bulk-write two records
    (screenshot: bulk write)

  • Finally, run a query (a sketch of these console commands follows after this list)
    (screenshot: search query and result)
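
  • The screenshots are not reproduced here, so as a rough sketch of what those console commands look like (the index name test001 is taken from the cleanup section below; the document fields are made up purely for illustration), you might type something like this into the Dev Tools console
PUT test001

POST test001/_bulk
{ "index": { "_id": "1" } }
{ "name": "Tom", "age": 11 }
{ "index": { "_id": "2" } }
{ "name": "Jerry", "age": 12 }

GET test001/_search
{
  "query": {
    "match_all": {}
  }
}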

Cleanup

  • To remove es, run docker-compose down, which deletes the containers. Note, however, that this command does not delete the data: the next time you run docker-compose up -d, the new es cluster will still contain the test001 index created earlier, data included
  • That is because docker-compose.yaml stores the es cluster's key data in volumes, and that data lives on the host's disk
❯ docker volume ls
DRIVER    VOLUME NAME
local     demo_certs
local     demo_esdata01
local     demo_esdata02
local     demo_esdata03
local     demo_kibanadata
  • Run docker volume rm demo_certs demo_esdata01 demo_esdata02 demo_esdata03 to remove them completely (a one-command alternative is shown right after this list)
  • That is the whole process of quickly deploying an es cluster + kibana; simple, isn't it?
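  • Tip (not part of the original steps): if you want to remove the containers and every named volume declared in docker-compose.yaml, including demo_kibanadata, one command does both at once
docker-compose down -v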

A cluster without passwords

  • Sometimes we deploy es without any security at all, for example in a development environment, or one behind a firewall that blocks outside access. In that case the deployment above is more than we need; we want something even simpler that is ready to use the moment it is deployed. Let's build it

  • Find a clean directory and create a file named .env with the following content; compared with the secure version, the items that are no longer needed have been removed

# password of the kibana_system account (at least six characters); this account is only used for kibana's internal setup and cannot be used to query es
KIBANA_PASSWORD=abcdef

# version of es and kibana
STACK_VERSION=8.2.2

# cluster name
CLUSTER_NAME=docker-cluster

# port on the host that es is mapped to
ES_PORT=9200

# port on the host that kibana is mapped to
KIBANA_PORT=5601

# memory limit of each es container; adjust it to your hardware
MEM_LIMIT=1073741824

# project namespace; it becomes the prefix of the container names
COMPOSE_PROJECT_NAME=demo
  • Next is the docker-compose.yaml file, which again uses the .env file created above. Compared with the secure version, the setup container is gone, and the security-related configuration and scripts have been removed as well
version: "2.2"

services:
  es01:
    image: elasticsearch:${STACK_VERSION}
    volumes:
      - esdata01:/usr/share/elasticsearch/data
    ports:
      - ${ES_PORT}:9200
    environment:
      - node.name=es01
      - cluster.name=${CLUSTER_NAME}
      - cluster.initial_master_nodes=es01,es02,es03
      - discovery.seed_hosts=es02,es03
      - bootstrap.memory_lock=true
      - xpack.security.enabled=false
      - xpack.security.http.ssl.enabled=false
      - xpack.security.transport.ssl.enabled=false
    mem_limit: ${MEM_LIMIT}
    ulimits:
      memlock:
        soft: -1
        hard: -1

  es02:
    depends_on:
      - es01
    image: elasticsearch:${STACK_VERSION}
    volumes:
      - esdata02:/usr/share/elasticsearch/data
    environment:
      - node.name=es02
      - cluster.name=${CLUSTER_NAME}
      - cluster.initial_master_nodes=es01,es02,es03
      - discovery.seed_hosts=es01,es03
      - bootstrap.memory_lock=true
      - xpack.security.enabled=false
      - xpack.security.http.ssl.enabled=false
      - xpack.security.transport.ssl.enabled=false
    mem_limit: ${MEM_LIMIT}
    ulimits:
      memlock:
        soft: -1
        hard: -1

  es03:
    depends_on:
      - es02
    image: elasticsearch:${STACK_VERSION}
    volumes:
      - esdata03:/usr/share/elasticsearch/data
    environment:
      - node.name=es03
      - cluster.name=${CLUSTER_NAME}
      - cluster.initial_master_nodes=es01,es02,es03
      - discovery.seed_hosts=es01,es02
      - bootstrap.memory_lock=true
      - xpack.security.enabled=false
      - xpack.security.http.ssl.enabled=false
      - xpack.security.transport.ssl.enabled=false
    mem_limit: ${MEM_LIMIT}
    ulimits:
      memlock:
        soft: -1
        hard: -1

  kibana:
    image: kibana:${STACK_VERSION}
    volumes:
      - kibanadata:/usr/share/kibana/data
    ports:
      - ${KIBANA_PORT}:5601
    environment:
      - SERVERNAME=kibana
      - ELASTICSEARCH_HOSTS=http://es01:9200
      - ELASTICSEARCH_USERNAME=kibana_system
      - ELASTICSEARCH_PASSWORD=${KIBANA_PASSWORD}
    mem_limit: ${MEM_LIMIT}

volumes:
  esdata01:
    driver: local
  esdata02:
    driver: local
  esdata03:
    driver: local
  kibanadata:
    driver: local
  • Note: the .env and docker-compose.yaml files must be in the same directory

Start and verify

  • Before starting, stop and clean up the secure version deployed earlier
  • In the directory containing docker-compose.yaml, run docker-compose up -d to start all the containers. After a short wait, you can see that all containers are up
❯ docker ps -a
CONTAINER ID   IMAGE                 COMMAND                  CREATED         STATUS         PORTS                              NAMES
11663375288d   elasticsearch:8.2.2   "/bin/tini -- /usr/l…"   4 minutes ago   Up 4 minutes   9200/tcp, 9300/tcp                 demo_es03_1
ad6f0390b9cf   elasticsearch:8.2.2   "/bin/tini -- /usr/l…"   4 minutes ago   Up 4 minutes   9200/tcp, 9300/tcp                 demo_es02_1
5080709e5358   kibana:8.2.2          "/bin/tini -- /usr/l…"   4 minutes ago   Up 4 minutes   0.0.0.0:5601->5601/tcp             demo_kibana_1
4b1e576fbfd3   elasticsearch:8.2.2   "/bin/tini -- /usr/l…"   4 minutes ago   Up 4 minutes   0.0.0.0:9200->9200/tcp, 9300/tcp   demo_es01_1
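  • Before switching to the browser, a quick command-line check also works (a sketch assuming the default ES_PORT=9200 from .env); with security disabled, no certificate or password is needed
curl "http://localhost:9200/_cluster/health?pretty"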
  • Open http://localhost:9200/ in the browser (note: plain http this time); es responds
    (screenshot: es JSON response)
  • Chrome's eshead plugin can also read the cluster information normally
    (screenshot: eshead cluster view)
  • Visit kibana at http://localhost:5601/ (again plain http); it works normally, and the screenshot below shows an index being created successfully
    (screenshot: creating an index in kibana)
  • That completes the docker-compose deployment of the es cluster + kibana. With a bit of practiced copy-and-paste, spinning up an es cluster is as easy as it gets. If you need to deploy one quickly, I hope this article gives you a useful reference

Welcome to follow my blog on 博客園 (cnblogs): 程序員欣宸

You are not alone on the learning road; Xinchen's original articles will keep you company...
