Deploying Elasticsearch on Kubernetes with ECK

ECK Overview

Elastic Cloud on Kubernetes (ECK) uses the Kubernetes operator pattern to automate the deployment, management, and orchestration of Elasticsearch, Kibana, and APM Server in a Kubernetes cluster.

ECK goes well beyond simplifying the initial deployment of Elasticsearch and Kibana on Kubernetes; it focuses on simplifying all of the day-2 operations, for example:

  • Managing and monitoring multiple clusters
  • Easily upgrading to new cluster versions
  • Scaling cluster capacity up or down
  • Changing cluster configuration
  • Dynamically resizing local storage (including Elastic Local Volume, a local storage driver)
  • Taking backups

All Elasticsearch clusters launched on ECK are secured by default: encryption is enabled and they are protected by a strong default password from the moment they are created.

Official site: https://www.elastic.co/cn/elastic-cloud-kubernetes

Project repository: https://github.com/elastic/cloud-on-k8s

Deploying ECK

References:

https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-quickstart.html

https://github.com/elastic/cloud-on-k8s/tree/master/config/recipes/beats

https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-volume-claim-templates.html

https://github.com/elastic/cloud-on-k8s/tree/master/config/samples

Environment:
Prepare three nodes; here the master node is configured to allow scheduling pods:

[root@master01 ~]# kubectl get nodes
NAME       STATUS   ROLES    AGE   VERSION
master01   Ready    master   7d    v1.18.2
node01     Ready    <none>   7d    v1.18.2
node02     Ready    <none>   7d    v1.18.2

ECK version deployed: v1.1.0

Preparing NFS storage

ECK data needs to be persisted. For a quick test you can use an emptyDir ephemeral volume, or you can use persistent storage such as NFS or Rook. For this test, an NFS server is run temporarily in Docker on the master01 node to provide the storage backing the PVCs.
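For a throwaway test without NFS, a nodeSet's data volume can be overridden with an emptyDir via the pod template, as described in the ECK volume-claim-templates documentation; a minimal sketch (resource and nodeSet names here are illustrative):

```yaml
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart-test        # illustrative name
spec:
  version: 7.6.2
  nodeSets:
  - name: default
    count: 1
    podTemplate:
      spec:
        volumes:
        - name: elasticsearch-data
          emptyDir: {}         # data is lost when the pod is rescheduled
```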

docker run -d \
    --name nfs-server \
    --privileged \
    --restart always \
    -p 2049:2049 \
    -v /nfs-share:/nfs-share \
    -e SHARED_DIRECTORY=/nfs-share \
    itsthenetwork/nfs-server-alpine:latest

Deploy nfs-client-provisioner to dynamically provision NFS storage. 192.168.93.11 is the IP address of the master01 node; with NFSv4, nfs.path can simply be set to /.
Here nfs-client-provisioner is installed with Helm from the Alibaba Cloud chart repository.

helm repo add apphub https://apphub.aliyuncs.com

helm install nfs-client-provisioner \
  --set nfs.server=192.168.93.11 \
  --set nfs.path=/ \
  apphub/nfs-client-provisioner

Check the created StorageClass. Its default name is nfs-client, which is referenced when deploying Elasticsearch below:

[root@master01 ~]# kubectl get sc
NAME         PROVISIONER                            RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
nfs-client   cluster.local/nfs-client-provisioner   Delete          Immediate           true                   172m
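Before wiring the StorageClass into Elasticsearch, you can verify that dynamic provisioning works with a throwaway PVC (a sketch; the claim name and size are arbitrary):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim             # illustrative name
spec:
  storageClassName: nfs-client # the StorageClass created above
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Mi
```

Apply it and confirm that kubectl get pvc shows the claim as Bound, then delete it.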

Install the NFS client on all nodes and enable the rpcbind service:

yum install -y nfs-utils
systemctl enable --now rpcbind

Installing the ECK operator

Deploy ECK version 1.1.0:

kubectl apply -f https://download.elastic.co/downloads/eck/1.1.0/all-in-one.yaml

Check the created pod:

[root@master01 ~]# kubectl -n elastic-system get pods
NAME                 READY   STATUS    RESTARTS   AGE
elastic-operator-0   1/1     Running   1          17m

Check the created CRDs. Three were created: apmservers, elasticsearches, and kibanas.

[root@master01 ~]# kubectl get crd | grep elastic
apmservers.apm.k8s.elastic.co                                2020-04-27T16:23:08Z
elasticsearches.elasticsearch.k8s.elastic.co                 2020-04-27T16:23:08Z
kibanas.kibana.k8s.elastic.co                                2020-04-27T16:23:08Z

Deploying Elasticsearch and Kibana

Download the example YAML for a released version from GitHub; here version 1.1.0 is used:

curl -LO https://github.com/elastic/cloud-on-k8s/archive/1.1.0.tar.gz
tar -zxf cloud-on-k8s-1.1.0.tar.gz
cd cloud-on-k8s-1.1.0/config/recipes/beats/

Create the namespace:

kubectl apply -f 0_ns.yaml

Deploy Elasticsearch and Kibana. count: 3 deploys three Elasticsearch nodes (you can also start with a single node and scale out later), storageClassName is set to nfs-client, and an http section sets the service type to NodePort.

$ cat 1_monitor.yaml
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: monitor
  namespace: beats
spec:
  version: 7.6.2
  nodeSets:
  - name: mdi
    count: 3
    config:
      node.master: true
      node.data: true
      node.ingest: true
      node.store.allow_mmap: false
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 50Gi
        storageClassName: nfs-client
  http:
    service:
      spec:
        type: NodePort
---
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: monitor
  namespace: beats
spec:
  version: 7.6.2
  count: 1
  elasticsearchRef:
    name: "monitor"
  http:
    service:
      spec:
        type: NodePort

Apply the manifest to deploy Elasticsearch and Kibana:

kubectl apply -f 1_monitor.yaml
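As mentioned above, you can start with a single node and scale out later: scaling is just a matter of editing count in the nodeSet and re-applying the manifest, and the operator performs the change. A sketch of the changed fragment only, going from 3 to 5 nodes:

```yaml
  nodeSets:
  - name: mdi
    count: 5   # was 3; the operator adds the new nodes and rebalances
```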

If the images cannot be pulled, you can manually substitute the Docker Hub images:

docker pull elastic/elasticsearch:7.6.2
docker pull elastic/kibana:7.6.2
docker tag elastic/elasticsearch:7.6.2 docker.elastic.co/elasticsearch/elasticsearch:7.6.2
docker tag elastic/kibana:7.6.2 docker.elastic.co/kibana/kibana:7.6.2
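The pull-and-retag pattern above can be scripted for all four Elastic images used in this post; this sketch only prints the commands (pipe the output to sh to actually run them), assuming the 7.6.2 tag throughout:

```shell
#!/bin/sh
# Print the docker pull/tag commands that map Docker Hub mirror images to the
# docker.elastic.co names referenced by the ECK manifests (7.6.2 assumed).
VERSION=7.6.2
for img in elasticsearch/elasticsearch kibana/kibana beats/filebeat beats/metricbeat; do
  hub="elastic/${img#*/}:${VERSION}"               # Docker Hub mirror name
  official="docker.elastic.co/${img}:${VERSION}"   # name used in the manifests
  echo "docker pull ${hub} && docker tag ${hub} ${official}"
done
```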

Check the created Elasticsearch and Kibana resources, including health, version, and node count:

[root@master01 ~]# kubectl -n beats get elasticsearch
NAME      HEALTH   NODES   VERSION   PHASE   AGE
monitor   green    3       7.6.2     Ready   77m

[root@master01 ~]# kubectl -n beats get kibana
NAME      HEALTH   NODES   VERSION   AGE
monitor   green    1       7.6.2     137m

Check the created pods:

[root@master01 ~]# kubectl -n beats get pods
NAME                          READY   STATUS    RESTARTS   AGE
monitor-es-mdi-0              1/1     Running   0          109s
monitor-es-mdi-1              1/1     Running   0          9m
monitor-es-mdi-2              1/1     Running   0          3m26s
monitor-kb-54cbdf6b8c-jklqm   1/1     Running   0          9m

Check the created PVs and PVCs:

[root@master01 ~]# kubectl -n beats get pvc
NAME                                  STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
elasticsearch-data-monitor-es-mdi-0   Bound    pvc-882be3e2-b752-474b-abea-7827b492d83d   50Gi       RWO            nfs-client     3m33s
elasticsearch-data-monitor-es-mdi-1   Bound    pvc-8e6ed97e-7524-47f5-b02c-1ff0d2af33af   50Gi       RWO            nfs-client     3m33s
elasticsearch-data-monitor-es-mdi-2   Bound    pvc-31b5f80d-8fbd-4762-ab69-650eb6619a2e   50Gi       RWO            nfs-client     3m33s

[root@master01 ~]# kubectl -n beats get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                       STORAGECLASS   REASON   AGE
pvc-31b5f80d-8fbd-4762-ab69-650eb6619a2e   50Gi       RWO            Delete           Bound    beats/elasticsearch-data-monitor-es-mdi-2   nfs-client              3m35s
pvc-882be3e2-b752-474b-abea-7827b492d83d   50Gi       RWO            Delete           Bound    beats/elasticsearch-data-monitor-es-mdi-0   nfs-client              3m35s
pvc-8e6ed97e-7524-47f5-b02c-1ff0d2af33af   50Gi       RWO            Delete           Bound    beats/elasticsearch-data-monitor-es-mdi-1   nfs-client              3m35s

The actual data is stored under the /nfs-share directory on the master01 node:

[root@master01 ~]# tree /nfs-share/ -L 2
/nfs-share/
├── beats-elasticsearch-data-monitor-es-mdi-0-pvc-250c8eef-4b7e-4230-bd4f-36b911a1d61b
│   └── nodes
├── beats-elasticsearch-data-monitor-es-mdi-1-pvc-c1a538df-92df-4a8e-9b7b-fceb7d395eab
│   └── nodes
└── beats-elasticsearch-data-monitor-es-mdi-2-pvc-dc21c1ba-4a17-4492-9890-df795c06213a
    └── nodes

Check the created services. The Elasticsearch and Kibana services were given type NodePort at deployment time so they can be accessed from outside the cluster.

[root@master01 ~]# kubectl -n beats get svc
NAME              TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
monitor-es-http   NodePort    10.96.82.186    <none>        9200:31575/TCP   9m36s
monitor-es-mdi    ClusterIP   None            <none>        <none>           9m34s
monitor-kb-http   NodePort    10.97.213.119   <none>        5601:30878/TCP   9m35s
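The NodePort numbers above (31575 for Elasticsearch, 30878 for Kibana) are allocated at random; if stable ports are needed, they can be pinned in the http.service spec, which accepts a full Kubernetes ServiceSpec. A sketch, assuming 31575 is free in the cluster's NodePort range:

```yaml
  http:
    service:
      spec:
        type: NodePort
        ports:
        - name: https
          port: 9200
          targetPort: 9200
          nodePort: 31575   # must fall in the NodePort range (default 30000-32767)
```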

Elasticsearch has authentication enabled by default. Retrieve the password of the elastic user:

PASSWORD=$(kubectl -n beats get secret monitor-es-elastic-user -o=jsonpath='{.data.elastic}' | base64 --decode)                          

echo $PASSWORD
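The jsonpath output is base64-encoded, which is why the pipeline above decodes it. The decoding step in isolation, using the encoded form of the password that appears in the exec example further below:

```shell
# The secret's .data.elastic field holds the password base64-encoded;
# decode it to get the plain-text password for the elastic user.
encoded="Z2Y0bWdyNWZzYnN0d3g3NmI4emw4bTJn"
PASSWORD=$(printf '%s' "$encoded" | base64 --decode)
echo "$PASSWORD"   # gf4mgr5fsbstwx76b8zl8m2g
```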

Accessing Elasticsearch

Access Elasticsearch in a browser:

https://192.168.93.11:31575/

Alternatively, access the Elasticsearch endpoint from inside the Kubernetes cluster:

[root@master01 ~]# kubectl run -it --rm centos --image=centos -- sh
sh-4.4#
sh-4.4# PASSWORD=gf4mgr5fsbstwx76b8zl8m2g
sh-4.4# curl -u "elastic:$PASSWORD" -k "https://monitor-es-http:9200"
{
  "name" : "monitor-es-mdi-2",
  "cluster_name" : "monitor",
  "cluster_uuid" : "mrDgyhp7QWa7iVuY8Hx6gA",
  "version" : {
    "number" : "7.6.2",
    "build_flavor" : "default",
    "build_type" : "docker",
    "build_hash" : "ef48eb35cf30adf4db14086e8aabd07ef6fb113f",
    "build_date" : "2020-03-26T06:34:37.794943Z",
    "build_snapshot" : false,
    "lucene_version" : "8.4.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}

Accessing Kibana
Access Kibana in a browser; the username and password are the same as for Elasticsearch. Choose "Explore on my own"; you will see that no index has been created yet.

https://192.168.93.11:30878/

Deploying Filebeat

Use the Docker Hub image and change the version to 7.6.2:

sed -i 's#docker.elastic.co/beats/filebeat:7.6.0#elastic/filebeat:7.6.2#g' 2_filebeat-kubernetes.yaml

kubectl apply -f 2_filebeat-kubernetes.yaml
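For orientation, the heart of 2_filebeat-kubernetes.yaml is a ConfigMap holding filebeat.yml that tails container logs on every node and ships them to the monitor cluster. A minimal sketch of the kind of configuration it carries (the values here are illustrative; the downloaded recipe file is authoritative):

```yaml
filebeat.inputs:
- type: container
  paths:
  - /var/log/containers/*.log        # container stdout/stderr on each node
  processors:
  - add_kubernetes_metadata:         # enrich events with pod/namespace metadata
      host: ${NODE_NAME}
      matchers:
      - logs_path:
          logs_path: "/var/log/containers/"

output.elasticsearch:
  hosts: ['https://${ELASTICSEARCH_HOST}:9200']
  username: ${ELASTICSEARCH_USERNAME}
  password: ${ELASTICSEARCH_PASSWORD}
```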

Check the created pods:

[root@master01 beats]# kubectl -n beats get pods -l k8s-app=filebeat
NAME             READY   STATUS    RESTARTS   AGE
filebeat-dctrz   1/1     Running   0          9m32s
filebeat-rgldp   1/1     Running   0          9m32s
filebeat-srqf4   1/1     Running   0          9m32s

If the images cannot be pulled, pull and retag them manually:

docker pull elastic/filebeat:7.6.2
docker tag elastic/filebeat:7.6.2 docker.elastic.co/beats/filebeat:7.6.2

docker pull elastic/metricbeat:7.6.2
docker tag elastic/metricbeat:7.6.2 docker.elastic.co/beats/metricbeat:7.6.2

Access Kibana again; the filebeat index can now be discovered. Enter the index pattern, select @timestamp as the time field, and create the index pattern.

View the collected logs.

Deploying Metricbeat

Use the Docker Hub image and change the version to 7.6.2, then apply the manifest:

sed -i 's#docker.elastic.co/beats/metricbeat:7.6.0#elastic/metricbeat:7.6.2#g' 3_metricbeat-kubernetes.yaml

kubectl apply -f 3_metricbeat-kubernetes.yaml

Check the created pods:

[root@master01 beats]# kubectl -n beats get pods -l  k8s-app=metricbeat
NAME                          READY   STATUS    RESTARTS   AGE
metricbeat-6956d987bb-c96nq   1/1     Running   0          76s
metricbeat-6h42f              1/1     Running   0          76s
metricbeat-dzkxq              1/1     Running   0          76s
metricbeat-lffds              1/1     Running   0          76s

Kibana now shows an additional metricbeat index.
