Abstract
KubeEdge components
- Edged: an agent that runs on edge nodes and manages containerized applications at the edge.
- EdgeHub: the communication interface module on the edge. It is a web socket client responsible for the interaction between edge computing and the cloud service, including synchronizing cloud-side resources to the edge and reporting edge-side host and device status to the cloud.
- CloudHub: the communication interface module in the cloud. A web socket server responsible for watching changes on the cloud side, caching messages, and sending them to EdgeHub.
- EdgeController: manages edge nodes. It is an extended Kubernetes controller that manages edge node and pod metadata so that data can be targeted at a specific edge node.
- EventBus: handles internal edge communication using MQTT. An MQTT client that interacts with an MQTT server (mosquitto), providing publish and subscribe capabilities to the other components.
- DeviceTwin: a software mirror of the devices that handles device metadata. This module helps manage device status and synchronize it to the cloud. It also provides a query interface for applications and connects to a lightweight database (SQLite).
- MetaManager: manages metadata on the edge node. It is the message processor between Edged and EdgeHub, responsible for storing/retrieving metadata in a lightweight database (SQLite).
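The EventBus above exposes MQTT publish/subscribe to the other edge components. As a rough illustration only, the sketch below publishes a device-twin update to the local broker; the `$hw/events/...` topic layout is KubeEdge's convention, but the device id and payload are made-up placeholders, not values from this guide:

```shell
# Hypothetical example: publish a device-twin update through the EventBus
# MQTT broker running on the edge node. DEVICE_ID is a placeholder.
DEVICE_ID="temperature-sensor-01"
TOPIC="\$hw/events/device/${DEVICE_ID}/twin/update"
PAYLOAD='{"twin":{"temperature":{"actual":{"value":"26"}}}}'
echo "would publish to: ${TOPIC}"
# Only attempt the publish if a mosquitto client is actually installed:
if command -v mosquitto_pub >/dev/null 2>&1; then
    mosquitto_pub -h 127.0.0.1 -p 1883 -t "${TOPIC}" -m "${PAYLOAD}" || true
fi
```

Components such as DeviceTwin subscribe to these topics and persist the reported state into SQLite before syncing it to the cloud.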
KubeEdge consists of a cloud part and an edge part. It is built on Kubernetes and provides core infrastructure support between cloud and edge, such as networking, applications, deployment, and metadata synchronization.
Installing KubeEdge requires a Kubernetes cluster plus the cloud and edge parts:
- cloud side: docker, a Kubernetes cluster, and cloudcore.
- edge side: docker, an MQTT broker, and edgecore.
Prerequisites
Install docker on both the cloud and edge side (you can also use another container runtime, such as containerd).
Go: the minimum required Go version is 1.12. You can install it from the official Go downloads page.
1. Install the cloud side
```shell
git clone https://github.com/kubeedge/kubeedge.git $GOPATH/src/github.com/kubeedge/kubeedge
cd $GOPATH/src/github.com/kubeedge/kubeedge
```
1.1 Generate certificates
KubeEdge requires a root CA certificate and a certificate/key pair; the cloud and edge sides can use the same certificate/key pair.

```shell
$GOPATH/src/github.com/kubeedge/kubeedge/build/tools/certgen.sh genCertAndKey edge
```

The files are generated under /etc/kubeedge/ca and /etc/kubeedge/certs and can be copied to the edge node.
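One way to get the generated files onto the edge node is ssh/scp; a minimal sketch, where EDGE_HOST is a placeholder for your edge node's address:

```shell
# Sketch: copy the generated CA and certificate directories to the edge node.
# EDGE_HOST is a hypothetical address; replace it with your edge node's IP.
EDGE_HOST="192.168.1.100"
CA_DIR=/etc/kubeedge/ca
CERT_DIR=/etc/kubeedge/certs
echo "copying ${CA_DIR} and ${CERT_DIR} to ${EDGE_HOST}"
# Only attempt the copy when the directories actually exist on this host:
if command -v scp >/dev/null 2>&1 && [ -d "${CA_DIR}" ]; then
    scp -r "${CA_DIR}" "${CERT_DIR}" "root@${EDGE_HOST}:/etc/kubeedge/" || true
fi
```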
1.2 Build cloudcore
```shell
cd $GOPATH/src/github.com/kubeedge/kubeedge/
make all WHAT=cloudcore
```
1.3 Create the device model and device CRDs
```shell
cd $GOPATH/src/github.com/kubeedge/kubeedge/build/crds/devices
kubectl create -f devices_v1alpha1_devicemodel.yaml
kubectl create -f devices_v1alpha1_device.yaml
```
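You can confirm the CRDs registered before moving on. The full CRD names below are an assumption based on the `devices.kubeedge.io` API group used by the v1alpha1 manifests:

```shell
# Sketch: verify the device CRDs exist in the cluster.
CRDS="devicemodels.devices.kubeedge.io devices.devices.kubeedge.io"
if command -v kubectl >/dev/null 2>&1; then
    kubectl get crd ${CRDS} || true
fi
```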
1.4 Copy the cloudcore binary and configuration files
```shell
cd $GOPATH/src/github.com/kubeedge/kubeedge/cloud
# run edge controller
# `conf/` should be in the same directory as the cloned KubeEdge repository
# verify the configurations before running cloud (cloudcore)
mkdir -p /opt/kubeedge/conf
cp cloudcore /opt/kubeedge
cp -rf conf/* /opt/kubeedge/conf/
```
1.4.1 Contents of controller.yaml
```yaml
controller:
  kube:
    master: # kube-apiserver address (such as: http://localhost:8080)
    content_type: "application/vnd.kubernetes.protobuf"
    qps: 5
    burst: 10
    node_update_frequency: 10
    kubeconfig: "~/.kube/config" # path to kubeconfig file to enable an https connection to the k8s apiserver; if master and kubeconfig are both set, master overrides any value in kubeconfig
cloudhub:
  protocol_websocket: true # enable websocket protocol
  port: 10000 # open port for websocket server
  protocol_quic: true # enable quic protocol
  quic_port: 10001 # open port for quic server
  max_incomingstreams: 10000 # the max incoming streams for the quic server
  enable_uds: true # enable unix domain socket protocol
  uds_address: unix:///var/lib/kubeedge/kubeedge.sock # unix domain socket address
  address: 0.0.0.0
  ca: /etc/kubeedge/ca/rootCA.crt
  cert: /etc/kubeedge/certs/edge.crt
  key: /etc/kubeedge/certs/edge.key
  keepalive-interval: 30
  write-timeout: 30
  node-limit: 10
devicecontroller:
  kube:
    master: # kube-apiserver address (such as: http://localhost:8080)
    content_type: "application/vnd.kubernetes.protobuf"
    qps: 5
    burst: 10
    kubeconfig: "~/.kube/config" # path to kubeconfig file to enable an https connection to the k8s apiserver; if master and kubeconfig are both set, master overrides any value in kubeconfig
```
1.5 Run cloudcore
The quick-and-dirty way is to run it in the background: `nohup ./cloudcore &`
1.5.1 Alternatively, manage the cloudcore service with systemd
```shell
sudo ln build/tools/cloudcore.service /etc/systemd/system/cloudcore.service
sudo systemctl daemon-reload
sudo systemctl start cloudcore
```
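After starting cloudcore, it helps to confirm the listeners actually came up. This is a hedged sketch; the ports are the websocket/quic ports from the controller.yaml above, and the exact `systemctl`/`ss` output will vary by system:

```shell
# Sketch: check that cloudcore is running and listening.
WS_PORT=10000   # websocket port configured in controller.yaml
QUIC_PORT=10001 # quic port configured in controller.yaml
if command -v systemctl >/dev/null 2>&1; then
    systemctl status cloudcore --no-pager || true
fi
if command -v ss >/dev/null 2>&1; then
    # Look for the two listening ports:
    ss -tln | grep -E ":(${WS_PORT}|${QUIC_PORT})" || true
fi
```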
2. Deploy the edge side
KubeEdge provides a sample node.json for adding a node to Kubernetes. Make sure the edge node has been added to Kubernetes; run the following steps to add it:

Edit $GOPATH/src/github.com/kubeedge/kubeedge/build/node.json and change metadata.name to the name of your own edge node.
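The sample file is roughly of the following shape (a sketch, not a verbatim copy of the repository file; the `node-role.kubernetes.io/edge` label is what marks the node as an edge node):

```json
{
  "kind": "Node",
  "apiVersion": "v1",
  "metadata": {
    "name": "edge-node",
    "labels": {
      "name": "edge-node",
      "node-role.kubernetes.io/edge": ""
    }
  }
}
```

Apply it against the cluster with `kubectl apply -f $GOPATH/src/github.com/kubeedge/kubeedge/build/node.json`.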
2.1 Build edgecore
```shell
cd $GOPATH/src/github.com/kubeedge/kubeedge
make all WHAT=edgecore
```
2.2 Configure edgecore
```yaml
# cat /opt/kubeedge/conf/edge.yaml
mqtt:
    server: tcp://127.0.0.1:1883 # external mqtt broker url.
    internal-server: tcp://127.0.0.1:1884 # internal mqtt broker url.
    mode: 0 # 0: internal mqtt broker enable only. 1: internal and external mqtt broker enable. 2: external mqtt broker enable only.
    qos: 0 # 0: QOSAtMostOnce, 1: QOSAtLeastOnce, 2: QOSExactlyOnce.
    retain: false # if the flag set true, server will store the message and can be delivered to future subscribers.
    session-queue-size: 100 # A size of how many sessions will be handled. default to 100.

edgehub:
    websocket:
        url: wss://0.0.0.0:10000/e632aba927ea4ac2b575ec1603d56f10/edge-node/events
        certfile: /etc/kubeedge/certs/edge.crt
        keyfile: /etc/kubeedge/certs/edge.key
        handshake-timeout: 30 # second
        write-deadline: 15 # second
        read-deadline: 15 # second
    quic:
        url: 127.0.0.1:10001
        cafile: /etc/kubeedge/ca/rootCA.crt
        certfile: /etc/kubeedge/certs/edge.crt
        keyfile: /etc/kubeedge/certs/edge.key
        handshake-timeout: 30 # second
        write-deadline: 15 # second
        read-deadline: 15 # second
    controller:
        protocol: websocket # websocket, quic
        heartbeat: 15 # second
        project-id: e632aba927ea4ac2b575ec1603d56f10
        node-id: edge-node

edged:
    register-node-namespace: default
    hostname-override: edge-node
    interface-name: eth0
    edged-memory-capacity-bytes: 7852396000
    node-status-update-frequency: 10 # second
    device-plugin-enabled: false
    gpu-plugin-enabled: false
    image-gc-high-threshold: 80 # percent
    image-gc-low-threshold: 40 # percent
    maximum-dead-containers-per-container: 1
    docker-address: unix:///var/run/docker.sock
    runtime-type: docker
    remote-runtime-endpoint: unix:///var/run/dockershim.sock
    remote-image-endpoint: unix:///var/run/dockershim.sock
    runtime-request-timeout: 2
    podsandbox-image: kubeedge/pause:3.1 # kubeedge/pause:3.1 for x86 arch, kubeedge/pause-arm:3.1 for arm arch, kubeedge/pause-arm64 for arm64 arch
    image-pull-progress-deadline: 60 # second
    cgroup-driver: cgroupfs
    node-ip: ""
    cluster-dns: ""
    cluster-domain: ""

mesh:
    loadbalance:
        strategy-name: RoundRobin
```
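The edgehub websocket url in edge.yaml is not arbitrary: it encodes the cloudcore address plus the `project-id` and `node-id` from the controller section. A small sketch composing it, using the sample values from the config (replace 0.0.0.0 with the real cloudcore IP in practice):

```shell
# Compose the edgehub websocket url:
#   wss://<cloudcore-ip>:<port>/<project-id>/<node-id>/events
CLOUDCORE_IP="0.0.0.0"                          # cloudcore address (placeholder)
PORT=10000                                      # cloudhub websocket port
PROJECT_ID="e632aba927ea4ac2b575ec1603d56f10"   # project-id from edge.yaml
NODE_ID="edge-node"                             # node-id from edge.yaml
URL="wss://${CLOUDCORE_IP}:${PORT}/${PROJECT_ID}/${NODE_ID}/events"
echo "${URL}"
```

If you rename the edge node, remember to change `node-id`, `hostname-override`, and this url consistently.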
2.3 Run edgecore
```shell
cp $GOPATH/src/github.com/kubeedge/kubeedge/edge/edgecore /opt/kubeedge
cd /opt/kubeedge
./edgecore
# or run in the background:
# nohup ./edgecore > edgecore.log 2>&1 &
```
2.4 Run edgecore with systemd
```shell
sudo ln build/tools/edgecore.service /etc/systemd/system/edgecore.service
sudo systemctl daemon-reload
sudo systemctl start edgecore
```
3. Verify the deployment
```shell
kubectl apply -f $GOPATH/src/github.com/kubeedge/kubeedge/build/deployment.yaml
```
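After applying the sample deployment, check from the cloud side that the edge node is Ready and that the pod was scheduled onto it. A sketch, assuming the node is named `edge-node` as in the configs above:

```shell
# Sketch: verify the edge node registered and the sample pod landed on it.
EDGE_NODE="edge-node"
if command -v kubectl >/dev/null 2>&1; then
    kubectl get nodes "${EDGE_NODE}" || true
    kubectl get pods -o wide --field-selector spec.nodeName="${EDGE_NODE}" || true
fi
```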