KEDA provides fine-grained autoscaling for event-driven Kubernetes workloads, including scaling to and from zero. KEDA acts as a Kubernetes metrics server and lets users define autoscaling rules with a dedicated Kubernetes custom resource definition.
KEDA can run both in the cloud and at the edge, integrates natively with Kubernetes components such as the Horizontal Pod Autoscaler, and has no external dependencies.
How it works
KEDA performs two key roles within Kubernetes. First, it acts as an agent that activates and deactivates deployments, scaling to and from zero when there are no events. Second, it acts as a Kubernetes metrics server, exposing rich event data such as queue length or stream lag to the Horizontal Pod Autoscaler to drive scale-out. It is then up to the deployment to consume the events directly from the source. This preserves rich event integration and lets gestures such as completing or abandoning a queue message work out of the box.
Event sources and scalers
KEDA has a number of "scalers" that can both detect whether a deployment should be activated or deactivated and feed custom metrics for a specific event source. Today, scaler support is available for the following:
- AWS CloudWatch
- AWS Simple Queue Service
- Azure Event Hub
- Azure Service Bus Queues and Topics
- Azure Storage Queues
- GCP PubSub
- Kafka
- Liiklus
- Prometheus
- RabbitMQ
- Redis Lists
More event sources are being added. Planned and backlog items include:
- AWS Kinesis
- Kubernetes Events
- MongoDB
- CockroachDB
- MQTT
The ScaledObject custom resource definition
In order for a deployment to be driven by an event source, a ScaledObject custom resource needs to be deployed. A ScaledObject contains information about the deployment to scale, metadata about the event source (e.g., a connection-string secret, a queue name), the polling interval, and the cooldown period. The ScaledObject produces the corresponding autoscaling resource (an HPA definition) to scale the deployment. When a ScaledObject is deleted, the corresponding HPA definition is cleaned up.
For example:
apiVersion: keda.k8s.io/v1alpha1
kind: ScaledObject
metadata:
  name: kafka-scaledobject
  namespace: default
  labels:
    deploymentName: azure-functions-deployment
spec:
  scaleTargetRef:
    deploymentName: azure-functions-deployment
  pollingInterval: 30
  triggers:
  - type: kafka
    metadata:
      # Required
      brokerList: localhost:9092
      consumerGroup: my-group # Make sure this consumer group name matches the one actually consuming the topic
      topic: test-topic
      lagThreshold: "50"
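To see what `lagThreshold: "50"` means in practice: the generated HPA treats the total consumer lag as an external metric with a target average of 50 per replica, so the desired replica count is roughly ceil(totalLag / lagThreshold). The sketch below illustrates that arithmetic; it is not KEDA's actual code, and the cap at the partition count is our own assumption (running more Kafka consumers than partitions gains nothing):

```go
package main

import (
	"fmt"
	"math"
)

// desiredReplicas approximates the HPA AverageValue formula applied to
// the kafka trigger's lag metric:
//   desired = ceil(totalLag / lagThreshold)
// capped at the topic's partition count.
func desiredReplicas(totalLag, lagThreshold, partitions int) int {
	desired := int(math.Ceil(float64(totalLag) / float64(lagThreshold)))
	if desired > partitions {
		desired = partitions
	}
	return desired
}

func main() {
	fmt.Println(desiredReplicas(120, 50, 10)) // 3: ceil(120 / 50) replicas
	fmt.Println(desiredReplicas(900, 50, 10)) // 10: capped at the partition count
	fmt.Println(desiredReplicas(0, 50, 10))   // 0: no backlog
}
```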
Deployment
KEDA can be deployed either with helm or with plain yaml. To deploy with yaml, run:
kubectl apply -f KedaScaleController.yaml
KedaScaleController.yaml is as follows:
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: scaledobjects.keda.k8s.io
spec:
  group: keda.k8s.io
  version: v1alpha1
  names:
    kind: ScaledObject
    singular: scaledobject
    plural: scaledobjects
    shortNames:
    - sco
    categories:
    - keda
  scope: Namespaced
  additionalPrinterColumns:
  - name: Deployment
    type: string
    JSONPath: .spec.scaleTargetRef.deploymentName
  - name: Triggers
    type: string
    JSONPath: .spec.triggers[*].type
  - name: Age
    type: date
    JSONPath: .metadata.creationTimestamp
  validation:
    openAPIV3Schema:
      properties:
        spec:
          required: [triggers]
          type: object
          properties:
            scaleType:
              type: string
              enum: [deployment, job]
            pollingInterval:
              type: integer
            cooldownPeriod:
              type: integer
            minReplicaCount:
              type: integer
            maxReplicaCount:
              type: integer
            scaleTargetRef:
              required: [deploymentName]
              type: object
              properties:
                deploymentName:
                  type: string
                containerName:
                  type: string
            triggers:
              type: array
              items:
                type: object
                required: [type, metadata]
                properties:
                  type:
                    type: string
                  authenticationRef:
                    type: object
                    properties:
                      name:
                        type: string
                  metadata:
                    type: object
                    additionalProperties:
                      type: string
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: triggerauthentications.keda.k8s.io
spec:
  group: keda.k8s.io
  version: v1alpha1
  names:
    kind: TriggerAuthentication
    singular: triggerauthentication
    plural: triggerauthentications
    shortNames:
    - ta
    - triggerauth
    categories:
    - keda
  scope: Namespaced
---
apiVersion: v1
kind: Namespace
metadata:
  name: keda
---
kind: ServiceAccount
apiVersion: v1
metadata:
  name: keda-operator
  namespace: keda
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: keda-operator-service-account-role-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: keda-operator
  namespace: keda
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: keda:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: keda-operator
  namespace: keda
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: keda-auth-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: keda-operator
  namespace: keda
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: keda-operator
  name: keda-operator
  namespace: keda
spec:
  replicas: 1
  selector:
    matchLabels:
      app: keda-operator
  template:
    metadata:
      labels:
        app: keda-operator
      name: keda-operator
    spec:
      serviceAccountName: keda-operator
      containers:
      - name: keda-operator
        image: kedacore/keda:latest
        args:
        - /adapter
        - --secure-port=6443
        - --logtostderr=true
        - --v=2
        ports:
        - containerPort: 6443
          name: https
        - containerPort: 8080
          name: http
        volumeMounts:
        - mountPath: /tmp
          name: temp-vol
      volumes:
      - name: temp-vol
        emptyDir: {}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: custom-metrics-resource-reader
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: custom-metrics-resource-reader
subjects:
- kind: ServiceAccount
  name: keda-operator
  namespace: keda
---
apiVersion: v1
kind: Service
metadata:
  name: keda-operator
  namespace: keda
spec:
  ports:
  - name: https
    port: 443
    targetPort: 6443
  - name: http
    port: 80
    targetPort: 8080
  selector:
    app: keda-operator
---
apiVersion: apiregistration.k8s.io/v1beta1
kind: APIService
metadata:
  name: v1beta1.external.metrics.k8s.io
spec:
  service:
    name: keda-operator
    namespace: keda
  group: external.metrics.k8s.io
  version: v1beta1
  insecureSkipTLSVerify: true
  groupPriorityMinimum: 100
  versionPriority: 100
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: custom-metrics-resource-reader
rules:
- apiGroups:
  - ""
  resources:
  - namespaces
  - pods
  - services
  - external
  verbs:
  - get
  - list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: keda-hpa-controller-custom-metrics
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: horizontal-pod-autoscaler
  namespace: kube-system
Code architecture walkthrough
The key code lives under the pkg folder, as shown in the figure below:
- adapter and provider mainly implement a custom metrics adapter, following the conventions of github.com/kubernetes-incubator/custom-metrics-apiserver. For external metrics, it is essentially a matter of implementing the GetExternalMetric and ListAllExternalMetrics methods.
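The shape of that provider contract can be mimicked with local stand-in types. The types below are simplified stand-ins, not the real signatures from custom-metrics-apiserver (the real GetExternalMetric also takes a label selector and returns an ExternalMetricValueList); this is only a sketch of the idea:

```go
package main

import "fmt"

// Simplified stand-ins for the types in custom-metrics-apiserver.
type ExternalMetricInfo struct{ Metric string }

type ExternalMetricValue struct {
	MetricName string
	Value      int64
}

// ExternalMetricsProvider is the rough shape of what the provider implements.
type ExternalMetricsProvider interface {
	GetExternalMetric(namespace string, info ExternalMetricInfo) ([]ExternalMetricValue, error)
	ListAllExternalMetrics() []ExternalMetricInfo
}

// lagProvider is a toy provider serving a single metric: kafka consumer lag.
type lagProvider struct{ lag int64 }

func (p *lagProvider) GetExternalMetric(ns string, info ExternalMetricInfo) ([]ExternalMetricValue, error) {
	return []ExternalMetricValue{{MetricName: info.Metric, Value: p.lag}}, nil
}

func (p *lagProvider) ListAllExternalMetrics() []ExternalMetricInfo {
	return []ExternalMetricInfo{{Metric: "lagThreshold"}}
}

func main() {
	var p ExternalMetricsProvider = &lagProvider{lag: 120}
	vals, _ := p.GetExternalMetric("default", ExternalMetricInfo{Metric: "lagThreshold"})
	fmt.Println(vals[0].MetricName, vals[0].Value) // lagThreshold 120
}
```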
- apis and client are both generated by Kubernetes scaffolding. apis holds the CRD definition of the ScaledObject object, while client contains the keda client, informers, and so on. Anyone who has extended Kubernetes with CRDs will find these familiar.
- controller is a Kubernetes controller for ScaledObject. In practical Kubernetes development, once a CRD is created you must write a corresponding controller that takes concrete action on the CRD's add, update, and delete events.
- signals is fairly simple: it wraps context.Context.
- kubernetes is also simple: based on the config, it creates the keda client and the kube client for the controller to use.
- handler is key: essentially both the sync logic of the controller and the metrics-serving endpoints of the metrics server are implemented here.
- scalers holds the implementations for the different event sources. If we want to add an event source of our own, this is where to implement it.
To illustrate: suppose a client (kubectl or client-go) deploys a ScaledObject CRD targeting deployment A, with the goal of autoscaling on the Kafka message backlog. The controller watches the creation of the CRD and handles the add event: based on the CRD's contents it creates an HPA object, converting the CRD spec into an HPA. The stock Kubernetes HPA then reads the number of messages in the specified Kafka topic through the kafka scaler in scalers, and the HPA controller ultimately decides whether to scale out or in.
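For reference, the HPA generated from the kafka example earlier would look roughly like the following. This is a hand-written approximation rather than output captured from a real cluster; the object name and the min/max replica values are assumptions:

```yaml
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: keda-hpa-azure-functions-deployment   # name is an assumption
  namespace: default
spec:
  minReplicas: 1        # assumed defaults; ScaledObject can override
  maxReplicas: 100
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: azure-functions-deployment
  metrics:
  - type: External      # KEDA's metrics are external metrics
    external:
      metricName: lagThreshold
      targetAverageValue: "50"
```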
Conclusion
KEDA is currently in an experimental phase. Microsoft and Red Hat hope the community will join in.
KEDA does not implement an HPA of its own; what ultimately does the scaling is still the upstream HPA. KEDA merely generates an HPA object from the CRD contents, except that the metrics are external metrics. KEDA's main contribution is integrating the various event sources.