RBAC-Based Tenant Resource Limits

Introduction to Resource Limits

Namespace level:

ResourceQuota: defines usage limits for all resources in a given namespace, covering quotas on compute resources, storage resources, and object counts.

LimitRange: namespace-scoped resource management that supplies constraints and defaults, including the min, max, default (the default limit), and defaultRequest values for pods and containers.

Container level:

Once a ResourceQuota is configured, creating a pod whose containers do not set compute resources fails with an error.

Once a ResourceQuota is configured, if a LimitRange is also configured, pods and containers that do not set compute resources pick up the defaults from the LimitRange instead of being rejected.

Example:

Create a ResourceQuota for the namespace test:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: test-compute-resources
  namespace: test
spec:
  hard:
    pods: "4"
    requests.cpu: "1"
    requests.memory: 1Gi
    limits.cpu: "2"
    limits.memory: 2Gi
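
Apply it (the namespace must exist first; the manifest filename here is assumed):

kubectl create namespace test
kubectl apply -f test-compute-resources.yaml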

Inspect it:

kubectl describe quota test-compute-resources --namespace=test

Name:                    test-compute-resources
Namespace:               test
Resource                 Used  Hard
--------                 ----  ----
limits.cpu               0     2
limits.memory            0     2Gi
pods                     0     4
requests.cpu             0     1
requests.memory          0     1Gi

Create a LimitRange for the namespace test:

apiVersion: v1
kind: LimitRange
metadata:
  name: mylimits
  namespace: test
spec:
  limits:
  - max:
      cpu: "2"
      memory: 1Gi
    min:
      cpu: 200m
      memory: 6Mi
    type: Pod
  - default:
      cpu: 300m
      memory: 200Mi
    defaultRequest:
      cpu: 200m
      memory: 100Mi
    max:
      cpu: "2"
      memory: 1Gi
    min:
      cpu: 100m
      memory: 3Mi
    type: Container

Inspect it:

kubectl describe limits mylimits --namespace=test

Name:   mylimits
Namespace:  test
Type        Resource      Min      Max      Default Request      Default Limit      Max Limit/Request Ratio
----        --------      ---      ---      ---------------      -------------      -----------------------
Pod         cpu           200m     2        -                    -                  -
Pod         memory        6Mi      1Gi      -                    -                  -
Container   cpu           100m     2        200m                 300m               -
Container   memory        3Mi      1Gi      100Mi                200Mi              -
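
To see the defaults being applied, create a pod in test with no resources section (a minimal sketch; the pod name and image are arbitrary):

apiVersion: v1
kind: Pod
metadata:
  name: defaults-demo
  namespace: test
spec:
  containers:
  - name: nginx
    image: nginx

kubectl describe pod defaults-demo --namespace=test should then show requests of cpu 200m / memory 100Mi and limits of cpu 300m / memory 200Mi, injected from the LimitRange defaults above.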

How Limits Work for Different Compute Resource Types

Compute resource types:

CPU is a compressible resource in Kubernetes: when CPU is scarce, containers are throttled.

Memory is an incompressible resource in Kubernetes: when memory is scarce, pods are killed according to their priority, lowest-priority first.

Pod priority:

Kubernetes divides pods into three QoS classes:

Guaranteed (fully reliable): limits are set and equal to requests.

Burstable (elastic, fairly reliable): requests are set lower than limits, or only requests are set.

Best-Effort (best effort, least reliable): neither limits nor requests are set.

The three classes above are listed in decreasing priority: under memory pressure, Best-Effort pods are killed first and Guaranteed pods last. The snippets below show how each class is expressed in a container spec.
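
A sketch of the resources stanzas that produce each class (the values are illustrative):

# Guaranteed: limits == requests for every resource
resources:
  requests:
    cpu: 500m
    memory: 256Mi
  limits:
    cpu: 500m
    memory: 256Mi

# Burstable: requests below limits (or only requests set)
resources:
  requests:
    cpu: 250m
    memory: 128Mi
  limits:
    cpu: 500m
    memory: 256Mi

# Best-Effort: omit the resources section entirely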

Verification

With a ResourceQuota configured but no LimitRange:

An already-running pod without limits and requests is unaffected, but once that pod is deleted, a new one cannot be created.

Creating a new pod without resource settings: rejected.

Creating a container with resource settings: it must satisfy both request <= limit and request <= the node's remaining resources; if either condition fails, the pod cannot be created. A spec that satisfies both is sketched below.
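
A pod that fits the quota above and satisfies both conditions might look like this (name and image are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: quota-demo
  namespace: test
spec:
  containers:
  - name: nginx
    image: nginx
    resources:
      requests:
        cpu: 200m
        memory: 100Mi
      limits:
        cpu: 300m
        memory: 200Mi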

When a namespace's ResourceQuota exceeds the node's remaining resources:

The ResourceQuota itself can still be created, but once the node runs short of resources, new pods cannot be created even though they stay within the quota.
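
To compare the quota against what a node can actually offer, check its allocatable capacity (node names vary; the grep window is just a convenience):

kubectl describe node <node-name> | grep -A 6 Allocatable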

Combining RBAC with ResourceQuota to Limit Tenant Resources

Create a User

Create the user testrq:

testrq-csr.json

{
  "CN": "testrq",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "TianJin",
       "L": "TianJin",
       "O": "k8s",
      "OU": "System"
    }
  ]
}

userconfig.sh

for targetName in testrq; do
	# Sign a client certificate for the user with the cluster's root CA
	cfssl gencert --ca /root/ssl/k8s-root-ca.pem --ca-key /root/ssl/k8s-root-ca-key.pem --config /root/ssl/k8s-gencert.json --profile kubernetes $targetName-csr.json | cfssljson --bare $targetName
	echo "Create $targetName kubeconfig..."
	# Build a kubeconfig that embeds the CA and the new client certificate
	kubectl config set-cluster kubernetes --certificate-authority=/root/ssl/k8s-root-ca.pem --embed-certs=true --server=https://10.0.13.158:6443 --kubeconfig=$targetName.kubeconfig
	kubectl config set-credentials $targetName --client-certificate=$targetName.pem --client-key=$targetName-key.pem --embed-certs=true --kubeconfig=$targetName.kubeconfig
	kubectl config set-context kubernetes --cluster=kubernetes --user=$targetName --kubeconfig=$targetName.kubeconfig
	kubectl config use-context kubernetes --kubeconfig=$targetName.kubeconfig
done

Run ./userconfig.sh (note: the root CA certificate and related files referenced in the script must match those of the k8s cluster).

You will see testrq.kubeconfig generated in the current directory; it is used later to create k8s resources as this user.
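
A quick sanity check that the kubeconfig was assembled correctly (the user has no permissions yet, so resource operations would still be denied at this point):

kubectl config view --kubeconfig=testrq.kubeconfig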

Bind the User to the Namespace

Create a Role and RoleBinding

testrq-role.yaml

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: testrq
  namespace: test
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - pods/attach
  - pods/exec
  - pods/portforward
  - pods/proxy
  verbs:
  - create
  - delete
  - deletecollection
  - get
  - list
  - patch
  - update
  - watch
- apiGroups:
  - ""
  resources:
  - configmaps
  - endpoints
  - persistentvolumeclaims
  - replicationcontrollers
  - replicationcontrollers/scale
  - services
  - services/proxy
  verbs:
  - create
  - delete
  - deletecollection
  - get
  - list
  - patch
  - update
  - watch
- apiGroups:
  - ""
  resources:
  - secrets
  - serviceaccounts
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - bindings
  - events
  - limitranges
  - namespaces/status
  - pods/log
  - pods/status
  - replicationcontrollers/status
  - resourcequotas
  - resourcequotas/status
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - serviceaccounts
  verbs:
  - impersonate
- apiGroups:
  - apps
  resources:
  - daemonsets
  - deployments
  - deployments/rollback
  - deployments/scale
  - replicasets
  - replicasets/scale
  - statefulsets
  verbs:
  - create
  - delete
  - deletecollection
  - get
  - list
  - patch
  - update
  - watch
- apiGroups:
  - autoscaling
  resources:
  - horizontalpodautoscalers
  verbs:
  - create
  - delete
  - deletecollection
  - get
  - list
  - patch
  - update
  - watch
- apiGroups:
  - batch
  resources:
  - cronjobs
  - jobs
  verbs:
  - create
  - delete
  - deletecollection
  - get
  - list
  - patch
  - update
  - watch
- apiGroups:
  - extensions
  resources:
  - daemonsets
  - deployments
  - deployments/rollback
  - deployments/scale
  - ingresses
  - replicasets
  - replicasets/scale
  - replicationcontrollers/scale
  verbs:
  - create
  - delete
  - deletecollection
  - get
  - list
  - patch
  - update
  - watch
- apiGroups:
  - policy
  resources:
  - poddisruptionbudgets
  verbs:
  - create
  - delete
  - deletecollection
  - get
  - list
  - patch
  - update
  - watch
- apiGroups:
  - ""
  resources:
  - namespaces
  verbs:
  - get
  - list
  - watch
  
testrq-rolebinding.yaml

kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: testrq 
  namespace: test
subjects:
- kind: User
  name: testrq # the target user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: testrq # the Role being granted
  apiGroup: rbac.authorization.k8s.io

Create the Role and RoleBinding:

kubectl create -f testrq-role.yaml

kubectl create -f testrq-rolebinding.yaml

The steps above complete an initial resource restriction for a tenant: the tenant can only operate inside the designated namespace, and that namespace has resource limits in place through its ResourceQuota.
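
You can verify the binding from the admin side with kubectl auth can-i and user impersonation:

kubectl auth can-i create pods --namespace=test --as=testrq
kubectl auth can-i create pods --namespace=default --as=testrq

The first command should print yes, the second no.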

Example Command for the Tenant

kubectl create -f test.yaml --kubeconfig testrq.kubeconfig
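
Here test.yaml stands for any manifest the tenant is allowed to apply. A hypothetical example that fits within the quota and LimitRange above:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: tenant-demo
  namespace: test
spec:
  replicas: 2
  selector:
    matchLabels:
      app: tenant-demo
  template:
    metadata:
      labels:
        app: tenant-demo
    spec:
      containers:
      - name: nginx
        image: nginx
        resources:
          requests:
            cpu: 200m
            memory: 100Mi
          limits:
            cpu: 300m
            memory: 200Mi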

Some Thoughts

1 For a single-replica pod, to avoid it being killed by mistake when resources run short, estimate its compute needs and then set limit = request, giving the pod the highest QoS priority.

2 For multi-replica pods, stopping one replica has little impact, so limit > request is acceptable. But for multi-replica pods under high concurrency, stopping one pod can temporarily slow down the service. In that scenario we can pair the limits with Horizontal Pod Autoscaling and set the scaling target below the request, so that replicas are added before usage even reaches the request. The limit may then look useless, but once the pod count hits the maximum, the limit takes effect and caps resource usage. A sketch of such an HPA follows this list.

3 For pods of very low importance, where a restart does no harm, limit and request can be left unset so the cluster's resources are used more fully.
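
A minimal sketch of the HPA from point 2, using the autoscaling/v2 API (the target Deployment name and the 70% threshold are illustrative; utilization is measured as a percentage of the request, so a target below 100% scales out before usage reaches the request):

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: tenant-demo
  namespace: test
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: tenant-demo
  minReplicas: 2
  maxReplicas: 4
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70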
