Kubernetes has its own scheduling algorithms and policies, implemented by the Scheduler component on the Master, which automatically places Pods based on each Node's resource usage; most k8s clusters run with this default scheduling policy. Sometimes, however, we want certain Pods to be scheduled onto a specific node, which we can do with the nodeSelector field in the Pod's YAML file.
1. First, let's look at the node labels in our cluster environment
kubectl get node --show-labels
Compare the labels on each node. To deploy a Pod on a specific node, we need a label unique to that node, e.g. the k8s-m1 node's own labels: kubernetes.io/hostname=k8s-m1 and node=node1.
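Once you have found a candidate label, you can confirm it selects exactly the node you expect. A sketch of that check, using the k8s-m1 labels mentioned above:

```shell
# List only the nodes carrying the private label we plan to use in nodeSelector;
# exactly one node (k8s-m1) should be returned
kubectl get nodes -l node=node1

# Show the full label set of a single node for comparison
kubectl describe node k8s-m1 | grep -A 5 Labels
```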
2. Modify the Pod's RC file. First, let's look at the cluster's default scheduling behavior:
apiVersion: v1
kind: ReplicationController
metadata:
  name: ecs-portal-rc
spec:
  replicas: 3
  selector:
    app: ecs-portal
  template:
    metadata:
      labels:
        app: ecs-portal
        typename: ecs-portal
    spec:
      hostname: ecs-portal-server
      nodeSelector:
        nodeType: controller
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - ecs-portal
            topologyKey: kubernetes.io/hostname
      containers:
      - name: ecs-portal
        image: ecs-portal:Eip.R20200312
        resources:
As you can see, the nodeSelector here matches a public label shared by all three nodes, so the default scheduling policy decides which node each Pod is placed on.
Now let's edit the YAML file again and change the nodeSelector field:
apiVersion: v1
kind: ReplicationController
metadata:
  name: ecs-core-rc
spec:
  replicas: 1
  selector:
    app: ecs-core
  template:
    metadata:
      labels:
        # Important: these labels need to match the selector above
        # The api server enforces this constraint.
        app: ecs-core
        typename: ecs-core
    spec:
      hostname: ecs-core-server
      nodeSelector:
        node: node1
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - ecs-core
            topologyKey: kubernetes.io/hostname
      containers:
      - name: ecs-core
        image: ecs-core:opAdapt0312-1
Finally, re-apply the corresponding YAML file.
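A minimal sketch of the re-apply step, assuming the manifest above is saved as ecs-core-rc.yaml (the filename is an assumption). Note that changing nodeSelector on an RC only affects newly created Pods, so any existing Pod must be deleted before it gets rescheduled onto the selected node:

```shell
# Re-apply the edited manifest (filename assumed)
kubectl apply -f ecs-core-rc.yaml

# Delete the old Pod so the RC recreates it on the node matched by nodeSelector
kubectl delete pod -l app=ecs-core

# Verify the new Pod landed on the expected node
kubectl get pods -l app=ecs-core -o wide
```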
3. Besides the node labels the system creates automatically, we can also attach our own label to a specific node:
kubectl label nodes k8s-m2 type=backEndNode2
Then update the nodeSelector field in the YAML file and regenerate the RC.
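For the label added above, the nodeSelector in the Pod template would become (fragment only; the rest of the RC spec is unchanged):

```yaml
    spec:
      nodeSelector:
        type: backEndNode2
```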