Kubernetes has its own scheduling algorithms and policies, implemented by the Scheduler component on the Master: it automatically places Pods on Nodes according to each Node's resource usage, and most k8s clusters run with this default policy. Sometimes, however, we want certain Pods scheduled onto a specific node; that is done with the nodeSelector field in the yaml file.
1. First, check the node labels in our cluster

```shell
kubectl get node --show-labels
```

Compare the labels across the nodes. To pin a pod to a specific node, we need a label unique to that node; for example, the k8s-m1 node carries the labels kubernetes.io/hostname=k8s-m1 and node=node1.
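When the full label listing is noisy, it can help to inspect a single node or to filter nodes by a label selector. A small sketch, assuming the k8s-m1 / node=node1 labels from above:

```shell
# Show only one node, with all of its labels
kubectl get node k8s-m1 --show-labels

# List just the nodes that carry a given label (label selector syntax)
kubectl get nodes -l node=node1
```

The second command is a quick way to confirm that the label you plan to use in nodeSelector matches exactly one node.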
2. Modify the pod's RC file; first, look at the cluster's default scheduling setup
```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: ecs-portal-rc
spec:
  replicas: 3
  selector:
    app: ecs-portal
  template:
    metadata:
      labels:
        app: ecs-portal
        typename: ecs-portal
    spec:
      hostname: ecs-portal-server
      nodeSelector:
        nodeType: controller
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - ecs-portal
            topologyKey: kubernetes.io/hostname
      containers:
      - name: ecs-portal
        image: ecs-portal:Eip.R20200312
        resources:
```
Here the nodeSelector matches a public label shared by all three nodes, so the scheduler is free to assign node resources to the pods according to the default policy;
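To see which nodes the replicas actually landed on, you can list the pods with their node column; a quick check, using the app=ecs-portal label from the RC above:

```shell
# -o wide adds a NODE column showing where each pod was scheduled
kubectl get pods -l app=ecs-portal -o wide
```

With the podAntiAffinity rule in the RC, you should see each replica on a different node, since pods with the same app label are forbidden from sharing a kubernetes.io/hostname.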
Now let's edit the yaml file and change the nodeSelector to a node-specific label:
```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: ecs-core-rc
spec:
  replicas: 1
  selector:
    app: ecs-core
  template:
    metadata:
      labels:
        # Important: these labels need to match the selector above
        # The api server enforces this constraint.
        app: ecs-core
        typename: ecs-core
    spec:
      hostname: ecs-core-server
      nodeSelector:
        node: node1
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - ecs-core
            topologyKey: kubernetes.io/hostname
      containers:
      - name: ecs-core
        image: ecs-core:opAdapt0312-1
```
Finally, re-apply the corresponding yaml file.
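As a sketch (the filename is an assumption), applying the file and then checking where the pod landed:

```shell
# Filename is hypothetical; use the path of your edited RC manifest
kubectl apply -f ecs-core-rc.yaml

# Verify the pod was placed on the node carrying node=node1
kubectl get pods -l app=ecs-core -o wide
```

Note that changing a ReplicationController's pod template only affects pods created afterwards; already-running pods are not rescheduled, so you may need to delete them and let the RC recreate them on the target node.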
3. Besides the labels a node already has, we can also attach our own custom label to a chosen node

```shell
kubectl label nodes k8s-m2 type=backEndNode2
```

Then update the nodeSelector field in the yaml file accordingly and recreate the RC.
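With the new label in place, the pod template's nodeSelector only needs to reference it. A minimal fragment (the surrounding RC spec stays as in section 2):

```yaml
# Inside the pod template spec; type=backEndNode2 is the custom label added above
nodeSelector:
  type: backEndNode2
```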