Pod Scheduling in k8s


Pod Scheduling

By default, which node a pod runs on is worked out by the scheduler component using its scheduling algorithms; the process is not under manual control.

In practice, however, this is often not enough: in many cases we want to steer certain pods onto certain nodes. How can that be done?

This requires understanding the rules k8s uses to schedule pods. k8s offers four categories of scheduling:

  • Automatic scheduling: the node is chosen entirely by the scheduler through its algorithms
  • Directed scheduling: nodeName, nodeSelector
  • Affinity scheduling: nodeAffinity, podAffinity, podAntiAffinity
  • Taint (toleration) scheduling: taints, tolerations

Directed scheduling

Directed scheduling means declaring nodeName or nodeSelector on a pod in order to place it on the desired node. Note that this scheduling is mandatory: even if the target node does not exist, the pod is still bound to it; it simply fails to run.

nodeName

nodeName forcibly constrains a pod to the node with the given name. This approach skips the scheduler's logic entirely and writes the pod straight into the target node's pod list.

Next, try it out: create a pod-nodename.yaml file.

apiVersion: v1
kind: Pod
metadata:
  name: pod-nodename
  namespace: dev
spec:
  containers:
  - name: nginx
    image: nginx:1.17.1
  nodeName: node1         # schedule onto node1

Apply the configuration file:

[root@master ~]# kubectl create -f pod-nodename.yaml
pod/pod-nodename created
[root@master ~]# kubectl get pod pod-nodename -n dev -o wide
NAME           READY   STATUS    RESTARTS   AGE   IP            NODE    NOMINATED NODE   READINESS GATES
pod-nodename   1/1     Running   0          49s   10.244.2.35   node1   <none>           <none>

You can see the pod is running on node1.

Next, delete the pod and change nodeName in the configuration file to node3:

apiVersion: v1
kind: Pod
metadata:
  name: pod-nodename
  namespace: dev
spec:
  containers:
  - name: nginx
    image: nginx:1.17.1
  nodeName: node3         # schedule onto node3, which does not exist

Apply the configuration file:

[root@master ~]# kubectl delete -f pod-nodename.yaml
pod "pod-nodename" deleted
[root@master ~]# vim pod-nodename.yaml
[root@master ~]# kubectl create -f pod-nodename.yaml
pod/pod-nodename created
[root@master ~]# kubectl get pod pod-nodename -n dev -o wide
NAME           READY   STATUS    RESTARTS   AGE   IP       NODE    NOMINATED NODE   READINESS GATES
pod-nodename   0/1     Pending   0          21s   <none>   node3   <none>           <none>

You can see that although the pod was assigned to node3, node3 does not exist, so the pod cannot start and stays Pending.

 

nodeSelector

nodeSelector schedules a pod onto nodes that carry the specified labels. It is implemented through the k8s label-selector mechanism: before the pod is created, the scheduler uses the MatchNodeSelector predicate to match labels and find the target node, and then binds the pod to it. The matching rule is a hard constraint.

Next, try it out:

1. First add a label to each node:

[root@master ~]# kubectl label nodes node1 nodeenv=pro
node/node1 labeled
[root@master ~]# kubectl label nodes node2 nodeenv=test
node/node2 labeled

2. Create a pod-nodeselector.yaml file and use it to create a pod:

apiVersion: v1
kind: Pod
metadata:
  name: pod-nodeselector
  namespace: dev
spec:
  containers:
  - name: nginx
    image: nginx:1.17.1
  nodeSelector:
    nodeenv: pro   # schedule onto a node that has the label nodeenv=pro

Apply the configuration file (process omitted here):

[root@master ~]# kubectl get pod pod-nodeselector -n dev -o wide
NAME               READY   STATUS    RESTARTS   AGE   IP            NODE    NOMINATED NODE   READINESS GATES
pod-nodeselector   1/1     Running   0          22m   10.244.2.36   node1   <none>           <none>

You can see the pod has been scheduled onto node1.

 

Affinity scheduling

The two directed-scheduling methods above are easy to use, but they share a problem: if no node satisfies the conditions, the pod will not run at all, even when there are usable nodes left in the cluster. That limits their applicability.

To address this, k8s also provides affinity scheduling. It extends nodeSelector so that, through configuration, the scheduler prefers nodes that satisfy the conditions but can still fall back to nodes that do not, which makes scheduling more flexible.

Affinity comes in three flavours:

  • nodeAffinity (node affinity): takes nodes as the target; decides which nodes a pod can be scheduled onto
  • podAffinity (pod affinity): takes pods as the target; decides which existing pods a new pod may share a topology domain with
  • podAntiAffinity (pod anti-affinity): takes pods as the target; decides which existing pods a new pod must not share a topology domain with

On when to use affinity (and anti-affinity):

  • Affinity: if two applications interact frequently, it is worth using affinity to keep them as close together as possible and reduce the performance cost of network traffic.
  • Anti-affinity: when an application is deployed with multiple replicas, it is worth using anti-affinity to spread the instances across nodes and improve the service's availability (see the Deployment sketch after this list).
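As a concrete illustration of the anti-affinity use case, here is a minimal sketch (the Deployment name web and its app=web label are hypothetical, not taken from the examples in this article): every replica refuses to share a node with another pod carrying the same label, so the replicas end up on different nodes.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
  namespace: dev
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app                 # each replica repels other pods labelled app=web
                operator: In
                values: ["web"]
            topologyKey: kubernetes.io/hostname   # "one per node"
      containers:
      - name: nginx
        image: nginx:1.17.1

Because this uses the hard (required) form, replicas beyond the number of schedulable nodes stay Pending; the soft (preferred) form can be used instead if that is not acceptable.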

nodeAffinity

Notes on how the operators are used:

- matchExpressions:
  - key: nodeenv          # match nodes that have a label whose key is nodeenv
    operator: Exists
  - key: nodeenv          # match nodes whose nodeenv label value is "xxx" or "yyy"
    operator: In
    values: ["xxx","yyy"]
  - key: nodeenv          # match nodes whose nodeenv label value is greater than "xxx"
    operator: Gt
    values: ["xxx"]
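For completeness, node affinity also supports NotIn, DoesNotExist and Lt; a brief sketch along the same lines (the key is only illustrative):

- matchExpressions:
  - key: nodeenv          # match nodes that do NOT have a label with key nodeenv
    operator: DoesNotExist
  - key: nodeenv          # match nodes whose nodeenv label value is neither "xxx" nor "yyy"
    operator: NotIn
    values: ["xxx","yyy"]
  - key: nodeenv          # match nodes whose nodeenv label value is less than "xxx"
    operator: Lt
    values: ["xxx"]

Note that for Gt and Lt the single value is interpreted as an integer when compared against the node label.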

Let's first demonstrate requiredDuringSchedulingIgnoredDuringExecution (the hard limit).

Create pod-nodeaffinity-required.yaml:

apiVersion: v1
kind: Pod
metadata:
  name: pod-nodeaffinity-required
  namespace: dev
spec:
  containers:
  - name: nginx
    image: nginx:1.17.1
  affinity:          # affinity settings
    nodeAffinity:    # node affinity
      requiredDuringSchedulingIgnoredDuringExecution:   # hard limit
        nodeSelectorTerms:
        - matchExpressions:
          - key: nodeenv
            operator: In
            values: ["xxx","yyy"]

Create and apply the configuration file:

[root@master ~]# vim pod-nodeaffinity-required.yaml
[root@master ~]# kubectl create -f pod-nodeaffinity-required.yaml
pod/pod-nodeaffinity-required created
[root@master ~]# kubectl get pod pod-nodeaffinity-required -n dev
NAME                        READY   STATUS    RESTARTS   AGE
pod-nodeaffinity-required   0/1     Pending   0          14s

The pod fails to start; look at the detailed description:

[root@master ~]# kubectl describe pod pod-nodeaffinity-required -n dev
Events:
  Type     Reason            Age        From               Message
  ----     ------            ----       ----               -------
  Warning  FailedScheduling  <unknown>  default-scheduler  0/3 nodes are available: 3 node(s) didn't match node selector.
  Warning  FailedScheduling  <unknown>  default-scheduler  0/3 nodes are available: 3 node(s) didn't match node selector.

Delete the pod and edit the values in the configuration file:

apiVersion: v1
kind: Pod
metadata:
  name: pod-nodeaffinity-required
  namespace: dev
spec:
  containers:
  - name: nginx
    image: nginx:1.17.1
  affinity:          # affinity settings
    nodeAffinity:    # node affinity
      requiredDuringSchedulingIgnoredDuringExecution:   # hard limit
        nodeSelectorTerms:
        - matchExpressions:
          - key: nodeenv
            operator: In
            values: ["pro","yyy"]

[root@master ~]# kubectl delete -f pod-nodeaffinity-required.yaml
pod "pod-nodeaffinity-required" deleted
[root@master ~]# vim pod-nodeaffinity-required.yaml
[root@master ~]# kubectl create -f pod-nodeaffinity-required.yaml
pod/pod-nodeaffinity-required created
[root@master ~]# kubectl get pod pod-nodeaffinity-required -n dev
NAME                        READY   STATUS    RESTARTS   AGE
pod-nodeaffinity-required   1/1     Running   0          35s

This time the pod is created successfully.

 

Next, demonstrate preferredDuringSchedulingIgnoredDuringExecution (the soft limit).

Create pod-nodeaffinity-preferred.yaml:

apiVersion: v1
kind: Pod
metadata:
  name: pod-nodeaffinity-preferred
  namespace: dev
spec:
  containers:
  - name: nginx
    image: nginx:1.17.1
  affinity:          # affinity settings
    nodeAffinity:    # node affinity
      preferredDuringSchedulingIgnoredDuringExecution:   # soft limit
      - weight: 1
        preference:
          matchExpressions:
          - key: nodeenv
            operator: In
            values: ["xxx","yyy"]

Create and apply the configuration file:

[root@master ~]# vim pod-nodeaffinity-preferred.yaml
[root@master ~]# kubectl create -f pod-nodeaffinity-preferred.yaml
pod/pod-nodeaffinity-preferred created
[root@master ~]# kubectl get pod pod-nodeaffinity-preferred -n dev -o wide
NAME                         READY   STATUS    RESTARTS   AGE   IP            NODE    NOMINATED NODE   READINESS GATES
pod-nodeaffinity-preferred   1/1     Running   0          23s   10.244.2.38   node1   <none>           <none>

Even though no node carries nodeenv=xxx or nodeenv=yyy, the rule is only a preference, so the pod is still scheduled, in this case onto node1.

 

Notes on configuring nodeAffinity rules:

  • If nodeSelector and nodeAffinity are both defined, a node must satisfy both for the pod to run on it
  • If nodeAffinity specifies several nodeSelectorTerms, matching any one of them is enough
  • If a single nodeSelectorTerms entry contains several matchExpressions, a node must satisfy all of them to match (both points are illustrated in the sketch below)
  • If the labels of the node a pod is running on change during the pod's lifetime so that they no longer satisfy the pod's node affinity, the change is ignored and the pod keeps running
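A minimal sketch of the second and third points (the disktype key is hypothetical): the two nodeSelectorTerms are ORed, while the two matchExpressions inside the first term are ANDed.

affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:          # a node matching EITHER term is acceptable
      - matchExpressions:         # term 1: the node must satisfy BOTH expressions
        - key: nodeenv
          operator: In
          values: ["pro"]
        - key: disktype
          operator: Exists
      - matchExpressions:         # term 2
        - key: nodeenv
          operator: In
          values: ["test"]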

 

podAffinity

podAffinity takes pods that are already running as the reference and places the newly created pod in the same topology domain as the reference pod.

topologyKey specifies the scope that counts as "the same domain" during scheduling, for example:

  • kubernetes.io/hostname distinguishes by node, i.e. each node is its own domain
  • beta.kubernetes.io/os distinguishes by the node's operating-system type (a zone-level example is sketched below)
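A minimal sketch of a broader scope, assuming the nodes carry the well-known zone label (topology.kubernetes.io/zone on newer clusters, failure-domain.beta.kubernetes.io/zone on older ones): the new pod only has to land in the same zone as a podenv=pro pod, not necessarily on the same node.

affinity:
  podAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchExpressions:
        - key: podenv
          operator: In
          values: ["pro"]
      topologyKey: topology.kubernetes.io/zone   # "same zone" instead of "same node"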

Next, demonstrate requiredDuringSchedulingIgnoredDuringExecution.

First create a reference pod, pod-podaffinity-target.yaml:

apiVersion: v1
kind: Pod
metadata:
  name: pod-podaffinity-target
  namespace: dev
  labels:
    podenv: pro        # set a label
spec:
  containers:
  - name: nginx
    image: nginx:1.17.1
  nodeName: node1      # pin the reference pod explicitly to node1

[root@master ~]# vim pod-podaffinity-target.yaml
[root@master ~]# kubectl create -f pod-podaffinity-target.yaml
pod/pod-podaffinity-target created
[root@master ~]# kubectl get pod pod-podaffinity-target -n dev -o wide --show-labels
NAME                     READY   STATUS    RESTARTS   AGE     IP            NODE    NOMINATED NODE   READINESS GATES   LABELS
pod-podaffinity-target   1/1     Running   0          2m47s   10.244.2.39   node1   <none>           <none>            podenv=pro

Then create pod-podaffinity-required.yaml:

apiVersion: v1
kind: Pod
metadata:
  name: pod-podaffinity-required
  namespace: dev
spec:
  containers:
  - name: nginx
    image: nginx:1.17.1
  affinity:          # affinity settings
    podAffinity:     # pod affinity
      requiredDuringSchedulingIgnoredDuringExecution:   # hard limit
      - labelSelector:
          matchExpressions:    # match pods whose podenv value is in ["xxx","yyy"]
          - key: podenv
            operator: In
            values: ["xxx","yyy"]
        topologyKey: kubernetes.io/hostname

[root@master ~]# vim pod-podaffinity-required.yaml
[root@master ~]# kubectl create -f pod-podaffinity-required.yaml
pod/pod-podaffinity-required created
[root@master ~]# kubectl get pod pod-podaffinity-required -n dev -o wide --show-labels
NAME                       READY   STATUS    RESTARTS   AGE   IP       NODE     NOMINATED NODE   READINESS GATES   LABELS
pod-podaffinity-required   0/1     Pending   0          24s   <none>   <none>   <none>           <none>            <none>

Scheduling fails; check the scheduling events:

[root@master ~]# kubectl describe pod pod-podaffinity-required -n dev
Events:
  Type     Reason            Age        From               Message
  ----     ------            ----       ----               -------
  Warning  FailedScheduling  <unknown>  default-scheduler  0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 node(s) didn't match pod affinity rules.
  Warning  FailedScheduling  <unknown>  default-scheduler  0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 node(s) didn't match pod affinity rules.

Delete the pod and edit the configuration file again:

apiVersion: v1
kind: Pod
metadata:
  name: pod-podaffinity-required
  namespace: dev
spec:
  containers:
  - name: nginx
    image: nginx:1.17.1
  affinity:          # affinity settings
    podAffinity:     # pod affinity
      requiredDuringSchedulingIgnoredDuringExecution:   # hard limit
      - labelSelector:
          matchExpressions:    # match pods whose podenv value is in ["pro","yyy"]
          - key: podenv
            operator: In
            values: ["pro","yyy"]
        topologyKey: kubernetes.io/hostname

[root@master ~]# kubectl delete -f pod-podaffinity-required.yaml
pod "pod-podaffinity-required" deleted
[root@master ~]# vim pod-podaffinity-required.yaml
[root@master ~]# kubectl create -f pod-podaffinity-required.yaml
pod/pod-podaffinity-required created
[root@master ~]# kubectl get pod pod-podaffinity-required -n dev -o wide --show-labels
NAME                       READY   STATUS    RESTARTS   AGE   IP            NODE    NOMINATED NODE   READINESS GATES   LABELS
pod-podaffinity-required   1/1     Running   0          11s   10.244.2.40   node1   <none>           <none>            <none>

 

podAntiAffinity

podAntiAffinity takes pods that are already running as the reference and keeps the newly created pod out of the topology domain of the reference pod.

Its configuration options are the same as podAffinity's, so they are not explained again here; let's go straight to a test case.

Keep using the target pod from the previous case:

[root@master ~]# kubectl get pod -n dev -o wide --show-labels
NAME                       READY   STATUS    RESTARTS   AGE   IP            NODE    NOMINATED NODE   READINESS GATES   LABELS
pod-podaffinity-required   1/1     Running   2          24h   10.244.2.57   node1   <none>           <none>            <none>
pod-podaffinity-target     1/1     Running   2          24h   10.244.2.56   node1   <none>           <none>            podenv=pro

Create pod-podantiaffinity-required.yaml with the following content:

apiVersion: v1
kind: Pod
metadata:
  name: pod-podantiaffinity-required
  namespace: dev
spec:
  containers:
  - name: nginx
    image: nginx:1.17.1
  affinity:             # affinity settings
    podAntiAffinity:    # pod anti-affinity
      requiredDuringSchedulingIgnoredDuringExecution:   # hard limit
      - labelSelector:
          matchExpressions:    # match pods whose podenv value is in ["pro"]
          - key: podenv
            operator: In
            values: ["pro"]
        topologyKey: kubernetes.io/hostname

Apply the configuration file:

[root@master ~]# vim pod-podantiaffinity-required.yaml
[root@master ~]# kubectl create -f pod-podantiaffinity-required.yaml
pod/pod-podantiaffinity-required created
[root@master ~]# kubectl get pod -n dev -o wide
NAME                           READY   STATUS    RESTARTS   AGE   IP            NODE    NOMINATED NODE   READINESS GATES
pod-podaffinity-required       1/1     Running   2          24h   10.244.2.57   node1   <none>           <none>
pod-podaffinity-target         1/1     Running   2          24h   10.244.2.56   node1   <none>           <none>
pod-podantiaffinity-required   1/1     Running   0          10s   10.244.1.57   node2   <none>           <none>

You can see the new pod was scheduled onto node2, away from the pod labelled podenv=pro.

 

Taints and tolerations

Taints

The scheduling approaches so far all take the pod's point of view: attributes added to the pod decide whether it gets scheduled onto particular nodes. We can also take the node's point of view and add taint attributes to a node to decide whether pods are allowed to be scheduled onto it at all.

Once a node is tainted, there is a repelling relationship between it and pods: pods are refused scheduling onto the node, and pods already on it can even be evicted.

A taint has the format key=value:effect. key and value are the taint's label; effect describes what the taint does and supports three options:

  • PreferNoSchedule: k8s will try to avoid placing pods on a node with this taint, unless no other node is available
  • NoSchedule: k8s will not schedule new pods onto a node with this taint, but pods already running on it are unaffected
  • NoExecute: k8s will not schedule new pods onto a node with this taint and will also evict the pods already running on it


The kubectl commands for setting and removing taints are:

# set a taint
kubectl taint nodes nodeName key=value:effect

# remove a taint
kubectl taint nodes nodeName key:effect-

# remove all taints with the given key
kubectl taint nodes nodeName key-

Next, demonstrate the effect of taints:

  1. Prepare node node1 (to make the effect more obvious, node2 is temporarily stopped)
  2. Set a taint on node1: tag=ayanami:PreferNoSchedule, then create pod1 (pod1 runs)
  3. Change the taint on node1 to tag=ayanami:NoSchedule, then create pod2 (pod1 stays normal, pod2 fails)
  4. Change the taint on node1 to tag=ayanami:NoExecute, then create pod3 (all 3 pods fail)

Set a taint on node1 (PreferNoSchedule):

[root@master ~]# kubectl taint nodes node1 tag=ayanami:PreferNoSchedule
node/node1 tainted

Create pod1:

[root@master ~]# kubectl run taint1 --image=nginx:1.17.1 -n dev
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.

[root@master ~]# kubectl get pod -n dev
NAME                      READY   STATUS    RESTARTS   AGE
taint1-766c47bf55-lhmcj   1/1     Running   0          6m16s

Change the taint on node1 (remove PreferNoSchedule, set NoSchedule):

[root@master ~]# kubectl taint nodes node1 tag:PreferNoSchedule-
node/node1 untainted
[root@master ~]# kubectl taint nodes node1 tag=ayanami:NoSchedule
node/node1 tainted

Check the pods again; nothing has changed:

[root@master ~]# kubectl get pod -n dev
NAME                      READY   STATUS    RESTARTS   AGE
taint1-766c47bf55-lhmcj   1/1     Running   0          10m

Create a new pod, taint2, and check it:

[root@master ~]# kubectl run taint2 --image=nginx:1.17.1 -n dev
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
deployment.apps/taint2 created
[root@master ~]# kubectl get pod -n dev
NAME                      READY   STATUS    RESTARTS   AGE
taint1-766c47bf55-lhmcj   1/1     Running   0          11m
taint2-84946958cf-h9765   0/1     Pending   0          15s

The new pod cannot reach Running; describe taint2:

[root@master ~]# kubectl describe pod taint2 -n dev
Events:
  Type     Reason            Age        From               Message
  ----     ------            ----       ----               -------
  Warning  FailedScheduling  <unknown>  default-scheduler  0/3 nodes are available: 3 node(s) had taints that the pod didn't tolerate.
  Warning  FailedScheduling  <unknown>  default-scheduler  0/3 nodes are available: 3 node(s) had taints that the pod didn't tolerate.

Change the taint on node1 (remove NoSchedule, set NoExecute):

[root@master ~]# kubectl taint node node1 tag:NoSchedule-
node/node1 untainted
[root@master ~]# kubectl taint node node1 tag=ayanami:NoExecute
node/node1 tainted
[root@master ~]# kubectl get pod -n dev
NAME                      READY   STATUS    RESTARTS   AGE
taint1-766c47bf55-fdtqw   0/1     Pending   0          30s
taint2-84946958cf-26rfx   0/1     Pending   0          30s

Both existing pods have now stopped running (they were evicted from node1, and the replacement pods stay Pending). Create one more, taint3:

[root@master ~]# kubectl run taint3 --image=nginx:1.17.1 -n dev
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
[root@master ~]# kubectl get pod -n dev
NAME                      READY   STATUS    RESTARTS   AGE
taint1-766c47bf55-fdtqw   0/1     Pending   0          97s
taint2-84946958cf-26rfx   0/1     Pending   0          97s
taint3-57d45f9d4c-68pwr   0/1     Pending   0          9s

The new pod cannot be scheduled either.

Extra:

A cluster built with kubeadm adds a taint to the master node by default, which is why pods are never scheduled onto the master:

[root@master ~]# kubectl describe node master
Taints:             node-role.kubernetes.io/master:NoSchedule
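If you do want ordinary workloads on the master (for example in a single-node lab cluster), the taint can be dropped with the same removal syntax shown earlier; a minimal sketch, assuming the node is literally named master:

# remove the default master taint so pods may be scheduled onto it
kubectl taint nodes master node-role.kubernetes.io/master:NoSchedule-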

Tolerations

The previous section showed how taints let a node refuse pods. But what if we want to schedule a pod onto a tainted node anyway? That is what tolerations are for.


A taint means refuse, a toleration means ignore: the node uses a taint to refuse the pod, and the pod uses a toleration to ignore the refusal.

Let's look at the effect through a case first:

  1. In the previous section, node1 was given a NoExecute taint, so pods can no longer be scheduled onto it
  2. In this section, we add a toleration to a pod so that it can be scheduled onto node1

Create pod-toleration.yaml with the following content:

apiVersion: v1
kind: Pod
metadata:
  name: pod-toleration
  namespace: dev
spec:
  containers:
  - name: nginx
    image: nginx:1.17.1
  tolerations:              # add a toleration
  - key: "tag"              # key of the taint to tolerate
    operator: "Equal"       # operator
    value: "ayanami"        # value of the taint to tolerate
    effect: "NoExecute"     # effect of the toleration; it must match the effect of the taint on the node

Apply the configuration file:

[root@master ~]# vim pod-toleration.yaml
[root@master ~]# kubectl create -f pod-toleration.yaml
pod/pod-toleration created
[root@master ~]# kubectl get pod -n dev
NAME                      READY   STATUS    RESTARTS   AGE
pod-toleration            1/1     Running   0          9s
taint1-766c47bf55-fdtqw   0/1     Pending   0          34m
taint2-84946958cf-26rfx   0/1     Pending   0          34m
taint3-57d45f9d4c-68pwr   0/1     Pending   0          33m

Detailed toleration configuration:

key:                the key of the taint to tolerate; empty means match all keys
value:              the value of the taint to tolerate
operator:           how key and value are compared; Equal (the default) and Exists are supported
effect:             the taint effect to match; empty means match all effects
tolerationSeconds:  only meaningful when effect is NoExecute; how long the pod may stay on the node after the taint is added
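As a sketch of these fields (the pod name is hypothetical; the tag taint is the one used above): with operator Exists no value is needed, and tolerationSeconds bounds how long the pod may remain on the node once a NoExecute taint appears.

apiVersion: v1
kind: Pod
metadata:
  name: pod-toleration-exists
  namespace: dev
spec:
  containers:
  - name: nginx
    image: nginx:1.17.1
  tolerations:
  - key: "tag"                # tolerate any taint whose key is "tag", whatever its value
    operator: "Exists"
    effect: "NoExecute"
    tolerationSeconds: 3600   # after the taint is added, the pod may stay at most 1 hour before being evicted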