[TOC]
# The interaction between Pod priority and Quality of Service
To keep core services running, a cluster sometimes has to sacrifice the less critical ones. In Kubernetes there are two knobs for this:
- Set the pod's Quality of Service (QoS) class: Guaranteed, Burstable, or BestEffort (see the sketch after this list for how the class is derived)
- Set the pod's priority
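For context, the QoS class is not set directly: it is derived from the containers' resource requests and limits, while the priority is a separate integer taken from the pod's PriorityClass. Below is a minimal Python sketch of the derivation rules (an illustration only, not the kubelet's actual code; the dict shape is my own simplification):

```python
# Illustration only: how a pod's QoS class follows from requests/limits.
# Priority is unrelated to this -- it comes from spec.priorityClassName.
def qos_class(containers: list[dict]) -> str:
    any_request_or_limit = False
    all_guaranteed = True
    for c in containers:
        requests, limits = c.get("requests", {}), c.get("limits", {})
        if requests or limits:
            any_request_or_limit = True
        for resource in ("cpu", "memory"):
            # Guaranteed requires requests == limits for cpu and memory in every
            # container (an unset request defaults to the limit).
            if not limits.get(resource) or requests.get(resource, limits[resource]) != limits[resource]:
                all_guaranteed = False
    if not any_request_or_limit:
        return "BestEffort"
    return "Guaranteed" if all_guaranteed else "Burstable"

print(qos_class([{"requests": {"cpu": "100m", "memory": "128Mi"},
                  "limits":   {"cpu": "100m", "memory": "128Mi"}}]))  # Guaranteed
print(qos_class([{"requests": {"memory": "128Mi"}}]))                 # Burstable
print(qos_class([{}]))                                                # BestEffort
```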
But when the two point in different directions, which one wins? For example:
> In a resource shortage, will a high-priority pod whose QoS class is Burstable or BestEffort preempt a low-priority pod whose QoS class is Guaranteed?
Scratching your head yet?
After digging through the documentation, this is the official answer: it depends on the situation. In short, there are two separate stages to consider: pod scheduling (preemption) and node-pressure eviction.
## The pod scheduling stage
The key sentence here is:

> The scheduler's preemption logic does not consider QoS when choosing preemption targets.

In other words, when the scheduler picks preemption targets it only looks at pod priority. Whether a pod is Guaranteed, Burstable, or BestEffort makes no difference: if its priority is the lowest and removing it would free enough room for the incoming pod, it is very likely the one that gets preempted.
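As a toy illustration of that sentence, here is a hedged Python sketch (not kube-scheduler's real code; the actual algorithm also requires victims to have lower priority than the incoming pod and weighs many other constraints). Victims are chosen purely by priority, so a low-priority Guaranteed pod can be chosen before a high-priority BestEffort one:

```python
# Illustration only: preemption victims are picked by priority, never by QoS.
from dataclasses import dataclass

@dataclass
class Pod:
    name: str
    priority: int    # value from the PriorityClass
    qos: str         # "Guaranteed" | "Burstable" | "BestEffort" -- ignored below
    memory_mib: int  # memory this pod occupies on the node

def pick_preemption_victims(running: list[Pod], needed_mib: int) -> list[Pod]:
    victims, freed = [], 0
    # Lowest priority first; the QoS class plays no role in the ordering.
    for pod in sorted(running, key=lambda p: p.priority):
        if freed >= needed_mib:
            break
        victims.append(pod)
        freed += pod.memory_mib
    return victims

running = [
    Pod("guaranteed-low-prio",  priority=0,    qos="Guaranteed", memory_mib=512),
    Pod("besteffort-high-prio", priority=1000, qos="BestEffort", memory_mib=512),
]
# Prints ['guaranteed-low-prio']: the Guaranteed pod is chosen because its priority is lower.
print([p.name for p in pick_preemption_victims(running, needed_mib=512)])
```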
## The node-pressure eviction stage
The node-pressure eviction stage is a bit more involved.

> As a result, kubelet ranks and evicts pods in the following order:
Rule 1: usage exceeds requests

First comes the QoS class:

> BestEffort or Burstable pods where the usage exceeds requests.

Then the Priority:

> These pods are evicted based on their Priority

And finally the usage level:

> and then by how much their usage level exceeds the request.
Rule 2: usage is less than requests

> Guaranteed pods and Burstable pods where the usage is less than requests are evicted last, based on their Priority.

Here the factors are, again, QoS first and then Priority.
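Putting those criteria together, here is a minimal Python sketch of the ranking (an illustration of the quoted rules, not the kubelet's source): pods whose memory usage exceeds their request are evicted first, ordered by priority and then by how far over the request they are; pods at or under their request are evicted last, ordered by priority.

```python
# Illustration only: an ordering key that mimics the quoted kubelet ranking.
from dataclasses import dataclass

@dataclass
class Pod:
    name: str
    priority: int
    memory_request_mib: int  # 0 for BestEffort pods
    memory_usage_mib: int

def eviction_rank_key(pod: Pod):
    over = pod.memory_usage_mib - pod.memory_request_mib
    exceeds = over > 0
    # Sorted ascending, so smaller keys are evicted first:
    #   1) usage > request before usage <= request
    #   2) lower priority first
    #   3) larger overage first (only meaningful in the first group)
    return (0 if exceeds else 1, pod.priority, -over if exceeds else 0)

def eviction_order(pods: list[Pod]) -> list[Pod]:
    return sorted(pods, key=eviction_rank_key)
```

The table below summarizes the resulting order for the common QoS/priority combinations: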
| Order | Condition | Tie-breaking within the group |
|---|---|---|
| 1 | QoS: BestEffort, Priority: 0, memory request: 0, memory usage: >0 | |
| 2 | QoS: BestEffort, Priority: >0, memory request: 0, memory usage: >0 | Lower Priority is evicted first; with equal priority, higher usage is evicted first |
| 3 | QoS: Burstable, Priority: 0, memory usage > request | |
| 4 | QoS: Burstable, Priority: >0, memory usage > request | Lower Priority is evicted first; with equal priority, higher usage is evicted first |
| 5 | QoS: Burstable, Priority: 0, memory usage < request | |
| 6 | QoS: Burstable, Priority: >0, memory usage < request | Lower Priority is evicted first; with equal priority, higher usage is evicted first |
| 7 | QoS: Guaranteed, Priority: 0 | |
| 8 | QoS: Guaranteed, Priority: >0 | Lower Priority is evicted first; with equal priority, higher request is evicted first |
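Applying the sketch from above to a handful of made-up pods (the names and numbers are hypothetical) reproduces that order:

```python
# Continuing the sketch above: sample pods mirroring a few rows of the table.
pods = [
    Pod("guaranteed-high-prio",    priority=1000, memory_request_mib=512, memory_usage_mib=256),
    Pod("burstable-over-request",  priority=0,    memory_request_mib=128, memory_usage_mib=384),
    Pod("besteffort",              priority=0,    memory_request_mib=0,   memory_usage_mib=512),
    Pod("burstable-under-request", priority=0,    memory_request_mib=512, memory_usage_mib=256),
]
# Prints: ['besteffort', 'burstable-over-request', 'burstable-under-request', 'guaranteed-high-prio']
print([p.name for p in eviction_order(pods)])
```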
So, did you get all that?
References: