Optimizing the Native Kubernetes Scheduler: Zuoyebang's Practice

The essence of a scheduling system is to match compute services or tasks with suitable resources so that they can run stably and efficiently, and, on top of that, to further increase resource-usage density. Many factors affect how an application runs: CPU, memory, IO, heterogeneous devices, and more all influence its behavior. At the same time, individual and aggregate resource requests, hardware/software/policy constraints, affinity requirements, data locality, and interference between workloads, together with the interplay of different scenarios such as periodic traffic, compute-intensive jobs, and mixed online/offline deployment, add a great deal of variation to scheduling decisions.

The scheduler's goal is to deliver this capability quickly and accurately, but when resources are limited, speed and accuracy often conflict and have to be traded off against each other. This article shares the problems Zuoyebang ran into while using Kubernetes in production and the solutions we eventually worked out, in the hope that they will be useful to other developers.

## Scheduler Principles and Design

The overall workflow of the default Kubernetes scheduler can be summarized by the diagram below:

![Figure](https://static001.geekbang.org/infoq/5b/5bd8c4755399fdc7ab049a99e60bb15d.webp)

### Two Control Loops

1. The first control loop is called the Informer Path. Its main job is to start a set of informers that watch changes to scheduling-related API objects in the cluster, such as Pods, Nodes, and Services. For example, when a Pod waiting to be scheduled is created, the scheduler adds it to the scheduling queue through the Pod informer's handler. The scheduler is also responsible for keeping the Scheduler Cache up to date and uses this cache as its reference information to speed up the whole scheduling flow.

2. The second control loop is the main loop that actually schedules Pods, called the Scheduling Path. It keeps taking pending Pods off the scheduling queue and runs a two-step algorithm to pick the best node:

- From all nodes in the cluster, select every node that *can* run the Pod; this step is called Predicates (filtering).
- Score the nodes that survived the previous step with a series of priority algorithms and pick the *best* one, i.e. the highest-scoring node; this step is called Priorities (scoring).

Once scheduling is done, the scheduler writes the chosen node into the Pod's spec.nodeName; this step is called Bind. To avoid calling the API server on the critical path and hurting performance, at this point the scheduler only updates the relevant Pod and Node information in the Scheduler Cache. This optimistic way of updating API objects is called Assume in Kubernetes. Only afterwards does the scheduler start a goroutine that asynchronously sends the Bind update to the API server; even if that request fails it is not a big deal, because everything comes right again once the Scheduler Cache is synchronized.
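To make the Scheduling Path concrete, below is a minimal, self-contained Go sketch of one scheduling cycle: filter (Predicates), score (Priorities), assume the result in a local cache, then bind asynchronously. All types and helpers here (Pod, Node, schedulerCache, bind, and the CPU-only policy) are simplified stand-ins for illustration, not the actual kube-scheduler code.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// Pod and Node are simplified stand-ins for the real API objects.
type Pod struct {
	Name       string
	RequestCPU int64 // millicores requested
	NodeName   string
}

type Node struct {
	Name           string
	AllocatableCPU int64
	RequestedCPU   int64
}

// schedulerCache mimics the Scheduler Cache: the scheduler's optimistic view of the cluster.
type schedulerCache struct {
	mu    sync.Mutex
	nodes map[string]*Node
}

// assume records the placement locally before the API server has confirmed it.
func (c *schedulerCache) assume(pod *Pod, nodeName string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	pod.NodeName = nodeName
	c.nodes[nodeName].RequestedCPU += pod.RequestCPU
}

// filter is the Predicates step: keep only nodes that can run the pod.
func filter(pod *Pod, nodes []*Node) []*Node {
	var feasible []*Node
	for _, n := range nodes {
		if n.AllocatableCPU-n.RequestedCPU >= pod.RequestCPU {
			feasible = append(feasible, n)
		}
	}
	return feasible
}

// score is the Priorities step: here we simply prefer the node with the most headroom left.
func score(pod *Pod, n *Node) int64 {
	free := n.AllocatableCPU - n.RequestedCPU - pod.RequestCPU
	return free * 100 / n.AllocatableCPU
}

// bind stands in for the asynchronous API-server call that sets spec.nodeName.
func bind(pod *Pod, nodeName string) error {
	fmt.Printf("bound %s to %s\n", pod.Name, nodeName)
	return nil
}

// scheduleOne runs one scheduling cycle for a single pod: filter, score, assume, async bind.
func scheduleOne(cache *schedulerCache, pod *Pod, nodes []*Node) {
	feasible := filter(pod, nodes)
	if len(feasible) == 0 {
		fmt.Printf("pod %s is unschedulable\n", pod.Name)
		return
	}
	best, bestScore := feasible[0], int64(-1)
	for _, n := range feasible {
		if s := score(pod, n); s > bestScore {
			best, bestScore = n, s
		}
	}
	cache.assume(pod, best.Name) // optimistic update keeps the main loop off the API server
	go func() {                  // Bind runs off the critical path
		if err := bind(pod, best.Name); err != nil {
			fmt.Println("bind failed; the cache is corrected on the next sync")
		}
	}()
}

func main() {
	nodes := []*Node{
		{Name: "node-a", AllocatableCPU: 4000, RequestedCPU: 3000},
		{Name: "node-b", AllocatableCPU: 4000, RequestedCPU: 1000},
	}
	cache := &schedulerCache{nodes: map[string]*Node{"node-a": nodes[0], "node-b": nodes[1]}}
	scheduleOne(cache, &Pod{Name: "web-1", RequestCPU: 500}, nodes)
	time.Sleep(50 * time.Millisecond) // give the async bind goroutine time to finish
}
```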
## Problems and Challenges of Scheduling in Large Clusters

The default Kubernetes scheduling policies perform very well on small clusters, but as business volume grows and workloads become more diverse, their limitations gradually show: the scheduling dimensions are limited, there is no concurrency, performance bottlenecks appear, and the scheduler keeps getting more complex.

Today a single one of our clusters has on the order of a thousand nodes and more than 100,000 Pods, with an overall resource allocation rate above 60%, and it covers complex scenarios such as GPUs and mixed online/offline deployment. Along the way we have run into quite a few scheduling problems.

#### Problem 1: Uneven node load at peak hours

The default scheduler bases its decisions on a workload's request values. If requests are set too high, resources are wasted; if they are set too low, nodes can end up severely imbalanced on CPU at peak hours. Affinity policies can mitigate this to some extent, but they require constantly maintaining a large number of rules, which is very costly. Moreover, a service's request often does not reflect its real load, which introduces an error, and at peak hours that error shows up as uneven load across nodes.

A real-time scheduler feeds each node's live metrics into node scoring at scheduling time, but real-time scheduling does not actually fit many scenarios, especially workloads with a strongly regular pattern. For example, most of our services see evening-peak traffic tens of times higher than off-peak traffic, so peak and off-peak resource usage differ enormously, and releases usually happen off-peak. With a real-time scheduler the nodes look well balanced at release time, yet by the evening peak huge differences appear between them. Many real-time schedulers respond to such differences with a rebalancing strategy that reschedules Pods, but migrating service Pods during peak hours is unrealistic from a high-availability standpoint. Real-time scheduling clearly falls far short of our needs.

#### Our solution: scheduling on predicted peak load

This situation calls for predictive scheduling. Using historical peak-hour usage of CPU, IO, network, logs, and other resources, we run regression calculations over optimal arrangements of services on nodes to obtain weight coefficients for each service and resource, and extend node scoring with these resource weights. In other words, we use past peak data to predict each node's service usage at the next peak, and let that prediction influence the node scores produced during scheduling.

#### Problem 2: More scheduling dimensions

As the business becomes more diverse, more scheduling dimensions need to be taken into account, logs for example. The log collector cannot collect logs at an unlimited rate, and log collection works per node, so collection rates have to be balanced and must not differ too much between nodes. Some services use a moderate amount of CPU but produce a very large volume of logs, and logs are not part of the default scheduler's decision. When several Pods of such log-heavy services land on the same node, log reporting on that machine can partially lag behind.

#### Our solution: completing the scheduling decision factors

This problem clearly calls for completing the scheduling decision. We extended the predictive scoring strategy with a log decision factor: logs are treated as another node resource, and each service's log volume, obtained from historical monitoring, is taken into account when computing the score.
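The sketch below illustrates this kind of predictive, multi-dimensional scoring, combining the peak-prediction idea from Problem 1 with the log factor added here. It is an illustrative sketch rather than our production scoring extension: the PeakProfile type, the fixed weights, and scoreNode are hypothetical, and in practice the weight coefficients come from the regression over historical peak data described above.

```go
package main

import "fmt"

// PeakProfile holds a service's predicted peak usage per resource dimension.
// In the real system these numbers would come from regression over historical
// peak-hour metrics; here they are hard-coded for illustration.
type PeakProfile struct {
	CPU float64 // predicted peak CPU, in cores
	IO  float64 // predicted peak disk IO, normalized 0..1
	Log float64 // predicted peak log output, in MB/s
}

// weights express how strongly each dimension influences the score (hypothetical values).
var weights = PeakProfile{CPU: 0.6, IO: 0.2, Log: 0.2}

// NodeState describes one candidate node: its sustainable peak capacity and the
// predicted peaks of the pods already placed on it.
type NodeState struct {
	Name     string
	Capacity PeakProfile
	Placed   []PeakProfile
}

// predictedPeak sums the predicted peak usage of everything on the node plus the incoming pod.
func predictedPeak(n NodeState, pod PeakProfile) PeakProfile {
	total := pod
	for _, p := range n.Placed {
		total.CPU += p.CPU
		total.IO += p.IO
		total.Log += p.Log
	}
	return total
}

// scoreNode returns 0..100; higher means more headroom left at the predicted peak.
func scoreNode(n NodeState, pod PeakProfile) float64 {
	peak := predictedPeak(n, pod)
	util := func(used, capacity float64) float64 {
		if capacity == 0 {
			return 1
		}
		u := used / capacity
		if u > 1 {
			u = 1 // node would be saturated at peak
		}
		return u
	}
	weighted := weights.CPU*util(peak.CPU, n.Capacity.CPU) +
		weights.IO*util(peak.IO, n.Capacity.IO) +
		weights.Log*util(peak.Log, n.Capacity.Log)
	return (1 - weighted) * 100
}

func main() {
	pod := PeakProfile{CPU: 2, IO: 0.1, Log: 5} // predicted peak of the pod being scheduled
	nodes := []NodeState{
		{Name: "node-a", Capacity: PeakProfile{CPU: 16, IO: 1, Log: 50},
			Placed: []PeakProfile{{CPU: 10, IO: 0.5, Log: 30}}},
		{Name: "node-b", Capacity: PeakProfile{CPU: 16, IO: 1, Log: 50},
			Placed: []PeakProfile{{CPU: 4, IO: 0.2, Log: 10}}},
	}
	for _, n := range nodes {
		fmt.Printf("%s scores %.1f\n", n.Name, scoreNode(n, pod))
	}
}
```

In the real scheduler this kind of logic runs as an extension of the scoring (Priorities) phase, so log-heavy or peak-heavy Pods are spread out before the peak arrives rather than rebalanced during it.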
#### Problem 3: Scheduling latency from large-scale service scaling

As business complexity keeps growing, peak hours bring large numbers of cron jobs and concentrated bursts of elastic scaling, and scheduling large batches (thousands of Pods) at the same time drives scheduling latency up. Both kinds of workload are sensitive to scheduling time, and for cron jobs in particular the extra latency is clearly noticeable. The root cause is that scheduling a Pod is, at its core, an allocation of cluster resources, which in the scheduling flow means the filtering and scoring phases run sequentially, one Pod at a time. As a result, once the cluster grows large enough, large batches of updates lead to perceptible Pod scheduling delays.

#### Our solution: a dedicated Job scheduler, more concurrent scheduling domains, and batch scheduling

The most direct way to fix low throughput is to turn serial work into parallel work: for resource-contention scenarios, partition resources into domains as fine-grained as possible and let the domains proceed in parallel. Based on this strategy we split out an independent Job scheduler and use Serverless as the underlying resource for running Jobs. Kubernetes Serverless requests an independent sandbox for each Job Pod, so the Job scheduler can run fully in parallel, as sketched below.
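As a rough, hypothetical sketch of the serial-to-parallel change (the task type, queue sizes, and placement logic are illustrative, not our actual implementation), the following gives each resource domain its own queue and its own scheduling loop, so a burst of Job Pods no longer waits behind Pods from other domains:

```go
package main

import (
	"fmt"
	"sync"
)

// task represents one pod waiting to be scheduled (simplified placeholder type).
type task struct {
	Pod    string
	Domain string // "online", "gpu", or "job"
}

// domainScheduler owns one resource domain; domains never contend with each other,
// so their scheduling loops can run in parallel.
type domainScheduler struct {
	name  string
	queue chan task
}

// run is the per-domain scheduling loop; the actual filter/score/bind logic is elided.
func (d *domainScheduler) run(wg *sync.WaitGroup) {
	defer wg.Done()
	for t := range d.queue {
		// Filtering and scoring here only look at this domain's nodes and only
		// need to lock state owned by this domain.
		fmt.Printf("[%s] scheduled %s\n", d.name, t.Pod)
	}
}

func main() {
	domains := map[string]*domainScheduler{
		"online": {name: "forecast-scheduler", queue: make(chan task, 1024)},
		"gpu":    {name: "gpu-scheduler", queue: make(chan task, 1024)},
		"job":    {name: "job-scheduler", queue: make(chan task, 1024)},
	}

	var wg sync.WaitGroup
	for _, d := range domains {
		wg.Add(1)
		go d.run(&wg) // one concurrent scheduling loop per resource domain
	}

	// A burst of cron-job pods arrives together with online and GPU pods;
	// the job burst no longer queues behind the other domains.
	incoming := []task{
		{Pod: "cron-1", Domain: "job"}, {Pod: "cron-2", Domain: "job"},
		{Pod: "web-1", Domain: "online"}, {Pod: "train-1", Domain: "gpu"},
	}
	for _, t := range incoming {
		domains[t.Domain].queue <- t
	}

	for _, d := range domains {
		close(d.queue)
	}
	wg.Wait()
}
```

Because Serverless gives every Job Pod its own sandbox, there is little contention inside the job domain, which is what makes a fully parallel loop reasonable there in particular.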
The two charts below compare node CPU usage at the evening peak:

![Figure](https://static001.geekbang.org/infoq/2d/2dbfa4a1562ec5d1e10cdc88b6ba6a0b.webp)

*Node CPU usage at the evening peak with the native scheduler*

![Figure](https://static001.geekbang.org/infoq/cf/cfbdaa6b34ebba6deca8097cb083493b.webp)

*Node CPU usage at the evening peak with the optimized scheduler*

## Summary

Worker-node resources, GPU resources, and Serverless resources are the three heterogeneous resource domains in our clusters, and the services running on them differ fundamentally. We use three schedulers, forecast-scheduler, gpu-scheduler, and job-scheduler, to manage Pod scheduling on these three domains.

- The forecast scheduler manages most online services; it extends the resource dimensions and adds the predictive scoring strategy.
- The GPU scheduler manages the allocation of GPU machines, which run online inference and offline training. The ratio between the two fluctuates constantly: during peak hours offline training scales in and online inference scales out, and off-peak the opposite happens. It also runs some offline image-processing tasks to reuse the relatively idle CPU and other resources on the GPU machines.
- The Job scheduler manages cron-job scheduling. Cron jobs are numerous, are created and destroyed frequently, use resources in a highly fragmented way, and have stricter timeliness requirements, so we schedule them onto Serverless as much as possible. This squeezes out the machine resources the cluster would otherwise keep as headroom for large numbers of jobs and raises resource utilization.

![Figure](https://static001.geekbang.org/infoq/2b/2b3836e05650199627daaccec925c480.webp)

## Future Evolution

- Finer-grained resource domains: partition the resource domains down to the node level and coordinate scheduling with node-level locking.

- Preemption and rescheduling. Normally, when a Pod fails to schedule it stays in the Pending state and waits to be rescheduled once the Pod is updated or the cluster's resources change. The Kubernetes scheduler does, however, have a preemption feature that lets a high-priority Pod, when its own scheduling fails, evict some lower-priority Pods from a node so that the high-priority Pod can run. So far we have not used the scheduler's preemption capability. Even though the strategies above improve scheduling accuracy, we still cannot avoid business-driven imbalance in certain scenarios, and it is precisely in such abnormal scenarios that rescheduling proves its worth; rescheduling may well become an automatic remediation mechanism for abnormal situations in the future.

**About the author:**

吕亚霖, head of the Architecture R&D team in Zuoyebang's Infrastructure group. He joined Zuoyebang in 2019 and is responsible for the technology middle platform and infrastructure. At Zuoyebang he has led the evolution toward a cloud-native architecture and driven the adoption of containerization, service governance, a Go microservice framework, and DevOps practices.