The Service Mesh Traffic Governance Technology Behind the Douyin Spring Festival Gala Campaign

## Background and Challenges

The 2021 CCTV Spring Festival Gala red-packet project left very little time for business developers: they had to finish developing, testing, and shipping the relevant code within a tight window.

The project involved many engineering teams, and therefore a large number of microservices. These services were written in different language stacks, including Go, C++, Java, Python, and Node, and ran in very complex environments such as containers, virtual machines, and physical machines. At different stages of the Douyin gala campaign, they might also need different traffic-governance policies to stay stable.

The infrastructure team therefore had to provide a unified traffic-governance capability for microservices that came from different teams and were written in different languages.

### How the Traditional Microservice Architecture Copes

Speaking of microservices, let us first look at how the traditional microservice architecture addresses these problems. As organizations grow and product business logic becomes more complex, backend architectures have gradually evolved from monolithic services into distributed microservices to speed up product iteration. Compared with a monolith, a distributed architecture is weaker in stability and observability.

To compensate, many capabilities have to be implemented in the microservice framework. For example:

- Microservices cooperate through mutual calls to deliver what the monolithic service used to do on its own, which involves **network communication**, along with the **request serialization and response deserialization** that communication requires.
- Calls between services involve **service discovery**.
- A distributed architecture may need different **traffic-governance policies** to keep inter-service calls stable.
- A microservice architecture also needs stronger observability, including logging, monitoring, and tracing.

With these capabilities in place, a microservice architecture can solve some of the problems mentioned above. But microservice frameworks have problems of their own:

- Implementing all of these capabilities in frameworks for multiple languages carries a **very high development and operations cost**.
- Shipping a new framework feature, or recalling a release, requires business developers to make changes and redeploy, which leads to **long-term, uncontrolled version fragmentation** of the framework.

So how do we solve these problems? There is a saying in software engineering: any problem can be solved by adding a layer of indirection. For the problems above, the industry has already given its answer, and that intermediate layer is the Service Mesh.

### Our In-House Service Mesh Implementation

Let me now introduce Volcengine's in-house Service Mesh implementation, starting with the architecture diagram below.

![figure](https://static001.geekbang.org/infoq/c0/c0f9297a04e8d7482c5b8510ba98ed48.png)

The blue Proxy box in the diagram is the Service Mesh data plane. It is a separate process deployed in the same runtime environment (the same container or the same machine) as the Service process that runs the business logic. This Proxy process handles all traffic flowing through the Service process, so the capabilities that previously had to live in the microservice framework, such as service discovery and traffic-governance policies, can all be implemented in this data-plane process.

The green box in the diagram is the Service Mesh control plane. The routing and governance policies we want to enforce are decided by the control plane. It is a service deployed remotely; it pushes traffic-governance rules down to the data-plane process, and the data-plane process executes them.

Note also that the data plane and control plane are independent of business logic: their releases and upgrades are relatively independent and do not require notifying business developers.

This architecture solves the problems raised earlier:

- We no longer need to reimplement the framework's many capabilities in every language; implementing them once in the Service Mesh data-plane process is enough.
- The data-plane process shields the Service from the various complex runtime environments; the Service process only needs to talk to the data-plane process.
- All kinds of flexible, changing traffic-governance policies can be customized through the Service Mesh control-plane service.

## Service Mesh Traffic Governance Techniques

Next, let me introduce the concrete traffic-governance techniques our Service Mesh implementation provides to keep microservices stable in the face of the Douyin gala traffic peak.

First, the core concerns of traffic governance:

- **Routing**: traffic leaves one microservice instance and, through service discovery or routing rules, flows to the next microservice. Many governance capabilities derive from this process.
- **Security**: as traffic moves between microservices, authentication, authorization, and encryption are needed to guarantee that its content is secure, authentic, and trustworthy.
- **Control**: in different scenarios, governance policies are adjusted dynamically to keep microservices stable.
- **Observability**: an especially important point; we need to record and trace the state of traffic, and work with alerting systems to discover and resolve problems in time.

![figure](https://static001.geekbang.org/infoq/12/12d1fad56f847bdf9bd558f8c7224abf.png)

These four core concerns, combined with concrete traffic-governance policies, improve microservice stability, secure traffic content, and raise developer productivity, while also strengthening overall disaster resilience in the face of black-swan events.

Now let us look at the specific traffic-governance policies the Service Mesh provides to keep microservices stable.

### Stability Policy: Circuit Breaking

First, circuit breaking. In a microservice architecture, single-node failures are the norm. When a single node fails, preserving the overall success rate is the problem circuit breaking solves.

Circuit breaking works from the client's perspective: it records, for each downstream node, the success rate of requests sent from the service to that node. When a node's success rate drops below a threshold, we trip the breaker for that node so that requests no longer land on the faulty node.

![figure](https://static001.geekbang.org/infoq/2d/2dcb5f7da46fbfe3191a29cd46a2c70c.png)

When the faulty node recovers, we also need a strategy for post-breaker recovery. For example, within a time window we can try sending some probe traffic to the faulty node: if it still cannot serve, we keep the breaker open; if it can, we gradually increase its traffic until it returns to the normal level. Circuit breaking lets the microservice architecture tolerate the unavailability of individual nodes and prevents the cascading (avalanche) failures that further deterioration would bring.

### Stability Policy: Rate Limiting

Another governance policy is rate limiting. It rests on this fact: an overloaded server's success rate at handling requests drops. For example, a server node that normally handles 2,000 QPS may, once overloaded (say, receiving 3,000 QPS), manage only 1,000 QPS or even less. Rate limiting proactively drops some traffic so that the server itself never becomes overloaded, preventing cascading failure.

### Stability Policy: Degradation

When a server node becomes even more overloaded, degradation is needed. Degradation generally covers two scenarios:

- Dropping traffic proportionally. For example, traffic from service A to service B can be dropped at some ratio (20% or even higher).
- Degrading bypass (non-critical) dependencies. Suppose service A depends on three services B, C, and D, and D is a bypass dependency; we can cut the traffic to D so that the freed resources go to computation on the critical path, preventing further overload.

![figure](https://static001.geekbang.org/infoq/12/12b61049b8deeb90a0244513039c59a8.png)

### Stability Policy: Dynamic Overload Protection

Circuit breaking, rate limiting, and degradation are all policies that act once errors occur; the best policy, however, is prevention, which brings us to dynamic overload protection.

As mentioned above, rate-limit thresholds are hard to pick. They are usually found by load-testing the QPS a node can carry, but that ceiling may differ across runtime environments and across nodes. Dynamic overload protection rests on this fact: service nodes with identical resource specifications do not necessarily have identical processing capacity.

How is dynamic overload protection implemented? It has three parts: overload detection, overload handling, and overload recovery. The crux is how to decide whether a server node is overloaded.

![figure](https://static001.geekbang.org/infoq/39/3975433e599b29a95e8e0c480c353add.png)

In the figure above, the Ingress Proxy is the Service Mesh data-plane process; it proxies traffic and forwards it to the Server process. T3 can be understood as the time from the Proxy process receiving a request to the Server finishing and returning. Can this time be used to detect overload? No, because the Server may depend on other nodes. If those nodes' processing time grows, the Server's processing time grows with it, and T3 no longer reflects whether the Server itself is overloaded.

T2 in the figure is the interval between the data-plane process forwarding the request to the Server and the Server actually starting to process it. Can T2 reflect overload? Yes. Why? Take an example: suppose the Server runs on a 4-core, 8 GB instance, which means it can process at most 4 requests truly concurrently. If 100 requests hit this Server, the remaining 96 sit pending. When the pending time grows too long, we can conclude the Server is overloaded.

Once Server overload is detected, how should it be handled? Many strategies exist; ours is to proactively drop low-priority requests according to request priority, relieving the Server's overload. Once the dropped traffic brings the Server back to its normal level, we perform the corresponding overload recovery so that QPS returns to normal.

Where does the "dynamic" come in? Overload detection is a continuous process with a fixed cycle. In each cycle, when the Server is detected as overloaded, we gradually drop some low-priority requests at a certain ratio. In the next cycle, if the Server is detected to have recovered, we gradually lower the drop ratio, letting the Server recover step by step.

The effect of dynamic overload protection is striking: it keeps services from collapsing under heavy traffic, and the policy was also widely applied to major services in the Douyin gala red-packet project.

### Stability Policy: Load Balancing

Next, the load-balancing policies. Suppose traffic from a service A must reach a downstream service B, and both A and B have 10,000 nodes; how do we ensure that traffic from A is spread evenly across B? There are many approaches; the commonly used ones are random, round-robin, and weighted round-robin, whose meanings follow from their names.

Another common strategy is consistent hashing. Hashing means routing requests to the same downstream node based on some feature of the request, establishing a mapping between requests and nodes. Consistent hashing mainly serves cache-sensitive services: it greatly improves cache hit rates, improves server performance, and lowers timeout error rates. When new nodes join the service, or some nodes become unavailable, the consistency of the hash disturbs the already-established mappings as little as possible.

There are many other load-balancing strategies, but they are not widely used in production scenarios, so I will not expand on them here.

### Stability Policy: Node Sharding

For a scenario with traffic on the scale of the Douyin gala red packets, another useful policy is node sharding. It rests on this fact: for a microservice with many nodes, the reuse rate of long-lived connections is very low. Microservices generally communicate over TCP: a TCP connection must be established first, and traffic flows over that connection. We try to reuse a single connection for sending requests and receiving responses, to avoid the extra cost of repeatedly opening and closing connections.

When node counts are very large, say both Service A and Service B have 10,000 nodes, they would need to maintain an enormous number of long-lived connections. To avoid that, an idle timeout is usually configured: when a connection carries no traffic for a certain interval, it is closed. At very large node scale, long-lived connections thus degrade into short-lived ones, and every request must establish a connection before it can communicate. The consequences are:

- Errors caused by connection timeouts.
- Reduced performance.

Node sharding solves this problem, and in fact we used it very widely in the Douyin gala red-packet scenario. The policy shards services with large node counts and establishes a mapping between shards, so that, as the figure below shows, traffic from shard 0 of service A always reaches shard 0 of service B.

![figure](https://static001.geekbang.org/infoq/77/7792539cf75f38d53fb522e883a0f077.png)

This greatly improves the reuse rate of long-lived connections. The original 10000 × 10000 correspondence becomes a constant-scale one, say 100 × 100. Through node sharding we greatly raised long-connection reuse, reduced connection-timeout errors, and improved microservice performance.

### Efficiency Policies

Rate limiting, circuit breaking, degradation, dynamic overload protection, and node sharding are all policies for microservice stability; there are also efficiency-related policies.

Let us first introduce the concepts of swimlanes and dyed traffic splitting.

![figure](https://static001.geekbang.org/infoq/b9/b9619cfb93b777d0b2320a4078d5150a.png)

A feature like the one in the figure may involve six microservices: a, b, c, d, e, and f. Swimlanes isolate this traffic: each lane contains a complete copy of the six microservices and can serve the feature end to end.

Dyed traffic splitting routes traffic into different lanes according to certain rules, which is then used to accomplish things such as:

- **Feature debugging**: during online development and testing, requests issued by an individual developer can be routed to that developer's own lane for feature debugging.
- **Failure drills**: once some of the gala services were developed, drills were needed to prepare for different failures; load-test traffic could then be routed by rule into a failure-drill lane.
- **Traffic recording and replay**: traffic matching some rule is recorded and later replayed, mainly for debugging bugs or for uncovering problems in abuse and fraud scenarios.

### Security Policies

Security policies are another important part of traffic governance. We mainly provide three:

- **Authorization**: restricting which services are allowed to call a given service.
- **Authentication**: when a service receives traffic, verifying that the traffic's claimed origin is genuine.
- **Mutual TLS (mTLS)**: to keep traffic content from being observed, tampered with, or attacked, mutual encryption is required.

Through these policies we provide reliable identity authentication and secure encrypted transport, and prevent transmitted traffic from being tampered with or attacked.

## Landing in the Gala Red-Packet Scenario

With the policies described above, we can greatly improve microservice stability and business developers' productivity. But landing this architecture also brought challenges, the biggest being performance. As we know, adding an intermediate layer buys extensibility and flexibility, but inevitably adds overhead, and that overhead is performance. Without a Service Mesh, a microservice framework's main costs come from serialization and deserialization, network communication, service discovery, and traffic-governance policies. With a Service Mesh, two new kinds of overhead appear:

**Protocol parsing**

For the traffic it proxies, the data-plane process must parse the traffic's protocol to some degree to know where it comes from and where it goes. But full protocol parsing is very expensive, so we add a header (a set of keys and values) carrying service metadata such as the traffic's origin; then parsing only one or two hundred bytes is enough to complete the routing.

**Inter-process communication**

The data-plane process proxies the business process's traffic, usually intercepted via iptables. That approach's overhead is very high, so we use direct inter-process communication instead: the mesh and the microservice framework agree on a unix domain socket address or a local port, and traffic is redirected there. Although this performs better than iptables, it still carries some extra overhead of its own.

![figure](https://static001.geekbang.org/infoq/66/660faae50a91c10d6d0bb14f7160da7a.png)

How did we reduce the inter-process communication overhead? Traditional IPC, such as unix domain sockets or local ports, copies the transmitted content between user space and kernel space. Forwarding a request to the data-plane process copies it between user space and kernel space, and when the data-plane process reads it out, it is copied from kernel space back to user space again; a round trip involves as many as four memory copies.

Our solution is **shared memory**. Shared memory is the highest-performing IPC mechanism on Linux, but it has no built-in notification: after we place a request into shared memory, the other process does not know a request has arrived. So we introduced an event-notification mechanism to let the data-plane process know, implemented over a unix domain socket; its effect is to cut the memory-copy overhead. We also put a queue into the shared memory, which allows IO to be harvested in batches and reduces system calls. The effect was very noticeable: in some risk-control scenarios of the Douyin gala campaign, performance improved by 24%.

With these optimizations done, the resistance to rolling the architecture out became much smaller.

## Summary

This talk introduced the traffic-governance capabilities that Service Mesh technology provides to keep microservices stable and secure, around three core points:

- **Stability**: facing instantaneous traffic peaks at the level of hundreds of millions of QPS, the traffic-governance techniques provided by the Service Mesh keep microservices stable.
- **Security**: the security policies provided by the Service Mesh ensure that traffic between services is secure and trustworthy.
- **Efficiency**: the gala campaign involved many microservices written in different programming languages; the Service Mesh naturally gives them a unified traffic-governance capability, raising developers' productivity.

## Q&A

**Q**: Why does IPC over shared memory reduce system calls?

**A**: After the client process places a request into shared memory, we need to notify the Server process to handle it; that wakeup operation is a system call each time. If the Server has not yet been woken, or is still busy processing requests when the next request arrives, there is no need to perform the same wakeup again. So in request-dense scenarios we do not need to wake the Server frequently, which reduces system calls.

**Q**: Is the in-house Service Mesh built entirely from scratch or based on community projects such as Istio? If self-developed, is it written in Go or Java? Is the data plane Envoy? Is traffic interception done with iptables?

**A**:

1. The data plane is developed on top of Envoy, in C++.
2. Traffic interception uses a uds or local port agreed with the microservice framework, not iptables.
3. The Ingress Proxy is deployed in the same runtime environment as the business process; releases and upgrades do not require restarting the container.

This article is reprinted from: ByteDance Tech Team (WeChat ID: toutiaotechblog)

Original link: [The Service Mesh Traffic Governance Technology Behind the Douyin Spring Festival Gala Campaign](https://mp.weixin.qq.com/s/dA9bTQm0llLyy-AbE3gO-w)
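The circuit-breaking policy described in the stability section above (per-downstream-node success-rate tracking from the client's perspective, with probe traffic that ramps up during recovery) can be sketched as follows. This is a minimal illustration under assumed thresholds, not the actual data-plane implementation; the class name, window size, and ramp step are all hypothetical.

```python
import time

class NodeBreaker:
    """Client-side circuit breaker for one downstream node (illustrative).

    Trips when the success rate over a sliding window of recent calls
    falls below a threshold; after a cooldown, admits a small, growing
    fraction of probe traffic to test whether the node has recovered.
    """

    def __init__(self, threshold=0.5, window=100, cooldown=5.0):
        self.threshold = threshold   # minimum acceptable success rate
        self.window = window         # number of recent calls tracked
        self.cooldown = cooldown     # seconds fully open before probing
        self.results = []            # recent call outcomes (True/False)
        self.open_since = None       # time the breaker opened, if open
        self.probe_ratio = 0.0       # fraction of traffic let through while probing

    def record(self, success):
        """Record the outcome of one call to this node."""
        self.results.append(success)
        if len(self.results) > self.window:
            self.results.pop(0)
        rate = sum(self.results) / len(self.results)
        if self.open_since is None and len(self.results) >= 10 and rate < self.threshold:
            self.open_since = time.monotonic()   # trip the breaker
            self.probe_ratio = 0.0

    def allow(self, rand):
        """Decide whether a request may be sent to this node.

        `rand` is a number in [0, 1) supplied by the caller so the
        decision stays deterministic and testable.
        """
        if self.open_since is None:
            return True
        if time.monotonic() - self.open_since < self.cooldown:
            return False                          # fully open: shed everything
        # Half-open: admit a gradually growing fraction of probe traffic.
        self.probe_ratio = min(1.0, self.probe_ratio + 0.1)
        if self.probe_ratio >= 1.0:
            self.open_since = None                # probes ramped up: close breaker
            self.results.clear()
            return True
        return rand < self.probe_ratio
```

In a real mesh the equivalent logic runs inside the data-plane proxy per upstream host; the sketch only shows the state machine (closed, open, half-open with a ramp) that the article's recovery description implies.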
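The dynamic overload protection loop described above, detecting overload from the queueing delay T2 (the gap between the proxy handing a request over and the server actually starting to process it) and then shedding low-priority requests at a ratio raised or lowered each detection cycle, can be sketched like this. The delay limit, step size, and priority labels are illustrative assumptions.

```python
class OverloadController:
    """Per-cycle controller for queueing-delay-based load shedding
    (illustrative sketch of the article's dynamic overload protection).
    """

    def __init__(self, delay_limit_ms=50.0, step=0.1):
        self.delay_limit_ms = delay_limit_ms  # queueing delay deemed "overloaded"
        self.step = step                      # how fast the drop ratio moves per cycle
        self.drop_ratio = 0.0                 # current fraction of low-priority drops

    def on_cycle(self, avg_queue_delay_ms):
        """Called once per detection cycle with the average T2 observed."""
        if avg_queue_delay_ms > self.delay_limit_ms:
            # Overloaded: shed more low-priority traffic.
            self.drop_ratio = min(1.0, self.drop_ratio + self.step)
        else:
            # Recovered or recovering: gradually restore traffic.
            self.drop_ratio = max(0.0, self.drop_ratio - self.step)

    def should_drop(self, priority, rand):
        """High-priority requests always pass; low-priority requests are
        dropped with probability `drop_ratio`. `rand` is in [0, 1)."""
        if priority == "high":
            return False
        return rand < self.drop_ratio
```

The key property is the feedback loop: the drop ratio converges toward whatever shedding level keeps T2 under the limit on this particular node, which is why nodes with identical specs but different real capacity each find their own operating point.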
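The consistent-hashing strategy mentioned in the load-balancing section (requests with the same key always reach the same downstream node, and node joins or departures disturb few existing mappings) is commonly implemented with a hash ring plus virtual nodes. A minimal sketch, with all names and the virtual-node count chosen for illustration:

```python
import bisect
import hashlib

class ConsistentHashRing:
    """Hash ring with virtual nodes: a key maps to the first node point
    clockwise from the key's hash, so adding or removing one node only
    remaps the keys that fell on that node's segments."""

    def __init__(self, nodes, vnodes=100):
        self.vnodes = vnodes
        self.ring = []                    # sorted list of (hash, node) points
        for node in nodes:
            self.add(node)

    def _hash(self, key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def add(self, node):
        # Place `vnodes` points for this node around the ring.
        for i in range(self.vnodes):
            bisect.insort(self.ring, (self._hash(f"{node}#{i}"), node))

    def remove(self, node):
        self.ring = [(h, n) for h, n in self.ring if n != node]

    def pick(self, key):
        """Return the node responsible for `key`."""
        h = self._hash(key)
        idx = bisect.bisect(self.ring, (h, chr(0x10FFFF)))
        if idx == len(self.ring):
            idx = 0                       # wrap around the ring
        return self.ring[idx][1]
```

For a cache-sensitive service this stability is the point: the same user or item key keeps landing on the same downstream instance, so its local cache stays hot, while scaling events only move a roughly proportional fraction of keys.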