技术实践 | 如何基于 Flink 实现通用的聚合指标计算框架

{"type":"doc","content":[{"type":"heading","attrs":{"align":null,"level":1},"content":[{"type":"text","text":"1 引言","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"网易云信作为一个 PaaS 服务,需要对线上业务进行实时监控,实时感知服务的“心跳”、“脉搏”、“血压”等健康状况。通过采集服务拿到 SDK、服务器等端的心跳埋点日志,是一个非常庞大且杂乱无序的数据集,而如何才能有效利用这些数据?服务监控平台要做的事情就是对海量数据进行实时分析,聚合出表征服务的“心跳”、“脉搏”、“血压”的核心指标,并将其直观的展示给相关同学。这其中核心的能力便是 :","attrs":{}},{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"实时分析和实时聚合","attrs":{}},{"type":"text","text":"。","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"在之前的《","attrs":{}},{"type":"link","attrs":{"href":"https://mp.weixin.qq.com/s/HG0ajeNWkqi8lwjoCmrLLQ","title":"","type":null},"content":[{"type":"text","text":"网易云信服务监控平台实践","attrs":{}}]},{"type":"text","text":"》一文中,我们围绕数据采集、数据处理、监控告警、数据应用 4 个环节,介绍了网易云信服务监控平台的整体框架。","attrs":{}},{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"本文是对网易云信在聚合指标计算逻辑上的进一步详述。","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"基于明细数据集进行实时聚合,生产一个聚合指标,业界常用的实现方式是 Spark Streaming、Flink SQL / Stream API。不论是何种方式,我们都需要通过写代码来指定数据来源、数据清洗逻辑、聚合维度、聚合窗口大小、聚合算子等。如此繁杂的逻辑和代码,无论是开发、测试,还是后续任务的维护,都需要投入大量的人力/物力成本。而我们程序员要做的便是化繁为简、实现大巧不工。","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"本文将阐述网易云信是如何基于 Flink 的 Stream API,实现一套通用的聚合指标计算框架。","attrs":{}}]},{"type":"heading","attrs":{"align":null,"level":1},"content":[{"type":"text","text":"2 整体架构","attrs":{}}]},{"type":"image","attrs":{"src":"https://static001.geekbang.org/infoq/28/286506e060de26c8ac2ef4d4dfa84910.png","alt":null,"title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"如上图所示,是我们基于 Flink 自研的聚合指标完整加工链路,其中涉及到的模块包括:","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"bulletedlist","content":[{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"source","attrs":{}},{"type":"text","text":":定期加载聚合规则,并根据聚合规则按需创建 Kafka 的 Consumer,并持续消费数据。","attrs":{}}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"process","attrs":{}},{"type":"text","text":":包括分组逻辑、窗口逻辑、聚合逻辑、环比计算逻辑等。从图中可以看到,我们在聚合阶段分成了两个,这样做的目的是什么?其中的好处是什么呢?做过分布式和并发计算的,都会遇到一个共同的敌人:","attrs":{}},{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"数据倾斜。","attrs":{}},{"type":"text","text":" 在我们 PaaS 服务中头部客户会更加明显,所以倾斜非常严重,分成两个阶段进行聚合的奥妙下文中会详细说明。","attrs":{}}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"sink","attrs":{}},{"type":"text","text":":是数据输出层,目前默认输出到 Kafka 和 InfluxDB,前者用于驱动后续计算(如告警通知等),后者用于数据展示以及查询服务等。","attrs":{}}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"reporter","attrs":{}},{"type":"text","text":":全链路统计各个环节的运行状况,如输入/输出 QPS、计算耗时、消费堆积、迟到数据量等。","attrs":{}}]}]}],"attrs":{}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"下文将详细介绍这几个模块的设计和实现思路。","attrs":{}}]},{"type":"heading","attrs":{"align":null,"level":1},"content":[{"type":"text","text":"3 source","attrs":{}}]},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"规则配置","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"为了便于聚合指标的生产和维护,我们将指标计算过程中涉及到的关键参数进行了抽象提炼,提供了可视化配置页面,如下图所示。下文会结合具体场景介绍各个参数的用途。","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https://static001.geekbang.org/infoq/94/947dfe2e0ab1c2de847bfb484703582b.png","alt":null,"title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"规则加载","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"在聚合任务运行过程中,我们会定期加载配置。如果检测到有","attrs":{}},{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"新增","attrs":{}},{"type":"text","text":"的 Topic,我们会创建 kafka-consumer 线程,接收上游实时数据流。同理,对于已经","attrs":{}},{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"失效","attrs":{}},{"type":"text","text":"的配置,我们会关闭消费线程,并清理相关的 reporter。","attrs":{}}]},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"数据消费","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"对于数据源相同的聚合指标,我们共用一个 kafka-consumer,拉取到记录并解析后,对每个聚合指标分别调用 collect() 进行数据分发。如果指标的数据筛选规则(配置项","attrs":{}},{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"⑤","attrs":{}},{"type":"text","text":")非空,在数据分发前需要进行数据过滤,不满足条件的数据直接丢弃。","attrs":{}}]},{"type":"heading","attrs":{"align":null,"level":1},"content":[{"type":"text","text":"4 process","attrs":{}}]},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"整体计算流程","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"基于 Flink 的 Stream API 实现聚合计算的核心代码如下所示:","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"codeblock","attrs":{"lang":"text"},"content":[{"type":"text","text":"SingleOutputStreamOperator aggResult = src\n .assignTimestampsAndWatermarks(new MetricWatermark())\n .keyBy(new MetricKeyBy())\n .window(new MetricTimeWindow())\n .aggregate(new MetricAggFuction());\n","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"bulletedlist","content":[{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"MetricWatermark():根据指定的时间字段(配置项⑧)获取输入数据的 timestamp,并驱动计算流的 watermark 往前推进。","attrs":{}}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"MetricKeyBy():指定聚合维度,类似于 MySQL 中 groupby,根据分组字段(配置项⑥),从数据中获取聚合维度的取值,拼接成分组 key。","attrs":{}}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"MetricTimeWindow():配置项⑧中指定了聚合计算的窗口大小。如果配置了定时输出,我们就创建滑动窗口,否则就创建滚动窗口。","attrs":{}}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"MetricAggFuction():实现配置项②指定的各种算子的计算,下文将详细介绍各个算子的实现原理。","attrs":{}}]}]}],"attrs":{}},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"二次聚合","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"对于大数据量的聚合计算,","attrs":{}},{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"数据倾斜","attrs":{}},{"type":"text","text":"是不得不考虑的问题,数据倾斜意味着规则中配置的分组字段(配置项⑥)指定的聚合 key 存在热点。我们的计算框架在设计之初就考虑了如何解决数据倾斜问题,就是将聚合过程拆分成2阶段:","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"bulletedlist","content":[{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"第1阶段:将数据随机打散,进行预聚合。","attrs":{}}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"第2阶段:将第1阶段的预聚合结果作为输入,进行最终的聚合。","attrs":{}}]}]}],"attrs":{}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"具体实现:判断并发度参数 parallelism(配置项⑦) 是否大于1,如果 parallelism 大于1,生成一个 [0, parallelism) 之间的随机数作为 randomKey,在第1阶段聚合 keyBy() 中,将依据分组字段(配置项⑥)获取的 key 与 randomKey 拼接,生成最终的聚合 key,从而实现了数据随机打散。","attrs":{}}]},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"聚合算子","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"作为一个平台型的产品,我们提供了如下常见的聚合算子。由于采用了二次聚合逻辑,各个算子在第1阶段和第2阶段采用了相应的计算策略。","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"对于计算结果受全部数据影响的算子,如 count-distinct(去重计数),常规思路是利用 set 的去重特性,将所有统计数据放在一个 Set 中,最终在聚合函数的 getResult 中输出 Set 的 size。如果统计数据量非常大,这个 Set 对象就会非常大,对这个 Set 的 I/O 操作所消耗的时间将不能接受。","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"对于类 MapReduce 的大数据计算框架,性能的瓶颈往往出现在 shuffle 阶段大对象的 I/O 上,因为数据需要序列化 / 传输 / 反序列化,Flink 也不例外。类似的算子还有 median 和 tp95。","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"为此,需要对这些算子做专门的优化,优化的思路就是尽量减少计算过程中使用的数据对象的大小,其中:","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"bulletedlist","content":[{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"median/tp90/tp95:参考了 hive percentile_approx 的近似算法,该算法通过 NumericHistogram(一种非等距直方图)记录数据分布,然后通过插值的方式得到相应的 tp 值(median 是 tp50)。","attrs":{}}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"count-distinct:采用 RoaringBitmap 算法,通过压缩位图的方式标记输入样本,最终得到","attrs":{}},{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"精确","attrs":{}},{"type":"text","text":"的去重计数结果。","attrs":{}}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"count-distinct(近似) :采用 HyperLoglog 算法,通过基数计数的方式,得到","attrs":{}},{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"近似","attrs":{}},{"type":"text","text":"的去重计数结果。该算法适用于大数据集的去重计数。","attrs":{}}]}]}],"attrs":{}},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"后处理","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"后处理模块,是对第2阶段聚合计算输出数据进行再加工,主要有2个功能:","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"bulletedlist","content":[{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"复合指标计算","attrs":{}},{"type":"text","text":":对原始统计指标进行组合计算,得到新的组合指标。例如,要统计登录成功率,我们可以先分别统计出分母(登录次数)和分子(登录成功的次数),然后将分子除以分母,从而得到一个新的组合指标。配置项③就是用来配置组合指标的计算规则。","attrs":{}}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"相对指标计算","attrs":{}},{"type":"text","text":":告警规则中经常要判断某个指标的相对变化情况(同比/环比)。我们利用 Flink 的state,能够方便的计算出同比/环比指标,配置项④就是用来配置相对指标规则。","attrs":{}}]}]}],"attrs":{}},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"异常数据的处理","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"这里所说的异常数据,分为两类:迟到的数据和提前到的数据。","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"bulletedlist","content":[{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"迟到数据","attrs":{}},{"type":"text","text":":","attrs":{}}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":1,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"对于严重迟到的数据(大于聚合窗口的 allowedLateness),通过 sideOutputLateData 进行收集,并通过 reporter 统计上报,从而能够在监控页面进行可视化监控。","attrs":{}}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":1,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"对于轻微迟到的数据(小于聚合窗口的 allowedLateness),会触发窗口的重计算。如果每来一条迟到数据就触发一次第 1 阶段窗口的重计算,重计算结果传导到第 2 阶段聚合计算,就会导致部分数据的重复统计。为了解决重复统计的问题,我们在第 1 阶段聚合 Trigger 中进行了特殊处理:窗口触发采用 FIRE_AND_PURGE(计算并清理),及时清理已经参与过计算的数据。","attrs":{}}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"提前到的数据","attrs":{}},{"type":"text","text":":这部分数据往往是数据上报端的时钟不准导致。在计算这些数据的 timestamp 时要人为干预,避免影响整个计算流的 watermark。","attrs":{}}]}]}],"attrs":{}},{"type":"heading","attrs":{"align":null,"level":1},"content":[{"type":"text","text":"5 sink","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"聚合计算得到的指标,默认输出到 Kafka 和时序数据库 InfluxDB。","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"bulletedlist","content":[{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"kafka-sink:将指标标识(配置项①)作为 Kafka 的topic,将聚合结果发送出去,下游接收到该数据流后可以进一步处理加工,如告警事件的生产等。","attrs":{}}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"InfluxDB-sink:将指标标识(配置项①)作为时序数据库的表名,将聚合结果持久化下来,用于 API 的数据查询、以及可视化报表展示等。","attrs":{}}]}]}],"attrs":{}},{"type":"heading","attrs":{"align":null,"level":1},"content":[{"type":"text","text":"6 reporter","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"为了实时监控各个数据源和聚合指标的运行情况,我们通过 InfluxDB+Grafana 组合,实现了聚合计算全链路监控:如各环节的输入/输出 QPS、计算耗时、消费堆积、迟到数据量等。","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https://static001.geekbang.org/infoq/80/807315ddc91a3ae4469325e0c7c19db2.png","alt":null,"title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"heading","attrs":{"align":null,"level":1},"content":[{"type":"text","text":"7 结语","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"目前,通过该通用聚合框架,承载了网易云信 100+ 个不同维度的指标计算,带来的收益也是比较可观的:","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"bulletedlist","content":[{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"提效:采用了页面配置化方式实现聚合指标的生产,开发周期从天级缩短到分钟级。没有数据开发经验的同学也能够自己动手完成指标的配置。","attrs":{}}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"维护简单,资源利用率高:100+ 个指标只需维护 1 个 flink-job,资源消耗也从 300+ 个 CU 减少到 40CU。","attrs":{}}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"运行过程透明:借助于全链路监控,哪个计算环节有瓶颈,哪个数据源有问题,一目了然。","attrs":{}}]}]}],"attrs":{}},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"作者介绍","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"圣少友,网易云信数据平台资深开发工程师,从事数据平台相关工作,负责服务监控平台、数据应用平台、质量服务平台的设计开发工作。","attrs":{}}]}]}
發表評論
所有評論
還沒有人評論,想成為第一個評論的人麼? 請在上方評論欄輸入並且點擊發布.
相關文章