袋鼠云: Overall Architecture and Key Technical Points of Building a Real-Time Computing Platform on Flink

{"type":"doc","content":[{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"平台建设的背景","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"传统离线数据开发时效性较差,无法满足快速迭代的互联网需求。伴随着以 Flink 为代表的实时技术的飞速发展,实时计算被越来越多的企业使用,但是在使用中,各种问题也随之而来。比如开发者使用门槛高、产出的业务数据质量没有保障、企业缺少统一平台管理难以维护等。在诸多不利因素的影响下,我们决定利用现有的 Flink 技术构建一套完整的实时计算平台。","attrs":{}}]},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"平台总体架构","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"从总体架构来看,实时计算平台大体可以分为三层:","attrs":{}}]},{"type":"bulletedlist","content":[{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"计算平台","attrs":{}}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"调度平台","attrs":{}}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"资源平台。","attrs":{}}]}]}],"attrs":{}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"每层承担着相应的功能,同时层与层之间又有交互,符合高内聚、低耦合的设计原子,架构图如下:","attrs":{}}]},{"type":"image","attrs":{"src":"https://static001.geekbang.org/infoq/a8/a8fb17681f36300bea72b31d37d3d1f8.png","alt":null,"title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"img","attrs":{}}]},{"type":"heading","attrs":{"align":null,"level":3},"content":[{"type":"text","text":"计算平台","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"直接面向开发人员使用,可以根据业务需求接入各种外部数据源,提供后续任务使用。数据源配置完成后,就可以在上面做基于 Flink 框架可视化的数据同步、SQL 化的数据计算的工作,并且可以对运行中的任务进行多维度的监控和告警。","attrs":{}}]},{"type":"heading","attrs":{"align":null,"level":3},"content":[{"type":"text","text":"调度平台","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"该层接收到平台传过来的任务内容和配置后,接下来就是比较核心的工作,也是下文中重点展开的内容。这里先做一个大体的介绍,根据任务类型的不同将使用不同的插件进行解析。","attrs":{}}]},{"type":"bulletedlist","content":[{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"数据同步任务:接收到上层传过来的 json 后,进入到 FlinkX 框架中,根据数据源端和写出目标端的不同生成对应的 DataStream,最后转换成 JobGraph。","attrs":{}}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"数据计算任务:接收到上层传过来的 SQL 后,进入到 FlinkStreamSQL 框架中,解析 SQL、注册成表、生成 transformation,最后转换成 JobGraph。","attrs":{}}]}]}],"attrs":{}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"调度平台将得到的 JobGraph 
### Resource Platform

This layer can currently connect to multiple resource clusters, and also to different resource types, such as YARN and Kubernetes.

## Data Synchronization and Data Computation

Once the scheduling platform receives a user's task, the series of conversions described below begins, eventually bringing the task to life. Let us look, from the underlying technical details, at how a real-time computing platform is built on Flink, and at how FlinkX and FlinkStreamSQL enable one-stop development.

### FlinkX

Data synchronization is the first and most fundamental step of data processing, so let us see how FlinkX extends Flink. Users only need to care about the JSON script describing the synchronization task and a few settings, without worrying about the details of calling Flink, and FlinkX supports the features shown in the figure below.

![FlinkX features](https://static001.geekbang.org/infoq/f9/f9b8e4aa8bed3314adae658b82b80b59.png)

First, the process involved in submitting a Flink task; the interaction flow is as follows:

![Flink submission flow](https://static001.geekbang.org/infoq/69/69513e7790c88efcd8914a7259d70155.png)

So how does FlinkX wrap and invoke these components on top of Flink, making Flink simpler to use as a data synchronization tool?

It mainly extends three parts, the Client, the JobManager, and the TaskManager, as shown below:

![FlinkX extension points](https://static001.geekbang.org/infoq/c5/c553bd861d3024e016e4e63baea25cdc.png)

#### Client Side

FlinkX customizes parts of the native Client in the [FlinkX-launcher](http://gitlab.prod.dtstack.cn/dt-insight-engine/flinkx/tree/master/flinkx-launcher) module; the main steps are as follows (a sketch of the submission path follows the list):

1. Parse the parameters, such as the parallelism, the savepoint path, the program's entry JAR (the kind of Flink job one usually writes), and the settings in flink-conf.yaml.
2. Build a PackagedProgram from the program's entry JAR, the externally supplied arguments, and the savepoint parameters.
3. Invoke, via reflection, the main method of the entry JAR specified in the PackagedProgram; inside main, the corresponding plugins are loaded according to the reader and writer the user configured.
4. Generate the JobGraph and add the resources it needs (the JARs Flink itself needs, the reader and writer JARs, the Flink configuration files, and so on) to the shipFiles of the YarnClusterDescriptor; the YarnClusterDescriptor can then interact with YARN to start the JobManager.
5. After the task is submitted successfully, the Client obtains the applicationId returned by YARN, which can subsequently be used to track the task's status.
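The submission path can be sketched with Flink's client APIs. This is a simplified sketch against Flink 1.10-era signatures (they shift between versions); the YARN descriptor wiring is elided, and the paths and parallelism are placeholder values, so it is not FlinkX's actual launcher code.

```java
import org.apache.flink.client.program.PackagedProgram;
import org.apache.flink.client.program.PackagedProgramUtils;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.runtime.jobgraph.JobGraph;
import org.apache.flink.runtime.jobgraph.SavepointRestoreSettings;
import org.apache.flink.yarn.YarnClusterDescriptor;

import java.io.File;
import java.util.Arrays;

public class LauncherSketch {

    public static void main(String[] args) throws Exception {
        Configuration flinkConf = new Configuration(); // loaded from flink-conf.yaml in practice

        // Step 2: build the PackagedProgram from the entry JAR, args, and savepoint settings.
        PackagedProgram program = PackagedProgram.newBuilder()
                .setJarFile(new File("/path/to/entry-job.jar"))
                .setArguments(args)
                .setSavepointRestoreSettings(
                        SavepointRestoreSettings.forPath("hdfs:///savepoints/sp-1"))
                .build();

        // Steps 3-4: run main() reflectively and turn the program into a JobGraph.
        JobGraph jobGraph = PackagedProgramUtils.createJobGraph(
                program, flinkConf, 2 /* parallelism */, false);

        // Step 4: ship plugin JARs and config files alongside the job.
        YarnClusterDescriptor descriptor = createDescriptor(flinkConf);
        descriptor.addShipFiles(Arrays.asList(
                new File("/plugins/mysql-reader.jar"),
                new File("/plugins/kafka-writer.jar")));

        // Step 5: deploying returns a client that exposes the YARN applicationId, e.g.
        // descriptor.deployJobCluster(clusterSpecification, jobGraph, true);
    }

    private static YarnClusterDescriptor createDescriptor(Configuration conf) {
        throw new UnsupportedOperationException("YARN client wiring elided in this sketch");
    }
}
```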
#### JobManager Side

After the Client finishes submitting, YARN starts the JobManager, which starts some internal services of its own and builds the ExecutionGraph.

In this process, FlinkX mainly does two things:

1. Each plugin overrides the createInputSplits method of the InputFormat interface to create splits. When the upstream data volume is large, or the data needs to be read with multiple degrees of parallelism, this method gives each parallel instance its own split. For example, when reading MySQL with a parallelism of two and a configured split field (such as an auto-increment primary key id), the first instance reads with `select * from table where id mod 2 = 0;` and the second with `select * from table where id mod 2 = 1;` (a sketch follows this list).
2. After the splits are created, getInputSplitAssigner hands them out, in order, to the parallel instances.
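As an illustration of step 1, here is a minimal sketch of mod-based split creation for a JDBC-style source. The query template and the split-key name are hypothetical configuration, not FlinkX's exact code.

```java
import org.apache.flink.core.io.GenericInputSplit;
import org.apache.flink.core.io.InputSplit;

// One GenericInputSplit per parallel instance; each instance later derives
// its own query from the split it is assigned.
public class ModSplitSketch {

    public InputSplit[] createInputSplits(int minNumSplits) {
        InputSplit[] splits = new InputSplit[minNumSplits];
        for (int i = 0; i < minNumSplits; i++) {
            splits[i] = new GenericInputSplit(i, minNumSplits);
        }
        return splits;
    }

    // Hypothetical query template over the configured split field, e.g. "id":
    public String buildQuery(GenericInputSplit split, String table, String splitKey) {
        return String.format("select * from %s where %s mod %d = %d",
                table, splitKey, split.getTotalNumberOfSplits(), split.getSplitNumber());
    }
}
```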
#### TaskManager Side

After the TaskManager receives the tasks scheduled to it by the JobManager, each task begins its own lifecycle, which mainly consists of the following important phases:

1. **initialize-operator-states():** iterates over all the operators of the task and calls the initializeState method of those that implement the CheckpointedFunction interface; in FlinkX these are DtInputFormatSourceFunction and DtOutputFormatSinkFunction. This method is called when the task first starts, and its job is to restore state: when a task fails, the read position can be recovered from the most recent checkpoint so that the task can resume where it left off, as shown below (a code sketch follows this list):

![State restore](https://static001.geekbang.org/infoq/50/50f1cc739f1651f11f706532c75bc4ce.png)

2. **open-operators():** calls the open method of every StreamOperator in the OperatorChain, ending with the open method of BaseRichInputFormat, which mainly does the following:
   - initializes the accumulators that record the number of rows and bytes read and written;
   - initializes the custom metrics;
   - starts the rate limiter;
   - initializes state;
   - opens the connection to the data source (implemented by each plugin, since it depends on the source).
3. **run():** calls the nextRecord method of the InputFormat and the writeRecord method of the OutputFormat to process the data.
4. **close-operators():** performs shutdown work, such as calling the close methods of the InputFormat and the OutputFormat, and does some cleanup.
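Here is a minimal sketch of the state restore described in phase 1, using Flink's CheckpointedFunction operator-state API. The state name and the offset semantics are illustrative; they are not the exact FlinkX implementation.

```java
import org.apache.flink.api.common.state.ListState;
import org.apache.flink.api.common.state.ListStateDescriptor;
import org.apache.flink.runtime.state.FunctionInitializationContext;
import org.apache.flink.runtime.state.FunctionSnapshotContext;
import org.apache.flink.streaming.api.checkpoint.CheckpointedFunction;

// A source that checkpoints its read position so a restarted task can resume.
public class OffsetRestoringSource implements CheckpointedFunction {

    private transient ListState<Long> offsetState; // last checkpointed read position
    private long currentOffset;

    @Override
    public void initializeState(FunctionInitializationContext ctx) throws Exception {
        offsetState = ctx.getOperatorStateStore().getListState(
                new ListStateDescriptor<>("read-offset", Long.class));
        if (ctx.isRestored()) {
            for (Long offset : offsetState.get()) {
                currentOffset = offset; // resume from the most recent checkpoint
            }
        }
    }

    @Override
    public void snapshotState(FunctionSnapshotContext ctx) throws Exception {
        offsetState.clear();
        offsetState.add(currentOffset); // record how far we have read
    }
}
```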
That is the overall lifecycle of a StreamTask in the TaskManager. Besides the way FlinkX calls the Flink interfaces, as described above, FlinkX also has the following features.

- **Custom accumulators:** accumulators collect or aggregate information, in a distributed way, from user functions and operations. Each parallel instance creates and updates its own Accumulator object; the results of the different parallel instances are merged by the system when the job ends, and the results can also be pushed to Prometheus, as shown:

![Accumulators to Prometheus](https://static001.geekbang.org/infoq/a4/a41c3f733f65af07cfbdb19d0c93bbab.png)

- **Support for both offline and real-time synchronization:** FlinkX is a framework that supports both offline and real-time synchronization; taking a MySQL source as an example, this works as follows. For an offline task, the run method of DtInputFormatSourceFunction calls the InputFormat's open method, reads the data into a resultSet, and then calls the reachedEnd method to decide whether the resultSet has been fully consumed; if it has, the subsequent close flow follows. For a real-time task, open is the same as in the offline case, but reachedEnd checks whether this is a polling task; if so, it enters the interval-polling branch, takes the largest increment-field value read in the previous poll as the starting position of the current poll, and polls again. The polling flow is shown below, followed by a code sketch:

![Interval polling flow](https://static001.geekbang.org/infoq/c8/c8e6eec917db8d341ad710647ecdae1a.png)
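As an illustration of interval polling, here is a hedged sketch over plain JDBC. The table name, increment column, and polling interval are hypothetical, and FlinkX's real loop lives inside its InputFormat rather than in a standalone method.

```java
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;

public class PollingSketch {

    // Each round starts from the largest increment-field value ("id") seen so far.
    public void poll(Connection conn, long pollIntervalMs) throws Exception {
        long lastMaxId = 0L; // in a real task this is restored from checkpointed state
        while (true) {
            String sql = "select * from user_table where id > " + lastMaxId + " order by id";
            try (Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery(sql)) {
                while (rs.next()) {
                    lastMaxId = rs.getLong("id"); // remember the max increment value
                    emit(rs);                     // hand the row downstream
                }
            }
            Thread.sleep(pollIntervalMs); // wait before the next poll
        }
    }

    private void emit(ResultSet rs) { /* conversion to Flink rows elided */ }
}
```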
- **Dirty-data management and error control:** records the rows that fail to be written to the sink, classifies the cause of each failure, and writes them to a configured dirty-data table. There are currently four error categories: type-conversion errors, null pointers, primary-key conflicts, and other errors. Error control is built on Flink's accumulators: the number of failed records is tracked at runtime, and a separate thread periodically checks whether it has exceeded the configured maximum; if it has, an exception is thrown and the task fails. This makes it possible to apply different error-control policies to tasks with different data-accuracy requirements. The control flow is shown below:

![Error control flow](https://static001.geekbang.org/infoq/57/5745c891644a85225a593e1d18ecbf77.png)

- **Rate limiter:** for tasks whose upstream produces data too quickly, the downstream database comes under heavy pressure, so some rate control is needed on the source side. FlinkX uses token-bucket rate limiting to control the rate. As shown below, when the rate at which the source produces data reaches a threshold, no new data is read; the rate limiter is initialized in the open phase of BaseRichInputFormat described earlier.

![Token-bucket rate limiting](https://static001.geekbang.org/infoq/03/03e6ff3252ca3cd3496d5e60c3abf568.png)

That covers the basic principles of FlinkX data synchronization. In business scenarios, however, data synchronization is only the first step: current versions of FlinkX provide only the E and L of ETL, with no ability to transform or compute on the data, so the resulting data stream has to flow into the downstream FlinkStreamSQL.

### FlinkStreamSQL

Built on Flink, FlinkStreamSQL extends its real-time SQL, mainly adding joins between streams and dimension tables, while supporting all the syntax of native Flink SQL. The FlinkStreamSQL source side can currently only connect to Kafka, so the default upstream data source is Kafka.

Next, let us see how FlinkStreamSQL manages, on top of Flink, to let users focus solely on the business SQL while it calls the Flink API to hide the underlying details. The overall flow is essentially similar to the FlinkX flow described above; the difference is on the Client side, which mainly comprises three parts: SQL parsing, table registration, and SQL execution.

![FlinkStreamSQL client flow](https://static001.geekbang.org/infoq/40/40e0e1ffbfb609bce56f8518aee4e24d.png)

#### SQL Parsing

This step parses the four kinds of SQL statements a user writes (create function, create table, create view, insert into) and packs them into a structured SQLTree, which contains the set of user-defined functions, the set of external source tables, the set of view statements, and the set of insert statements.

#### Table Registration

With the SQLTree obtained above, the external data sources corresponding to the create table statements in the SQL can be registered as tables in the tableEnv, and the user-defined UDFs can be registered into the tableEnv as well. A minimal sketch of this step follows.
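This sketch of the registration step assumes a DataStream already obtained from a Kafka connector. The API names follow the Flink 1.10-era Table API (registerFunction was later superseded by createTemporarySystemFunction, and the bridge package moved in 1.11), and the UDF and table names are hypothetical.

```java
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.java.StreamTableEnvironment;
import org.apache.flink.table.functions.ScalarFunction;
import org.apache.flink.types.Row;

public class RegistrationSketch {

    // Hypothetical UDF registered under the name used in the business SQL.
    public static class ToUpper extends ScalarFunction {
        public String eval(String s) {
            return s == null ? null : s.toUpperCase();
        }
    }

    public static void register(StreamExecutionEnvironment env, DataStream<Row> kafkaStream) {
        StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env);

        // Register the external source as a table that later SQL can reference.
        tableEnv.createTemporaryView("user_source", kafkaStream);

        // Register the user-defined function.
        tableEnv.registerFunction("to_upper", new ToUpper());
    }
}
```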
#### SQL Execution

Once the data sources are registered as tables, the subsequent insert into statements can be executed. Execution falls into two cases:

- If the SQL does not join a dimension table, it is executed directly.
- If the SQL joins a dimension table: early Flink versions did not support dimension-table join syntax, so we extended it here; since FlinkStreamSQL v1.11 we have stayed aligned with the community, which supports the dimension-table join syntax. Depending on the type of the dimension table, different join strategies are used (a sketch of the async case follows this list):
  - Full dimension table: take the upstream data as input and use a RichFlatMapFunction as the lookup operator, loading the whole dimension table into memory at initialization; each input record is then stitched together with the matching dimension data to produce the widened row, and a new wide table is registered for the rest of the SQL to use.
  - Async dimension table: take the upstream data as input and use a RichAsyncFunction as the lookup operator, holding the query results in an LRU cache; each input record is then stitched together with the matching dimension data to produce the widened row, and a new wide table is registered for the rest of the SQL to use.
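Here is a hedged sketch of the async dimension-table lookup with an LRU cache, using Flink's RichAsyncFunction and Guava's cache. The string-typed rows and the queryDb stand-in are hypothetical simplifications.

```java
import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.async.ResultFuture;
import org.apache.flink.streaming.api.functions.async.RichAsyncFunction;

import java.util.Collections;
import java.util.concurrent.CompletableFuture;

public class AsyncDimJoinSketch extends RichAsyncFunction<String, String> {

    private transient Cache<String, String> lruCache;

    @Override
    public void open(Configuration parameters) {
        lruCache = CacheBuilder.newBuilder()
                .maximumSize(10_000) // evicts least-recently-used entries
                .build();
    }

    @Override
    public void asyncInvoke(String key, ResultFuture<String> resultFuture) {
        String cached = lruCache.getIfPresent(key);
        if (cached != null) {
            resultFuture.complete(Collections.singleton(key + "," + cached)); // widened row
            return;
        }
        CompletableFuture.supplyAsync(() -> queryDb(key)).thenAccept(dim -> {
            lruCache.put(key, dim);
            resultFuture.complete(Collections.singleton(key + "," + dim));
        });
    }

    private String queryDb(String key) {
        return "dim-value"; // stand-in for the real dimension-table query
    }
}
```

The widened stream is then converted back into a table and re-registered, so the rest of the SQL can treat it as an ordinary wide table.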
That covers how FlinkX and FlinkStreamSQL differ on the Client side. Since the FlinkStreamSQL source side only has Kafka, and it uses the community's native Kafka connector, there is no data-sharding logic on the JobManager side either, and the TaskManager logic is essentially the same as FlinkX's, so it is not described again here.

## Task Operations and Maintenance

Once the business logic is developed with FlinkX and FlinkStreamSQL, the next phase is operations and maintenance. In this phase we mainly monitor tasks along dimensions such as runtime information, data in/out metrics, data latency, backpressure, and data skew.

**Task runtime information**

We know that FlinkStreamSQL is a wrapper around Flink SQL, so submitting a task ultimately still goes through Flink SQL's parsing, validation, logical planning, logical-plan optimization, and physical planning before the task runs, producing the familiar DAG diagram:

![Coarse DAG](https://static001.geekbang.org/infoq/6b/6b2fcda230652fc3a4cf58f06a339db5.png)

However, because Flink SQL optimizes the task so heavily, we can only see a coarse DAG like the one above, with no intuitive view of what happens inside each sub-DAG. So we modified the way the DAG is generated, making it possible to see what happens inside every operator and every parallel instance of each sub-DAG. With this detailed DAG, the other monitoring dimensions, such as data input/output, latency, backpressure, and data skew, can be displayed intuitively, and problems can be pinpointed precisely, as in the backpressure view below:

![Backpressure view](https://static001.geekbang.org/infoq/33/335d156cb3a3a272dd1d919356bef690.png)

With the structure above understood, here is how it is implemented. When the Client submits a task, it generates the JobGraph, whose taskVertices collection already encapsulates the complete information in the diagram above; we turn taskVertices into JSON and combine it with LatencyMarker data and the relevant metrics to render the diagram on the front end and raise the corresponding alerts.

Besides the DAG, there are also custom metrics, data-latency collection, and more, which are not covered in detail here; interested readers can consult the FlinkStreamSQL project.

**Usage example:**

With all of the above introduced, here is a practical case on the platform. A complete example: use FlinkX to sync newly added user data from MySQL to Kafka in real time, then have FlinkStreamSQL consume Kafka and compute the number of new users per minute in real time, with the results written to a downstream MySQL database for business use.

**Real-time synchronization of newly added MySQL data**

![Sync job configuration](https://static001.geekbang.org/infoq/20/20a142a6b7fe400ef568cee33ba96b57.png)

**Real-time computation of new users per minute**

![SQL job](https://static001.geekbang.org/infoq/5d/5d1d53c17b1e3814b8a43500e8497f3f.png)

**Runtime information**

The overall DAG, which directly visualizes the metrics mentioned above:

![Overall DAG](https://static001.geekbang.org/infoq/b2/b26e5bdf5f89fecbf73327156795b570.png)

The parsed, detailed DAG, showing the metrics inside each sub-DAG:

![Detailed DAG 1](https://static001.geekbang.org/infoq/d6/d6a92cab1ce9e4bc017865f3eb15fa9b.png)

![Detailed DAG 2](https://static001.geekbang.org/infoq/ec/ecd656cb65a69616f6ae5b3078435061.png)

That concludes the overall architecture and some key technical points of Flink in the 袋鼠云 real-time computing platform; corrections and suggestions are welcome.