Flink + Iceberg Integration in Practice at Tongcheng-Elong

{"type":"doc","content":[{"type":"blockquote","content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"过去几年,数据仓库和数据湖方案在快速演进和弥补自身缺陷的同时,二者之间的边界也逐渐淡化。云原生的新一代数据架构不再遵循数据湖或数据仓库的单一经典架构,而是在一定程度上结合二者的优势重新构建。在云厂商和开源技术方案的共同推动之下,2021 年我们将会看到更多“湖仓一体”的实际落地案例。InfoQ希望通过选题的方式对数据湖和数仓融合架构在不同企业的落地情况、实践过程、改进优化方案等内容进行呈现。本文将分享同程艺龙将Flink与Iceberg深度集成的落地经验和思考。"}]}]},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"背景及痛点"}]},{"type":"heading","attrs":{"align":null,"level":3},"content":[{"type":"text","text":"业务背景"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"同程艺龙是一个提供机票、住宿、交通等服务的在线旅游服务平台,目前我所在的部门属于公司的研发部门,主要职责是为公司内其他业务部门提供一些基础服务,我们的大数据系统主要承接的业务是部门内的一些大数据相关的数据统计、分析工作等。数据来源有网关日志数据、服务器监控数据、K8s 容器的相关日志数据,App 的打点日志, MySQL 的 binlog 日志等。我们主要的大数据任务是基于上述日志构建实时报表,提供基于 Presto 的报表展示和即时查询服务,同时也会基于 Flink 开发一些实时、批处理任务,为业务方提供准确及时的数据支撑。"}]},{"type":"heading","attrs":{"align":null,"level":3},"content":[{"type":"text","text":"原架构方案"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"由于我们所有的原始数据都是存储在 Kafka 的,所以原来的技术架构就是首先是 Flink 任务消费 Kafka 的数据,经过 Flink SQL 或者 Flink jar 的各种处理之后实时写入 Hive,其中绝大部分任务都是 Flink SQL 任务,因为我认为 SQL 开发相对代码要简单的多,并且维护方便、好理解,所以能用 SQL 写的都尽量用 SQL 来写。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"提交 Flink 的平台使用的是 Zeppelin,其中提交 Flink SQL 任务是 Zeppelin 自带的功能,提交 jar 包任务是我自己基于 Application 模式开发的 Zeppelin 插件。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"对于落地到 Hive 的数据,使用开源的报表系统 metabase (底层使用 Presto) 提供实时报表展示、定时发送邮件报表,以及自定义 SQL 查询服务。由于业务对数据的实时性要求比较高,希望数据能尽快的展示出来,所以我们很多的 Flink 流式任务的 checkpoint 设置为1分钟,数据格式采用的是 orc 格式。"}]},{"type":"heading","attrs":{"align":null,"level":3},"content":[{"type":"text","text":"痛点"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"由于采用的是列式存储格式 ORC,无法像行式存储格式那样进行追加操作,所以不可避免的产生了一个大数据领域非常常见且非常棘手的问题,即 HDFS 小文件问题。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"开始的时候我们的小文件解决方案是自己写的一个小文件压缩工具,定期去合并,我们的 Hive 分区一般都是天级别的,所以这个工具的原理就是每天凌晨启动一个定时任务去压缩昨天的数据,首先把昨天的数据写入一个临时文件夹,压缩完,和原来的数据进行记录数的比对检验,数据条数一致之后,用压缩后的数据覆盖原来的数据,但是由于无法保证事务,所以出现了很多问题:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"bulletedlist","content":[{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"压缩的同时由于延迟数据的到来导致昨天的 Hive 
### Pain points

Because we use ORC, a columnar format, we cannot append to files the way row-oriented formats can, so we inevitably ran into one of the most common and thorny problems in the big data world: the HDFS small-file problem.

At first our solution was a small-file compaction tool we wrote ourselves that merges files periodically. Our Hive partitions are generally daily, so the tool works like this: every day in the early morning a scheduled job compacts the previous day's data. It first writes yesterday's data into a temporary directory; after compaction it compares record counts against the original data, and once the counts match it overwrites the original data with the compacted data. But because there is no transactional guarantee, this caused many problems:

- While compaction is running, late-arriving data may still be written into yesterday's Hive partition, so the count check fails and the small-file merge fails.
- Replacing the old data is not transactional. If new data is written into the old partition during the replacement, it is overwritten by the compacted data and lost.
- Without transaction support we cannot compact the current partition in real time; we can only compact the previous partition. The latest partition still suffers from small files, so query performance on the freshest data cannot be improved.

## Adopting Flink + Iceberg

### Evaluating Iceberg

Given the HDFS small-file and slow-query problems above, and considering our situation, I evaluated the data lake technologies currently on the market: Delta, Apache Iceberg, and Apache Hudi. Taking into account the features each framework supports today and its community roadmap, we finally chose Iceberg, for the following reasons.

#### Iceberg is deeply integrated with Flink

As mentioned earlier, the vast majority of our jobs are Flink jobs, both batch and streaming. Of the three data lake frameworks, Iceberg currently has the most complete Flink integration. If we replace Hive with Iceberg, the migration cost is very small and almost invisible to users.

For example, our original SQL looked like this:

```sql
INSERT INTO hive_catalog.db.hive_table SELECT * FROM kafka_table
```

After migrating to Iceberg, only the catalog needs to change:

```sql
INSERT INTO iceberg_catalog.db.iceberg_table SELECT * FROM kafka_table
```

Presto queries are similar: only the catalog needs to be changed.
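For the catalog swap above to work, the Iceberg catalog first has to be registered in the Flink SQL environment. Below is a minimal sketch assuming an Iceberg catalog backed by the Hive Metastore; the metastore URI and warehouse path are placeholders.

```sql
-- Register an Iceberg catalog backed by the Hive Metastore (URI and path are placeholders).
CREATE CATALOG iceberg_catalog WITH (
  'type' = 'iceberg',
  'catalog-type' = 'hive',
  'uri' = 'thrift://hive-metastore:9083',
  'warehouse' = 'hdfs://nameservice1/warehouse/iceberg',
  'property-version' = '1'
);

-- After that, the same INSERT works with only the catalog name changed.
INSERT INTO iceberg_catalog.db.iceberg_table SELECT * FROM kafka_table;
```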
#### Iceberg's design makes queries faster

![Iceberg table format architecture](https://static001.geekbang.org/infoq/43/437587fd6f530d3e1a267b018e81a2f6.png)

In Iceberg's design, manifest files store partition information and per-data-file statistics (max/min values and so on). When querying a large partition, the engine can locate the required data directly instead of listing the whole HDFS directory the way Hive does; the complexity drops from O(n) to O(1), which makes some large queries noticeably faster. In a talk by Iceberg PMC Chair Ryan Blue, a job whose filter could be matched against these statistics dropped from 61.5 hours to 22 minutes.

#### Writing CDC data into Iceberg with Flink SQL

Flink CDC can read the MySQL binlog directly. Compared with the old pipeline of using Canal to read the binlog, writing it into Kafka, and then consuming the Kafka data, there are two fewer components to maintain, the link is shorter, and both maintenance cost and the chance of errors go down. It also allows a clean hand-off between the full (snapshot) load and the incremental load, so using Flink SQL to import MySQL binlog data into Iceberg (MySQL -> Iceberg) will be a very worthwhile thing to do.
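As a sketch of what such a MySQL -> Iceberg job could look like in Flink SQL, the snippet below assumes the flink-cdc-connectors MySQL source; all connection parameters and table names are placeholders, and (as noted in the follow-up section) the community version still needs more work before this runs end to end.

```sql
-- Hypothetical MySQL CDC source (flink-cdc-connectors); connection info is a placeholder.
CREATE TABLE mysql_orders (
  id      BIGINT,
  user_id BIGINT,
  amount  DECIMAL(10, 2),
  PRIMARY KEY (id) NOT ENFORCED
) WITH (
  'connector' = 'mysql-cdc',
  'hostname' = 'mysql-host',
  'port' = '3306',
  'username' = 'cdc_user',
  'password' = '******',
  'database-name' = 'shop',
  'table-name' = 'orders'
);

-- Write the change stream into an Iceberg table registered in iceberg_catalog (placeholder name).
INSERT INTO iceberg_catalog.db.orders_iceberg SELECT * FROM mysql_orders;
```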
As for our original small-file compaction requirement: although Iceberg cannot yet compact files automatically, it provides a batch job for it, which already meets our needs.

### Migrating Hive tables to Iceberg

#### Preparation

All of our data was stored in Hive tables. After validating Iceberg we decided to migrate the Hive data to Iceberg, so I wrote a tool that reuses the existing Hive data, creates a new Iceberg table, and builds the corresponding metadata for it. During testing, however, we found that with this approach the program writing to Hive has to be stopped: if Iceberg and Hive share the same data files, the compaction job keeps compacting the Iceberg table's small files, and since old files are not deleted immediately after compaction, the Hive table would return duplicate data. We therefore adopted a dual-write strategy: the existing program keeps writing to Hive, while a new set of jobs writes to Iceberg. This let us observe the Iceberg tables for a while and compare them against the original Hive data to verify that the programs were correct.

After a period of observation, for Hive and Iceberg tables holding close to several billion records per day (several terabytes after compression), not a single record differed. So once the final comparison showed no problems, we stopped writing to the Hive tables and switched to the new Iceberg tables.

#### Migration tool

I turned this Hive-to-Iceberg migration tool into an Iceberg Action based on a Flink batch job and submitted it to the community, though it has not been merged yet: https://github.com/apache/iceberg/pull/2217. The idea is to leave the original Hive data untouched, create a new Iceberg table, and then generate the corresponding metadata for that new table. Feel free to take a look if you need it.

In addition, the Iceberg community has a tool that migrates existing data into an existing Iceberg table, similar to Hive's `LOAD DATA INPATH ... INTO TABLE`. It is implemented as a Spark stored procedure; you can follow it here: https://github.com/apache/iceberg/pull/2210

## Iceberg Optimizations in Practice

### Compacting small files

Small-file compaction is currently done by a separate batch job. Iceberg provides a Spark version of this action, but I ran into some issues during functional testing, and since I am not very familiar with Spark I was worried about troubleshooting problems there. So I implemented a Flink version modeled on the Spark one, fixed some bugs, and made some functional optimizations.

Since our Iceberg metadata is stored in Hive (that is, we use HiveCatalog), the compaction program's logic is to list all the Iceberg tables in Hive and compact them one by one. The compaction has no filter condition: whether a table is partitioned or not, the whole table is compacted. This is to handle Flink jobs that use event time: if late data arrives, it is written into earlier partitions, and if we only compacted the current day's partition, newly written data in other days' partitions would never be compacted. (A sketch of this loop is shown at the end of this subsection.)

The reason we do not run compaction on a fixed timer per table is that if, say, a table were compacted every five minutes, and a compaction run did not finish and commit a new snapshot within those five minutes, the next scheduled run would start and re-compact the data from the unfinished run. Compacting the tables one by one guarantees that at any moment only one job is compacting a given table.

Code example:

```java
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
Actions.forTable(env, table)
        .rewriteDataFiles()
        //.maxParallelism(parallelism)
        //.filter(Expressions.equal("day", day))
        //.targetSizeInBytes(targetSizeInBytes)
        .execute();
```

The system has been running stably and has completed tens of thousands of compaction runs.

![Compaction job runs](https://static001.geekbang.org/infoq/d5/d5bc19ac15158eed0cccfb72ba35fe18.png)

> *Note:*
>
> The newly released Iceberg 0.11 still has a known bug: when a file being compacted is larger than the target size (targetSizeInBytes), data can be lost. I actually found this problem when I first tested small-file compaction and submitted a PR; my approach was to exclude data files larger than the target size from compaction, but that PR was not merged into 0.11. Later someone else in the community hit the same problem and submitted a PR ( https://github.com/apache/iceberg/pull/2196 ) whose approach is to split such large files down to the target size. It has been merged into master and will be released in the next bug-fix version, 0.11.1.
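The per-table compaction loop described above could be sketched roughly as follows. This is a sketch rather than our production code: it assumes an already-initialized Iceberg `HiveCatalog` (construction omitted), reuses the same Flink `Actions` API as the snippet above, and leaves out scheduling and error handling.

```java
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.iceberg.Table;
import org.apache.iceberg.catalog.Catalog;
import org.apache.iceberg.catalog.Namespace;
import org.apache.iceberg.catalog.SupportsNamespaces;
import org.apache.iceberg.catalog.TableIdentifier;
import org.apache.iceberg.flink.actions.Actions;

public class CompactAllTables {

    // catalog is assumed to be an initialized HiveCatalog pointing at the metastore;
    // HiveCatalog implements both Catalog and SupportsNamespaces.
    static void compactAll(Catalog catalog, SupportsNamespaces namespaces,
                           StreamExecutionEnvironment env) {
        for (Namespace ns : namespaces.listNamespaces()) {       // every Hive database
            for (TableIdentifier id : catalog.listTables(ns)) {  // every Iceberg table in it
                Table table = catalog.loadTable(id);
                // Full-table rewrite, one table at a time, as described above.
                Actions.forTable(env, table)
                        .rewriteDataFiles()
                        .execute();
            }
        }
    }
}
```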
### Query optimization

#### Scheduled batch jobs

For batch jobs run by the scheduler, Flink's SQL client is not yet as polished as Hive's, which can, for example, run a SQL file with `hive -f`. Different jobs also need different resources, parallelism, and so on.

So I wrapped a Flink program of my own: it reads the SQL statements from a given file and submits them as a batch job, while the job's resources and parallelism are controlled on the command line.

```text
/home/flink/bin/flink run -p 10 -m yarn-cluster /home/work/iceberg-scheduler.jar my.sql
```
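A rough sketch of what such a wrapper could look like, assuming Flink 1.12's `TableEnvironment`. The statement splitting is deliberately naive (a simple split on `;`) and the file path comes from the command line, so treat it as an outline rather than a finished tool.

```java
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class IcebergSqlRunner {

    public static void main(String[] args) throws Exception {
        // args[0] is the SQL file passed on the command line, e.g. my.sql
        String script = new String(Files.readAllBytes(Paths.get(args[0])), StandardCharsets.UTF_8);

        // Blink planner in batch mode (Flink 1.12-style setup).
        TableEnvironment tEnv = TableEnvironment.create(
                EnvironmentSettings.newInstance().useBlinkPlanner().inBatchMode().build());

        // Naive split on ';' -- good enough for simple scripts, placeholder for real parsing.
        for (String stmt : script.split(";")) {
            if (!stmt.trim().isEmpty()) {
                // Note: INSERT statements run asynchronously; a real tool would
                // wait on the returned TableResult before submitting the next one.
                tEnv.executeSql(stmt.trim());
            }
        }
    }
}
```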
#### Optimizations

For batch query reads I did some optimization work, such as limit pushdown, filter pushdown, and read-parallelism inference, which can greatly improve query speed. These optimizations have all been contributed back to the community and were released in Iceberg 0.11.

### Operations and maintenance

#### Cleaning up orphan files

1. **Scheduled deletion**

While using Iceberg, situations like this come up: I submit a Flink job and then stop it for whatever reason before Iceberg has committed the corresponding snapshot. In addition, failures caused by various exceptions leave behind orphaned data files that are not referenced by Iceberg's metadata; they are unreachable and useless to Iceberg. So, much like JVM garbage collection, we need to clean these files up.

Currently Iceberg provides a Spark action to remove these useless files. We take the same approach as for small-file compaction: get all the Iceberg tables from Hive and run a scheduled job every hour to delete the orphan files.

```java
SparkSession spark = ......
Actions.forTable(spark, table)
        .removeOrphanFiles()
        //.deleteWith(...)
        .execute();
```

2. **Pitfall**

While this was running we hit a case where valid data files were deleted. After investigation: the snapshot retention was set to one hour, the cleanup program's threshold was also set to one hour, and the logs showed that it was the cleanup program that deleted the valid data. Looking at the code, the likely cause is that with the two intervals set to the same value, another program was still reading snapshots that were about to be expired while the orphan-file cleanup was running, which led to valid data being removed. After changing the cleanup threshold back to the default of three days, the problem never reappeared.

Of course, to be safe, we can also override the original delete method so that files are moved to a backup directory instead, and only deleted manually after checking that everything is fine.

#### Expiring snapshots

Our snapshot expiration runs together with the small-file compaction batch job: after compacting a table's small files, we expire its snapshots, currently keeping one hour. This is because some large tables have many partitions and short checkpoint intervals, and keeping snapshots for too long would keep too many small files around. We have no need to query historical snapshots for now, so I set the retention to one hour.

```java
long olderThanTimestamp = System.currentTimeMillis() - TimeUnit.HOURS.toMillis(1);
table.expireSnapshots()
     // .retainLast(20)
     .expireOlderThan(olderThanTimestamp)
     .commit();
```

#### Data management

After data has been written, when you want to see how many data files a given snapshot has, you cannot tell from the files on HDFS alone which ones are live and which are not, so a corresponding management tool is needed. Flink is not yet mature in this area, but we can use the tooling provided by Spark 3 to inspect them (a sketch is shown after this list).

1. **DDL**

Currently we do `create table` and similar operations through the Flink SQL client. Other DDL operations can be done with Spark:

https://iceberg.apache.org/spark/#ddl-commands

2. **DML**

Some data operations, such as deleting rows, can be done through Spark SQL; Presto currently only supports partition-level deletes.

3. **show partitions & show create table**

When working with Hive we rely on some very common commands such as `show partitions` and `show create table`, which Flink does not support yet, so working with Iceberg is inconvenient in this respect. We made our own modifications on top of Flink 1.12, but they have not been fully contributed upstream yet; we will submit them to the Flink and Iceberg communities when time allows.
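As an illustration of the Spark 3 tooling mentioned above, Iceberg exposes metadata tables that can be queried with plain SQL. A minimal sketch, assuming a Spark 3 session with a catalog named `hive_catalog` configured for Iceberg; the database and table names are placeholders.

```sql
-- Inspect a table's snapshots (snapshot id, commit time, operation, summary).
SELECT * FROM hive_catalog.db.sample_table.snapshots;

-- List the data files tracked by the current snapshot, with record counts and sizes.
SELECT file_path, record_count, file_size_in_bytes
FROM hive_catalog.db.sample_table.files;

-- Show the table's commit history.
SELECT * FROM hive_catalog.db.sample_table.history;
```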
## Future work

- Ingesting CDC data into Iceberg with Flink SQL

In our internal build I have already verified that Flink SQL can write CDC data (for example the MySQL binlog) into Iceberg. Some work is still needed to support this in the community version, and I have submitted several related PRs to push it forward.

- Deletes and updates with SQL

For copy-on-write tables, we can use Spark SQL to do row-level deletes and updates. The supported syntax can be found in the test classes in the source code: org.apache.iceberg.spark.extensions.TestDelete & org.apache.iceberg.spark.extensions.TestUpdate. These features work in my test environment, but I have not yet rolled them out to production.

- Streaming reads with Flink SQL

At work we run into scenarios like this: because the data volume is large, Kafka only retains data for a fairly short time, so if, unfortunately, a program turns out to have a bug or similar, there is no way to go back and re-consume from an earlier point in time.

Once Iceberg streaming reads are introduced, this problem goes away, because Iceberg keeps all of the data. The precondition is that the freshness requirement is not extremely strict, say at second-level latency, because currently the Flink-to-Iceberg transaction commit happens per Flink checkpoint interval.
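For reference, this is roughly what an Iceberg streaming read looks like in Flink SQL, using the hint syntax from Iceberg's Flink documentation; the table name is a placeholder and dynamic table options have to be enabled first.

```sql
-- Allow /*+ OPTIONS(...) */ hints in this session.
SET table.dynamic-table-options.enabled=true;

-- Continuously read newly committed snapshots from an Iceberg table, polling every 10 s.
SELECT * FROM iceberg_catalog.db.iceberg_table
/*+ OPTIONS('streaming'='true', 'monitor-interval'='10s') */;
```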
## Benefits and Summary

After roughly a quarter of research, testing, optimization, and bug fixing around Iceberg, we migrated all of our existing Hive tables to Iceberg. It completely solved our original pain points, the system now runs stably, and compared with Hive we gained a lot:

- Fewer resources for Flink writes

One example: with default configurations, a Flink job reading from Kafka and writing to Hive used to need a parallelism of 60 to keep Kafka from backing up. After switching to writing into Iceberg, a parallelism of 20 is enough.

- Faster queries

As mentioned earlier, Iceberg does not list the whole directory to find a partition's data the way Hive does; it reads the relevant information from the manifest files first. Query performance improved noticeably, and some large reports went from 50 seconds to 30 seconds.

- Concurrent reads and writes

Thanks to Iceberg's transaction support, we can read and write a table concurrently: Flink streams data into the lake in real time while the compaction job compacts small files and the cleanup jobs expire snapshots and remove unused files. Data is served more promptly, at minute-level latency, queries on the latest partitions are much faster, and Iceberg's ACID properties guarantee data correctness.

- Time travel

We can query the data as it was at an earlier point in time.

To sum up: we can now use Flink SQL to read and write Iceberg in both batch and streaming mode, compact small files in near real time, and use Spark SQL for some delete and update work as well as some DDL operations; next, we can use Flink SQL to write CDC data into Iceberg. All of the Iceberg optimizations and bug fixes mentioned here have been contributed to the community. Since the author's knowledge is limited, mistakes are inevitable; corrections are very welcome.

**About the author:** Zhang Jun, big data development engineer at Tongcheng-Elong