How to Build a Smart Lake House Architecture: Code Practices from an Amazon Engineer

{"type":"doc","content":[{"type":"heading","attrs":{"align":null,"level":1}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"数据仓库的数据体系严格、治理容易,业务规模越大,ROI 越高;数据湖的数据种类丰富,治理困难,业务规模越大,ROI 越低,但胜在灵活。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"现在,鱼和熊掌我都想要,应该怎么办?"},{"type":"link","attrs":{"href":"https:\/\/www.infoq.cn\/article\/rKSdSdeEQhgNJVk4sxsz","title":"xxx","type":null},"content":[{"type":"text","text":"湖仓一体架构"}]},{"type":"text","text":"就在这种情况下,快速在产业内普及。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"要构建湖仓一体架构并不容易,需要解决非常多的数据问题。比如,计算层、存储层、异构集群层都要打通,对元数据要进行统一的管理和治理。对于很多业内技术团队而言,已经是个比较大的挑战。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"可即便如此,在亚马逊云科技技术专家潘超看来,也未必最能贴合企业级大数据处理的最新理念。在 11 月 18 日晚上 20:00 的直播中,潘超详细分享了亚马逊云科技眼中的智能湖仓架构,以及以流式数据接入为主的最佳实践。"}]},{"type":"heading","attrs":{"align":null,"level":3},"content":[{"type":"text","text":"现代化数据平台架构的关键指标"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"传统湖仓一体架构的不足之处是,着重解决点的问题,也就是“湖”和“仓”的打通,而忽视了面的问题:数据在整个数据平台的自由流转。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"潘超认为,"},{"type":"link","attrs":{"href":"https:\/\/www.infoq.cn\/article\/Jk5dZg81oMD5RU1izh0k","title":"xxx","type":null},"content":[{"type":"text","text":"现代数据平台架构"}]},{"type":"text","text":"应该具有几个关键特征:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"numberedlist","attrs":{"start":1,"normalizeStart":1},"content":[{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":1,"align":null,"origin":null},"content":[{"type":"text","text":"以任何规模来存储数据;"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":2,"align":null,"origin":null},"content":[{"type":"text","text":"在整套架构涉及的所有产品体系中,获得最佳性价比;"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":3,"align":null,"origin":null},"content":[{"type":"text","text":"实现无缝的数据访问,实现数据的自由流动;"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":4,"align":null,"origin":null},"content":[{"type":"text","text":"实现数据的统一治理;"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":5,"align":null,"origin":null},"content":[{"type":"text","text":"用 AI\/ML 
For MSK, scalability is a key capability. MSK can scale automatically, or you can scale it manually through the API; if you are not entirely confident in your own hands-on operations, automatic scaling is the recommended choice.

Amazon MSK's automatic scaling sets a threshold on storage utilization, with 50%-60% the recommended range. Each automatic expansion adds Max(10 GB, 10% of the cluster's storage), and there is a 6-hour cooldown between automatic scaling operations. If a single operation needs to add more capacity than that, use manual scaling.
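As a concrete sketch of the two paths: MSK's automatic storage scaling is configured through Application Auto Scaling, while a one-shot expansion goes through the `update-broker-storage` API. The cluster ARN, current version, and capacity figures below are placeholders for illustration, not values from the talk:

```bash
# Automatic scaling: target-tracking on broker storage utilization
# (the 50%-60% guidance above). Capacities are GiB per broker.
aws application-autoscaling register-scalable-target \
  --service-namespace kafka \
  --resource-id "arn:aws:kafka:ap-southeast-1:111122223333:cluster/tech-talk/xxxx" \
  --scalable-dimension "kafka:broker-storage:VolumeSize" \
  --min-capacity 1000 --max-capacity 4000

aws application-autoscaling put-scaling-policy \
  --service-namespace kafka \
  --resource-id "arn:aws:kafka:ap-southeast-1:111122223333:cluster/tech-talk/xxxx" \
  --scalable-dimension "kafka:broker-storage:VolumeSize" \
  --policy-name msk-storage-scaling \
  --policy-type TargetTrackingScaling \
  --target-tracking-scaling-policy-configuration \
  '{"TargetValue": 60.0, "PredefinedMetricSpecification": {"PredefinedMetricType": "KafkaBrokerStorageUtilization"}}'

# Manual scaling: grow every broker's EBS volume in one call
# (no 6-hour cooldown applies here).
aws kafka update-broker-storage \
  --cluster-arn "arn:aws:kafka:ap-southeast-1:111122223333:cluster/tech-talk/xxxx" \
  --current-version "K3AEGXETSR30VB" \
  --target-broker-ebs-volume-info '[{"KafkaBrokerNodeId": "ALL", "VolumeSizeGB": 1100}]'
```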
This scaling includes horizontal scaling, adding new brokers to the cluster through the API or the console without affecting cluster availability, as well as vertical scaling, changing the EC2 instance type of the cluster's broker nodes.

But whether scaling is automatic or manual, horizontal or vertical, the precondition is that disk monitoring is already in place. You can use the monitoring integrated with CloudWatch, or tick other monitoring options in MSK (such as Prometheus); either way, the results can be visualized.

Note that when brokers are added to an MSK cluster, the partitions of existing topics are not redistributed automatically; the reassignment has to be run by hand. Reassignment brings extra bandwidth usage that may affect the business, so parameters can be used to throttle inter-broker replication traffic and keep the process from hurting the business too much. Open-source tools like Cruise Control are also well worth adopting: Cruise Control manages large-scale Kafka clusters and can rebalance load across brokers for you.
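A sketch of what a hand-run, throttled reassignment looks like with the stock Kafka tooling; the broker IDs and the 50 MB/s throttle are illustrative assumptions:

```bash
# Generate a candidate reassignment plan for the topics listed in topics.json,
# spreading partitions across brokers 1-4 (illustrative broker IDs).
./bin/kafka-reassign-partitions.sh --bootstrap-server ${bootstrap_server} \
  --topics-to-move-json-file topics.json --broker-list "1,2,3,4" --generate

# Execute the plan, capping inter-broker replication at ~50 MB/s so the
# rebalance does not starve production traffic.
./bin/kafka-reassign-partitions.sh --bootstrap-server ${bootstrap_server} \
  --reassignment-json-file reassign.json --execute --throttle 50000000

# Verify completion; this also removes the throttle once the move is done.
./bin/kafka-reassign-partitions.sh --bootstrap-server ${bootstrap_server} \
  --reassignment-json-file reassign.json --verify
```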
On MSK cluster high availability, three points need attention:

1. For a cluster deployed across two AZs, keep the replication factor at least 3. If it is only 1, the cluster cannot serve traffic while it is being rolled for an upgrade;
2. Set the minimum ISR (in-sync replicas) to at most RF - 1, otherwise rolling upgrades of the cluster are also affected (see the config sketch after this list);
3. When clients connect to broker nodes, configuring one broker address is enough to work, but configure several anyway. During MSK's automatic replacement of failed nodes, or during a rolling upgrade, a client configured with a single broker address may hit connection timeouts; with several configured, it can retry the connection.
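For point 2, the topic-level setting can be applied with the stock `kafka-configs.sh` tool. A minimal sketch, assuming the topic created in the appendix and RF = 3, so the minimum ISR is capped at 2:

```bash
# With replication factor 3, min.insync.replicas should be at most 2 (RF - 1),
# or a rolling upgrade that takes one broker offline blocks acks=all producers.
./bin/kafka-configs.sh --bootstrap-server ${bootstrap_server} --alter \
  --entity-type topics --entity-name tech-talk-001 \
  --add-config min.insync.replicas=2

# Confirm the effective configuration.
./bin/kafka-configs.sh --bootstrap-server ${bootstrap_server} --describe \
  --entity-type topics --entity-name tech-talk-001
```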
At the CPU level, two MSK metrics in CloudWatch are worth attention: CpuSystem and CpuUser. Keeping them below 60% in total is recommended, so that sufficient CPU remains available while MSK is upgraded or maintained.

If CPU utilization climbs too high and triggers an alarm, the MSK cluster can be scaled in the following ways:

1. Scale vertically, replacing instances through a rolling update. Replacing each broker takes roughly 10-15 minutes. Whether to replace every machine in the cluster should, of course, be decided by the actual situation, to avoid wasting resources;
2. Scale horizontally, increasing the number of partitions of a topic;
3. Add brokers to the cluster and reassign the partitions of previously created topics. The reassignment consumes cluster resources, though as noted earlier this is controllable.

Finally, the setting of the producer's acks parameter also deserves attention. acks = all means that after the producer sends a message, success is returned only once all in-sync replicas have received it. This guarantees message durability but yields the lowest throughput. For log-type data, with reference to the specific business, it can be reasonable to set acks = 1, tolerating possible data loss in exchange for a large gain in throughput.
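With the console producer from the Kafka distribution downloaded in the appendix, the trade-off looks like this (broker addresses are placeholders):

```bash
# Durable but slowest: wait for all in-sync replicas to acknowledge.
./bin/kafka-console-producer.sh \
  --bootstrap-server b-1.example:9092,b-2.example:9092,b-3.example:9092 \
  --topic tech-talk-001 --producer-property acks=all

# Log-style data: leader-only ack trades possible loss for much higher throughput.
./bin/kafka-console-producer.sh \
  --bootstrap-server b-1.example:9092,b-2.example:9092,b-3.example:9092 \
  --topic tech-talk-001 --producer-property acks=1
```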
### Amazon EMR storage-compute separation and dynamic resource scaling

Amazon EMR is a managed Hadoop ecosystem; the commonly used Hadoop components are all available on EMR. EMR's core characteristics are two: storage-compute separation, and dynamic resource scaling.

![](https://static001.geekbang.org/wechat/images/b6/b664de6e556ac1c4bc1d4dbc75e33657.png)

In big data, the idea of separating storage from compute is no less hot than unified stream-batch processing or the lakehouse. Taking the AWS product stack as the example: once storage and compute are separated, the data is stored on S3 and EMR is just a compute cluster, a stateless one. With data and metadata both kept externally, the cluster is reduced to stateless compute resources: open it when needed, close it when not.

For example, if a large batch of ETL jobs runs between 1:00 and 5:00 a.m., start the cluster for that window; the rest of the time the cluster need not run at all. On when in use, off when not: for enterprises on the cloud, paying for the service becomes like paying the electricity bill, exceptionally economical.

Dynamic resource scaling mainly means growing and shrinking nodes with the workload and paying by usage. Doing the same on HDFS is much less tractable: if scaling drags DataNode data along, it sets off data re-balancing, which hurts efficiency badly. And scaling the NodeManagers alone does not help off the cloud, where resources are no longer elastic and clusters are generally pre-provisioned, an essential difference from the cloud.

EMR has three node types. The first is the Master node, which hosts services such as the ResourceManager. Core nodes carry the DataNode and NodeManager, so HDFS can still be used. The third type is the Task node, which runs EMR's NodeManager service and is purely a compute node. EMR scaling is therefore about core and task nodes, and scaling policies can be configured against metrics such as the number of applications on YARN or CPU utilization. You can also use the Managed Scaling policy EMR provides, which has a built-in intelligent algorithm for automatic scaling; it is the recommended approach and is imperceptible to developers.

![](https://static001.geekbang.org/wechat/images/ff/ff1b841ef286e5a36b2e577135010e71.png)
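Managed Scaling can also be attached to a running cluster from the CLI. A minimal sketch, with an assumed cluster ID and illustrative limits; the create-cluster command in the appendix sets the same policy at creation time:

```bash
# Attach an EMR Managed Scaling policy to an existing cluster
# (cluster ID and limits are placeholders). EMR's built-in algorithm
# then grows and shrinks core/task capacity between the limits.
aws emr put-managed-scaling-policy \
  --cluster-id j-XXXXXXXXXXXXX \
  --managed-scaling-policy \
  ComputeLimits='{MinimumCapacityUnits=2,MaximumCapacityUnits=10,MaximumOnDemandCapacityUnits=5,MaximumCoreCapacityUnits=3,UnitType=Instances}'
```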
### Building a data lake and a CDC sync pipeline with EMR, Flink, and Hudi

So how should MSK and EMR be used to ingest data into the lake? The detailed architecture diagram is below, explained in six steps; the appendix at the end gives the corresponding code:

![](https://static001.geekbang.org/wechat/images/79/792d952c04c9ef4023bdfde550ea0d4b.png)

Diagram marker 1: log data and business data are sent to MSK (Kafka); Kafka tables are created through Flink (Table API) to consume the Kafka data, with the schema stored in the Hive Metastore;

Diagram marker 2: data in RDS (MySQL) is consumed directly from the binlog through Flink CDC (flink-cdc-connector), with no need to run a separate binlog-consuming service (such as Canal or Debezium). Note: use the 2.x versions of flink-cdc-connector, which support parallel reading and the lock-free and checkpoint features;

Diagram marker 3: the Flink Hudi Connector writes the data into Hudi (S3) tables. Data that needs no updates is written in Insert mode; data that does need updates (business data and CDC data) is written in Upsert mode;

Diagram marker 4: Presto serves as the query engine, providing query services externally. The latency of this data path is the latency of ingestion into Hudi plus the latency of the Presto query, minute-level overall;

Diagram marker 5: metrics that need second-level latency are computed directly in the Flink engine, with results written to RDS or a KV database and exposed through an API query service;

Diagram marker 6: QuickSight performs the data visualization and supports many kinds of data sources.

Of course, in concrete practice, developers still need a sufficient understanding of the data lake approach to choose tuning parameters that fit their scenario.

**Q/A**

1. How do you migrate from Apache Kafka to Amazon MSK?

What MSK hosts is Apache Kafka, and its API is fully compatible: business application code needs no changes, just switch to the MSK connection address. If data in an existing Kafka cluster needs to migrate to MSK, you can use MirrorMaker 2 to synchronize the data and then switch the applications' connection address.

**Reference documentation:**

https://docs.aws.amazon.com/msk/latest/developerguide/migration.html
https://d1.awsstatic.com/whitepapers/amazon-msk-migration-guide.pdf?did=wp_card&trk=wp_card
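A minimal MirrorMaker 2 setup for such a migration might look like the sketch below; the cluster aliases, endpoints, and topic filter are illustrative assumptions, not values from the talk:

```bash
# Mirror all topics from a self-managed cluster ("src") into MSK ("msk").
# Endpoints below are placeholders.
cat > mm2.properties <<'EOF'
clusters = src, msk
src.bootstrap.servers = old-kafka-1:9092,old-kafka-2:9092
msk.bootstrap.servers = b-1.example.kafka.ap-southeast-1.amazonaws.com:9092

# Replicate every topic and consumer group from src into msk.
src->msk.enabled = true
src->msk.topics = .*
src->msk.groups = .*

# Replication settings for MM2's internal topics.
replication.factor = 3
checkpoints.topic.replication.factor = 3
heartbeats.topic.replication.factor = 3
offset-syncs.topic.replication.factor = 3
EOF

# Run MM2 from the Kafka distribution downloaded in the appendix.
./bin/connect-mirror-maker.sh mm2.properties
```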
2. Does MSK support a schema registry?

MSK supports schema registries: it supports not only AWS Glue as the Schema Registry, but also third-party ones such as confluent-schema-registry.

3. What is the latency from MySQL CDC to Hudi?

Overall it is minute-level. It depends on the data volume, the Hudi table type chosen, and the compute resources.

4. How much faster is Amazon EMR than standard Apache Spark?

Amazon EMR is more than 3x faster than standard Apache Spark.

- On Spark 3.0, Amazon EMR is 1.7x faster than open-source Spark, in a TPC-DS test over 3 TB of data.
  **See:** https://aws.amazon.com/cn/blogs/big-data/run-apache-spark-3-0-workloads-1-7-times-faster-with-amazon-emr-runtime-for-apache-spark/
- On Spark 2.x, Amazon EMR is 2-3x or more faster than open-source Spark.
- The Amazon EMR runtime for Presto is 2.6x faster than open-source PrestoDB.
  **See:** https://aws.amazon.com/cn/blogs/big-data/amazon-emr-introduces-emr-runtime-for-prestodb-which-provides-a-2-6-times-speedup/

5. What is the difference between the smart lake house and the lakehouse?

Both the modern data platform discussion in this talk and Amazon's smart lake house architecture diagram reflect the answer. Amazon's smart lake house architecture scales flexibly and is secure and reliable; it is purpose-built, with extreme performance; it fuses data, with unified governance; it offers agile analytics, with deep intelligence; and it embraces open source for win-win development. The lakehouse is only the beginning; the smart lake house is the destination.

---

re:Invent is under way in Las Vegas, and this year marks its tenth edition. In today's keynote, Adam laid out a "data journey" we had never imagined. Starting from data lake scaling and security, he announced row and cell-level security for Lake Formation, bringing fine-grained management to massive data. In step with the serverless trend in cloud computing, he also announced serverless versions of four of the cloud's purpose-built data analytics tools, making AWS a first mover in full-stack serverless data analytics. The usability of big data analytics rises yet again: working with data on the cloud starts to feel effortless. Register for the conference live stream to see the related announcements.

![](https://static001.geekbang.org/wechat/images/f8/f827de795aa283d5840540caf2631210.png)

Appendix: hands-on code

1. Create the EMR cluster

```bash
log_uri="s3://*****/emr/log/"
key_name="****"
jdbc="jdbc:mysql://*****.ap-southeast-1.rds.amazonaws.com:3306/hive_metadata_01?createDatabaseIfNotExist=true"
cluster_name="tech-talk-001"

aws emr create-cluster \
--termination-protected \
--region ap-southeast-1 \
--applications Name=Hadoop Name=Hive Name=Flink Name=Tez Name=Spark Name=JupyterEnterpriseGateway Name=Presto Name=HCatalog \
--scale-down-behavior TERMINATE_AT_TASK_COMPLETION \
--release-label emr-6.4.0 \
--ebs-root-volume-size 50 \
--service-role EMR_DefaultRole \
--enable-debugging \
--instance-groups InstanceGroupType=MASTER,InstanceCount=1,InstanceType=m5.xlarge InstanceGroupType=CORE,InstanceCount=2,InstanceType=m5.xlarge \
--managed-scaling-policy ComputeLimits='{MinimumCapacityUnits=2,MaximumCapacityUnits=5,MaximumOnDemandCapacityUnits=2,MaximumCoreCapacityUnits=2,UnitType=Instances}' \
--name "${cluster_name}" \
--log-uri "${log_uri}" \
--ec2-attributes '{"KeyName":"'${key_name}'","SubnetId":"subnet-0f79e4471cfa74ced","InstanceProfile":"EMR_EC2_DefaultRole"}' \
--configurations '[{"Classification": "hive-site","Properties": {"javax.jdo.option.ConnectionURL": "'${jdbc}'","javax.jdo.option.ConnectionDriverName": "org.mariadb.jdbc.Driver","javax.jdo.option.ConnectionUserName": "admin","javax.jdo.option.ConnectionPassword": "xxxxxx"}}]'
```
2. Create the MSK cluster

```bash
# The MSK cluster can be created through the CLI or the console.
# Download Kafka, then create a topic and write some data.
wget https://dlcdn.apache.org/kafka/2.6.2/kafka_2.12-2.6.2.tgz
# MSK ZooKeeper and broker addresses
zk_servers=*****.c3.kafka.ap-southeast-1.amazonaws.com:2181
bootstrap_server=******.5ybaio.c3.kafka.ap-southeast-1.amazonaws.com:9092
topic=tech-talk-001
# Create the tech-talk-001 topic
./bin/kafka-topics.sh --create --zookeeper ${zk_servers} --replication-factor 2 --partitions 4 --topic ${topic}
# Produce messages
./bin/kafka-console-producer.sh --bootstrap-server ${bootstrap_server} --topic ${topic}
{"id":"1","name":"customer"}
{"id":"2","name":"aws"}
# Consume messages
./bin/kafka-console-consumer.sh --bootstrap-server ${bootstrap_server} --topic ${topic}
```

3. Start Flink on EMR

```bash
# Start a Flink on YARN session cluster.
# Download the Kafka connector
sudo wget -P /usr/lib/flink/lib/ https://repo1.maven.org/maven2/org/apache/flink/flink-sql-connector-kafka_2.12/1.13.1/flink-sql-connector-kafka_2.12-1.13.1.jar && sudo chown flink:flink /usr/lib/flink/lib/flink-sql-connector-kafka_2.12-1.13.1.jar
# hudi-flink-bundle 0.10.0
sudo wget -P /usr/lib/flink/lib/ https://dxs9dnjebzm6y.cloudfront.net/tmp/hudi-flink-bundle_2.12-0.10.0-SNAPSHOT.jar && sudo chown flink:flink /usr/lib/flink/lib/hudi-flink-bundle_2.12-0.10.0-SNAPSHOT.jar
# Download the CDC connector
sudo wget -P /usr/lib/flink/lib/ https://repo1.maven.org/maven2/com/ververica/flink-sql-connector-mysql-cdc/2.0.0/flink-sql-connector-mysql-cdc-2.0.0.jar && sudo chown flink:flink /usr/lib/flink/lib/flink-sql-connector-mysql-cdc-2.0.0.jar
# Flink session
flink-yarn-session -jm 1024 -tm 4096 -s 2 \
-D state.checkpoints.dir=s3://*****/flink/checkpoints \
-D state.backend=rocksdb \
-D state.checkpoint-storage=filesystem \
-D execution.checkpointing.interval=60000 \
-D state.checkpoints.num-retained=5 \
-D execution.checkpointing.mode=EXACTLY_ONCE \
-D execution.checkpointing.externalized-checkpoint-retention=RETAIN_ON_CANCELLATION \
-D state.backend.incremental=true \
-D execution.checkpointing.max-concurrent-checkpoints=1 \
-D rest.flamegraph.enabled=true \
-d
```

4. The Flink SQL client

```bash
# Write SQL and submit jobs through the Flink SQL client.
# Start the client
/usr/lib/flink/bin/sql-client.sh -s application_*****
# result-mode
set sql-client.execution.result-mode=tableau;
# set default parallelism
set 'parallelism.default' = '1';
```
5. Consume Kafka and write to Hudi

```sql
-- Create the Kafka table
CREATE TABLE kafka_tb_001 (
id string,
name string,
`ts` TIMESTAMP(3) METADATA FROM 'timestamp'
) WITH (
'connector' = 'kafka',
'topic' = 'tech-talk-001',
'properties.bootstrap.servers' = '****:9092',
'properties.group.id' = 'test-group-001',
'scan.startup.mode' = 'latest-offset',
'format' = 'json',
'json.ignore-parse-errors' = 'true',
'json.fail-on-missing-field' = 'false',
'sink.parallelism' = '2'
);
-- Create the Flink Hudi table
CREATE TABLE flink_hudi_tb_106(
uuid string,
name string,
ts TIMESTAMP(3),
logday VARCHAR(255),
hh VARCHAR(255)
) PARTITIONED BY (`logday`,`hh`)
WITH (
'connector' = 'hudi',
'path' = 's3://*****/teck-talk/flink_hudi_tb_106/',
'table.type' = 'COPY_ON_WRITE',
'write.precombine.field' = 'ts',
'write.operation' = 'upsert',
'hoodie.datasource.write.recordkey.field' = 'uuid',
'hive_sync.enable' = 'true',
'hive_sync.metastore.uris' = 'thrift://******:9083',
'hive_sync.table' = 'flink_hudi_tb_106',
'hive_sync.mode' = 'HMS',
'hive_sync.username' = 'hadoop',
'hive_sync.partition_fields' = 'logday,hh',
'hive_sync.partition_extractor_class' = 'org.apache.hudi.hive.MultiPartKeysValueExtractor'
);
-- Insert data
insert into flink_hudi_tb_106 select id as uuid,name,ts,DATE_FORMAT(CURRENT_TIMESTAMP, 'yyyy-MM-dd') as logday, DATE_FORMAT(CURRENT_TIMESTAMP, 'hh') as hh from kafka_tb_001;
```

```bash
# Besides specifying sync when the table is created, the Hudi table metadata
# can also be synced to Hive through the CLI; mind the partition format.
./run_sync_tool.sh --jdbc-url jdbc:hive2://*****:10000 --user hadoop --pass hadoop --partitioned-by logday --base-path s3://****/ --database default --table *****
# Query the data with Presto
presto-cli --server *****:8889 --catalog hive --schema default
```

6. Sync MySQL CDC to Hudi

```sql
-- Create the MySQL CDC table
CREATE TABLE mysql_cdc_002 (
id INT NOT NULL,
name STRING,
create_time TIMESTAMP(3),
modify_time TIMESTAMP(3),
PRIMARY KEY(id) NOT ENFORCED
) WITH (
'connector' = 'mysql-cdc',
'hostname' = '*******',
'port' = '3306',
'username' = 'admin',
'password' = '*****',
'database-name' = 'cdc_test_db',
'table-name' = 'test_tb_01',
'scan.startup.mode' = 'initial'
);
-- Create the Hudi table
CREATE TABLE hudi_cdc_002 (
id INT,
name STRING,
create_time TIMESTAMP(3),
modify_time TIMESTAMP(3)
) WITH (
'connector' = 'hudi',
'path' = 's3://******/hudi_cdc_002/',
'table.type' = 'COPY_ON_WRITE',
'write.precombine.field' = 'modify_time',
'hoodie.datasource.write.recordkey.field' = 'id',
'write.operation' = 'upsert',
'write.tasks' = '2',
'hive_sync.enable' = 'true',
'hive_sync.metastore.uris' = 'thrift://*******:9083',
'hive_sync.table' = 'hudi_cdc_002',
'hive_sync.db' = 'default',
'hive_sync.mode' = 'HMS',
'hive_sync.username' = 'hadoop'
);
-- Write the data
insert into hudi_cdc_002 select * from mysql_cdc_002;
```
7. sysbench

```bash
# Use sysbench to write data into MySQL.
# Install sysbench
curl -s https://packagecloud.io/install/repositories/akopytov/sysbench/script.rpm.sh | sudo bash
sudo yum -y install sysbench
# Note: the lua scripts used below (creates.lua, insert.lua, update.lua, drop.lua)
# are not provided here; define them according to your own setup.
# The table structure they target:
# CREATE TABLE if not exists `test_tb_01` (
#   `id` int NOT NULL AUTO_INCREMENT,
#   `name` varchar(155) DEFAULT NULL,
#   `create_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
#   `modify_time` timestamp NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
#   PRIMARY KEY (`id`)
# ) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;
# Create the table
sysbench creates.lua --mysql-user=admin --mysql-password=admin123456 --mysql-host=****.rds.amazonaws.com --mysql-db=cdc_test_db --report-interval=1 --events=1 run
# Insert data
sysbench insert.lua --mysql-user=admin --mysql-password=admin123456 --mysql-host=****.rds.amazonaws.com --mysql-db=cdc_test_db --report-interval=1 --events=500 --time=0 --threads=1 --skip_trx=true run
# Update data
sysbench update.lua --mysql-user=admin --mysql-password=admin123456 --mysql-host=****.rds.amazonaws.com --mysql-db=cdc_test_db --report-interval=1 --events=1000 --time=0 --threads=10 --skip_trx=true --update_id_min=3 --update_id_max=500 run
# Drop the table
sysbench drop.lua --mysql-user=admin --mysql-password=admin123456 --mysql-host=****.rds.amazonaws.com --mysql-db=cdc_test_db --report-interval=1 --events=1 run
```