唯品会翻牌ClickHouse后,实现百亿级数据自助分析

{"type":"doc","content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"大家好,我是唯品会实施平台OLAP团队负责人王玉,负责唯品会、Presto、ClickHouse、Kylin、Kudu等OLAP之间的开源修改,组建优化和维护,业务使用范围支持和指引等工作。本次我要分享的主题是唯品会基于ClickHouse的百亿级数据自助分析实践,主要分为以下4个部分:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"bulletedlist","content":[{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"唯品会OLAP的演进 "}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"实验平台基于Flink和ClickHouse如何实现数据自助分析"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"使用ClickHouse中遇到的问题和挑战"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"ClickHouse未来在唯品会的展望"}]}]}]},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"一、唯品会OLAP的演进"}]},{"type":"heading","attrs":{"align":null,"level":3},"content":[{"type":"text","text":"1、OLAP在唯品会的演进"}]},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/infoq\/85\/85d24680dfdc49463d861062955782fe.jpeg","alt":"图片","title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"1)Presto"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"Presto作为当前唯品会OLAP的主力军,经历了数次架构和使用方式的进化。目前,具有500多台高配的物理机,CPU和内存都非常强悍,服务于20多个线上的业务。日均查询高峰可高达到500万次,每天读取和处理接近3PB的数据,查询的数据量也是几千亿级。另外,我们在的Presto的使用和改进上也做了一些工作,下面给大家详细地介绍一下。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"2)Kylin"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"上图中中间的符号就是Kylin,Kylin是国人开源的Apache项目,Kylin作为Presto的补充,在一些数据量级不适用于Presto实时查询原来的表的情况下,数据量又非常的大,例如需要查询几个月的打点流量数据,这样的数据有时候就可以用固定的规则预计算,这样就可以用Kylin作为补充。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"3)ClickHouse"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"目前有两个集群,每个集群大概是20至30台高配物理机,服务于实验平台A\/B-test,还有 OLAP的日志查询,打点监控等项目。ClickHouse是未来我们发展的关键。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"Kylin的数据量比Presto要小很多,它主要是打点几个特定的查询,比如有供应规则的,类似于以前分析人们消费的品质或者出固定的报表才会用,它的数据量比Presto要小很多。"}]},{"type":"heading","attrs":{"align":null,"level":3},"content":[{"type":"text","text":"2、唯品会OLAP部署架构模式"}]},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/infoq\/89\/8964d0e1209130de5b7677acbb2e2011.jpeg","alt":"图片","title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"下面这一部分是数据层,有Hadoop、Spark,例如ClickHouse从Hive导入数据到ClickHouse就是通过Waterdrop,老版本使用Waterdrop是用Spark去实现,而新版本的Waterdrop可以使用Flink实现,用户可以通过不同的配置中心去配置。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"另外还有Kafka和Flink,实时数仓数据写入ClickHouse或者Hbase是用Kafka加Flink来写,根据具体的业务需求和架构而定,在引擎层上面有一层代理层,ClickHouse我们是用chproxy,这也是一个开源项目,而且它的功能比较完善,像有一些用户权限在chproxy里面都可以去设置。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"我们自己在chproxy之上,还搭了一个HAproxy,主要是做一层HA层,以防chproxy有时候可能会出现问题。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"用户是直接连HAproxy的地址和端口,这样我们会去做HA分发,保障高可用,在Presto代理层都使用了我们自主研发的一个OLAP跨越的服务工具——Nebula,老版本是叫Spider,就是各个对外提供的数据服务和各种业务,平行于右边的整套链路,向左侧对应的就是这部分的各个监控,有的是用Prometheus,有的是用Presto,Presto会自己有API和JDBC来提供这些接口,我们会将这些接口采集过来,在 Open faculty里面进行监控和告警。这就是整个OLAP的部署架构模式。"}]},{"type":"heading","attrs":{"align":null,"level":3},"content":[{"type":"text","text":"3、Presto的业务架构图"}]},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/infoq\/d6\/d6f13f0e56909f2dfbbe477f0f3175e7.jpeg","alt":"图片","title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"底层利用了 Presto多数据源的Connector特性,如上图底层数据有Hive、Kudu、MySQL、Alluxio。Presto它是可以配置Catelog的,可以把数据源放在一个SQL里面做JOIN,是非常方便的。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"中间层包括有多个Presto集群,最初是根据不同的业务来独立不同的Presto集群。针对Presto集群,可以部署多个Spider,诊断分析记录所有的查询,因为Presto本身的历史查询是不持久的,放在内存里面也会有过期时间。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"最上层就是用Presto服务的业务应用,最初只有两个魔方和自主分析两个业务应用,发展到现在进入到20个业务应用。因为Presto在使用上确实十分方便,用户只需要写SQL,而且SQL的语法跟Hive也比较相似,也有很多UDF,并且接入也很方便,可以直接接client或者JDBC。所以在Presto的使用上还是非常频繁的。"}]},{"type":"heading","attrs":{"align":null,"level":3},"content":[{"type":"text","text":"4、对Presto的改进"}]},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/infoq\/58\/58dcad819e679379a507d86b8f4c72ea.jpeg","alt":"图片","title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"第一个是开发了一套Presto的管控工具,修改了Presto server和kinda的源码,用自营的管理工具Spide\/Nebula从Presto所暴露的API和系统表里获取到节点和查询信息,一方面将查询落入MySQL,通过etl-job落入Hive便于存储和分析,包括以后也可以做一些报表,考虑从哪些方面进行优化。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"一方面根据集群查询数和节点信息来给集群打分。就是看Presto当前每个集群上面大概跑了多少个查询,查询量大不大?对Presto的集群负载情况如何?打了分以后,每次从client这边发起的查询,就会去选取分数相对较低的,也就是资源占用度比较低的集群,把SQL打进去,这样子就能实现负载均衡的效果。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"很多熟悉Presto的同学都知道Presto clinter的 HA是需要自己做的,因为官方的版本只提供了单节点的clinter,我们通过打分和负载均衡的情况去做了HA,这样子就保障了Presto的 HA,不管是在部署、升级,还是在评集群,这些时候用户都是无感。"}]},{"type":"heading","attrs":{"align":null,"level":3},"content":[{"type":"text","text":"5、Presto容器化"}]},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/infoq\/ce\/ce0b3d31c89109a4063b181a918171fd.jpeg","alt":"图片","title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"Presto上云接入K8S,可以智能扩缩容Presto集群,做到资源合理调度、智能部署等功能,运维起来就非常方便。我们只需要给Presto加一些容器就可以智能的调用它的资源情况。再配合上文提到的准入的工具,就形成了整个链路的智能路由效果。我们也通过这一套上云和广告工具,实现了和一线团队Spark资源集群借调。白天把资源返还给我们去做PrestoADHOC,夜间我们会把资源释通过k8s释放出来,给spark去跑离线叫法,这样就实现了资源的高效利用。"}]},{"type":"heading","attrs":{"align":null,"level":3},"content":[{"type":"text","text":"6、ClickHouse的引入"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"随着业务对于OLAP的要求越来越高,部分业务场景Presto和Kylin无法满足现在的需求,比如有百亿JOIN百亿(Local join)的低延迟实时数据场景,和对中等的QPS的查询平均响应时间要求低于1秒的OLAP使用场景等,所以我们现在把目光转向更快的ClickHouse。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"下图我们对比了ClickHouse和已经使用的Presto和Kylin。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/infoq\/a1\/a1aa11f66e7ee6009a8f9c801bf2513f.jpeg","alt":"图片","title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"ClickHouse的优势如下:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"数据存储方面,ClickHouse的原数据一部分会储存在CK里面,路径和其他东西会储存在ZK里面。本地也有文件目录储存原数据表、本地表、分布式表和线表文等信息。这些都在本地的文件目录里面,感兴趣的同学可以去看一下ClickHouse的整个的目录构成。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"数据按照设置的策略存储在本地的目录上,在查询上的特定场景中,甚至可以比Presto快10倍。但是即使用了更新的表引擎还是存在很多限制和问题。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"我们发现它的场景对于我们来说,比较实用的场景就是有百亿级数据量的JOIN,和一些比较复杂的数学数字的查询,还有人群计算等,比如说涉及到一些Bit map,后面将会详细的介绍这方面的内容。"}]},{"type":"heading","attrs":{"align":null,"level":3},"content":[{"type":"text","text":"7、ClickHouse针对于Presto等传统的OLAP引擎的优势"}]},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/infoq\/41\/417c7219a482fd989cfb3384af39ee17.jpeg","alt":"图片","title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"大宽表查询性能优异,它主要的分析都是大宽表的SQL聚合,ClickHouse整个聚合耗时都非常小,并且具有量级的提升。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"单表性能分析以及分区对其的join计算都能取得很好的性能优势。比如百亿级数据量级join几十亿级数据量级的大表关联大表的场景,在24C*128G*10shard(2副本)通过优化取得了10秒左右的查询性能。如果数据量级更小一点的话,基本上都能维持在1到2秒内,就是用比Presto更少的物理资源,实现更快的查询。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"ClickHouse带来了很多比较高效的数据算法,比如各种估算,各种map的计算和Bit map与或非的预算。在很多场景下,这些都值得去深挖。后面我们会简单介绍一下,我们现在掌握的一些Bit map的场景。"}]},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"二、实验平台基于Flink和ClickHouse如何实现数据自助分析"}]},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/infoq\/b4\/b4a395da68214ccd388abd7b7a4513a7.jpeg","alt":"图片","title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"如今很多技术也都在围绕A\/B-test展开,所以我们最近也在关注这些这样的应用场景。"}]},{"type":"heading","attrs":{"align":null,"level":3},"content":[{"type":"text","text":"1、实验平台OLAP业务场景"}]},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/infoq\/b5\/b5522d4920b41dfb8b70c4cb1c353a03.jpeg","alt":"图片","title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"上图是实验平台一个典型的从最左侧的 A\/B-test的日志到下段再到曝光、点击、加入购物车、收藏,最后生成订单。这明显的是一个从上到下的漏斗下降的过程。我们实现了Flink SQL Redis connector,支持Redis的sink、Source维表关联内可配置cache,极大地提高应用的TPS。通过Flink SQL实现实时数据流的pipeline,最终将大宽表sink到CK里,并按照某个字段粒度做murmurhash的存储。保证相同用户的数据都存在同一个副本的节点组内。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"我们自己在Flink的ClickHouse的connector这一端也做了一些代码的修改,支持写入本地表的一个相关功能。"}]},{"type":"heading","attrs":{"align":null,"level":3},"content":[{"type":"text","text":"2、Flink写入ClickHouse本地表的流程"}]},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/infoq\/81\/817da985a4a7b0243f3c2e72764cbff2.jpeg","alt":"图片","title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"bulletedlist","content":[{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"第一步是根据库名和表明查ClickHouse的原数据表, SQL表示system.tables,这是ClickHouse自带的系统表。获取要写入的表的engine信息。"}]}]}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"bulletedlist","content":[{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"第二步是解析engine信息,获取这个表所存储的一些集群名,获取本地表的表名等这些信息。"}]}]}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"bulletedlist","content":[{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"第三步是根据集群名和查询数据的表,通过system.clusters也就是ClickHouse的系统表,获取集群的分片节点信息。"}]}]}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"最后根据下的配置的信息,初始化生成随机的shard group里的URL来连接。这样子Flink的ClickHouse就会通过 URL来连接机器,并根据设置好的进度时间触发flush,将一批数据真正的sink到ClickHouse的server端。(这里也涉及到一些参数优化)"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"以上就是我们如何经过修改,完整地把数据从Kafka通过Flink写入ClickHouse的一个流程。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"Flink数据写入的时序图可以参考下图:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/infoq\/85\/8599875ee50561475568e706462f2059.jpeg","alt":"图片","title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"heading","attrs":{"align":null,"level":3},"content":[{"type":"text","text":"3、ClickHouse百亿级数据JOIN的解决方案"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"在实际应用场景中我们发现了一些特定的场景,我们需要拿一天的用户流量点击情况,来JOIN A\/B-test的日志,用以匹配实验和人群的关系,这就给我们带来了很大的挑战,因为需求是提到Presto这端的,在这端发现百亿JOIN百亿,直接内存爆表,直接报出out of limit memory这种错误,无法查出。两张大分布式表JOIN出来的性能也非常不理想,我们把它缩到几十亿JOIN十亿的情况,SQL能查出来,但是可能要查个几分钟,这样子是完全不能作为ADHOC的性能来给用户提供服务的。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"在这个情况下,甚至用户可能还会想加入几张尾表,这样的表格数据量会更多,查询会更加麻烦。我们想用ClickHouse去看看应该怎样去解决这样的问题。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/infoq\/d7\/d755660f8d92ffeab23d1235e02c7683.jpeg","alt":"图片","title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"bulletedlist","content":[{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"分桶join字段"}]}]}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"在这种情况下,我们用了类似于分桶概念。首先把左表和右表join的字段,建表时用hash来落到不同的机器节点,murmurHash3_64(mid)。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"如果是写入分布式表的话,在建表的时候直接指定这个表的规则是murmurhash字段,insert into分布式表,会自动按照 murmurhash的规则,通过 ClickHouse来写入不同的表。如果是写入本地表,在Flink写入段路由策略加入murmurhash即可。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"总而言之不管你从哪里写,分桶规则一定要保持一致。在查询的时候,使用分布式表JOIN本地表,来达到想要的效果。如上图右边的 SQL,左表-all是分布式表,JOIN右边的-local表,并不是用-all表去JOIN-all表,这样就达到了分桶的效果。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/infoq\/81\/814cd251b4230169eea1bb3817b5b748.jpeg","alt":"图片","title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"这样分桶后JOIN的效果,是等于分布式表JOIN分布式表,我们做了多次验证,且处理的数据量只有总数据量\/(集群机器数\/副本数)。比如我们有20台机器,然后每个机器有两个副本,我们看一下input size,等于总数据量除以10,那么20台机器有2个副本的话就有10个不同的shard。而且这个是可以通过扩容去进一步缩小数据量的。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"所以我们第一步做的就是把JOIN的数据量放小,这样子的话JOIN写出来以后,它也是在本机上进行,根本不用走分布式表,只是在最后聚合运算的时候再去做分布式。值得注意的是,即在左表JOIN右表的过程中,如果左表是子查询,则分布规则不生效,查询出的结果也远远小于预期值。等于本地表JOIN本地表,对分布式没有要求。如上图所示,SELECT)左边是子查询SELECT*FROM tableA_local,其实等同于SELECT*FROM tableA_all,这个地方如果子查询写了all跟local其实效果是一样的,它都是当成local表来做。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"这个是ClickHouse的自己的语义的解释,所以我们也不去评判它对不对。但它确实不同的语义查出来的效果也是不同的。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"所以在用这种分桶效果的时候,要注意一下左表是不能用子查询的,如果用子查询,10个shard查出来最后只有1\/10左右,结果是不准确的。"}]},{"type":"heading","attrs":{"align":null,"level":3},"content":[{"type":"text","text":"4、增量数据更新场景"}]},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/infoq\/85\/8569885da97a1a65fb3e06c4b7e47cee.jpeg","alt":"图片","title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"订单类数据需要像写Kudu或者MySQL一样,做去重,由于流量数据都实时写入数据,为了订单数据和流量数据做JOIN,就需要对订单数据做去重,由于订单数据是有生命周期的,从产生之后会不停地update。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"我们用了一下这4种方案去验证了ClickHouse,看看整个ClickHouse的性能如何:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"bulletedlist","content":[{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" ReplacingMergeTree数据无法merge,忽大忽小,不能满足需求。"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"带副本的Mergetree可以做去重。对Hash字段不变化的情况下合适。"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"remote表。查询复杂,对性能有影响,存在副本的可靠性问题,但结果是准确的。"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"Flink方案规避去重和Hash字段变化的问题。"}]}]}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"总而言之,ClickHouse并不是一个主打更新功能的引擎,有选择的使用,针对需求选择不同的使用方式,才比较合适。使用ClickHouse的更新的时候,个人建议要谨慎使用。"}]},{"type":"heading","attrs":{"align":null,"level":3},"content":[{"type":"text","text":"5、Flink写入端遇到的问题及优化"}]},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/infoq\/ea\/eae65a7a04c946b35cc4524b581a69a3.jpeg","alt":"图片","title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"需要注意:"},{"type":"text","text":"这样是为了满足两个场景,第一个是高写入场景速度快,可能21万条记录先到达,还有就是夜里可能打点数据很小,60秒会产生一个半时再往里写,这样也是为了一个低延时,我们也是会选择60秒这样的场景做兜底。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/infoq\/b6\/b684dda66e51da8af234e09da5870700.jpeg","alt":"图片","title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"需要注意:"},{"type":"text","text":"coalesce空值处理函数是在Flink SQL里面加上的,这样子就能保障数据在sink的时候是完整的,这个是比较准确的,也是我们比较推荐的一种方法。"}]},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"三、使用ClickHouse中遇到的问题和挑战"}]},{"type":"heading","attrs":{"align":null,"level":3},"content":[{"type":"text","text":"1、ClickHouse查询优化"}]},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/infoq\/3f\/3fd1a1b344d5f6a7ec6d09a9102bdbbe.jpeg","alt":"图片","title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"根据业务和数据特性选择合适的引擎,根据副本、Merge、更新之类的场景,选取表引擎。ClickHouse表引擎选择好,能达到事半功倍的效果,而且选不选副本对数据的稳定性都很重要。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"做好数据的分级分区,做好第一级分区、二级分区,很多时候分区是拿来运用数据的,比如日期分区、typed种类等,都会起到第一层索引的作用,而且效果比索引的好,这样能方便快捷地找文件、找数据。最好是在分区数和分区内的part文件取一个平衡,一个分区的Part数也不宜过多,过多的话会导致扫描效率低下。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"根据SQL特性,我们会去做order by的排序。类似于上图就是order by的tape,做OLAP都知道,降低扫描的数据量对提升效率的加成是非常大的,这里也是为了减少查询主要的数据量,所以索引是非常重要的。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"做好生命周期,生命周期是用TTL ,这个ClickHouse建表的时候也是自带的,你看这里我们就是用了它自己的dt,dt是一个date类型,看到这两个是个 data类型,然后加一个间隔,32天的间隔做一个生命周期,生命周期也能省很多事情。这样的话历史数据等于也不会太多,我们也可以把历史数据导到其他的存储里面去,或者冷数据里面去做这个事情。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"索引的粒度的设置,默认是8192,但是在一些少行多列或者存在一些列特别大的表中就不是正式的,比如Bit map,它是一行是一个标签,但是他Bit map一个字段非常大,可能是几十G,一列就是几十G,但是它一个表可能也就一两千行,这个时候值就要设置小一点。"}]},{"type":"heading","attrs":{"align":null,"level":3},"content":[{"type":"text","text":"2、常用参数调整"}]},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/infoq\/80\/807838476626ce960eee7526f8873c98.jpeg","alt":"图片","title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/infoq\/96\/961e9ecc5e8b20b71c44cfab0f0b74d7.jpeg","alt":"图片","title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"在正式的ClickHouse实践中,我们也一步步修改了ClickHouse默认的一些参数。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"正如图上面针对查询和merge的并发度、内存设置的参数,配置都在config或者user.xml这些配置文件里面,针对一些特殊场景,可以用session来设计特殊参数,就不是全局的参数。这个参数的设置情况也可以在CK的系统表里面去查到。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"第二个是merge的一些参数,包括它的后台线程数,这个也是按照CPU设的,根据你CPU的核数来设置线程数,这个都是根据业务和机器数去调优,然后再去写一些通用的SQL去测试。我们也是在很多次测试后设置了这么一个参数,现在性能也达到了一个比较满意的效果。"}]},{"type":"heading","attrs":{"align":null,"level":3},"content":[{"type":"text","text":"3、物化视图"}]},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/infoq\/39\/3920274ffe0b2980df21a656c9fe93f9.jpeg","alt":"图片","title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"说到ClickHouse就必须说说物化视图,因为ClickHouse的物化视图是一种查询结果的持久化,它确实是给我们带来了查询效率的提升。用户查起来跟表没有区别,它就是一张表,它也像是一张时刻在预计算的表,创建的过程它是用了一个特殊引擎,加上后来 as select,有点像我们写的etl-join,熟悉etl-join的同事都知道,create一个table as select的写法。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"ClickHouse的物化视图就是通过这种方式来表达所需要的需求和规则。如果历史数据也需要进行初始化,需要加上一个关键字popular,但是这种创建的时候你要注意原表在往物化视图计算历史数据的时候,你要停止写,要不然他popular生成物化视图去追历史数据这段时间,你新写入的数据它是不会计算难度的,包括以后也不会统计这段时间你写的数据。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"所以介入历史数据加popular的时候,在物化视图完成之前,不要写入,这是一个注意事项。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"需要特别注意的就是我们在使用物化视图的过程中,也体会到它的一些优点和缺点。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"bulletedlist","content":[{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"优点:查询速度快,要是把物化视图这些规则全部写好,它比原数据查询快了很多,总的行数少了,因为都预计算好了。"}]}]}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"bulletedlist","content":[{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"缺点:它的本质是一个流式数据的使用场景,是累加式的技术,所以要用历史数据做去重、去核,或者是一些这样分析,在物化视图里面是不太好用的。在某些场景的使用也是有限的。而且如果一张表你加了好多物化视图,在写这张表的时候,就会消耗很多机器的资源,你可能会发现你写数据的时候突然数据带宽码,或者是存储一下子增加了很多,你就会需要看一下整个物化视图的情况。"}]}]}]},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"四、展望"}]},{"type":"image","attrs":{"src":"https:\/\/static001.geekbang.org\/infoq\/bc\/bc80f8b124a8b9285120bac52b108bd5.jpeg","alt":"图片","title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"heading","attrs":{"align":null,"level":3},"content":[{"type":"text","text":"1、RoaringBitmap留存分析"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"用RoaringBitmap在人群标签的场景加速流程分析,利用Bit map能提升很多空间,而且能减少许多存储代价。可以用ClickHouse里自带的Bit map and Bit map all这种云或非的计算的定义性,来完成业务的需求。比如一个标签, A标签有1亿个人群,它就是一个Bit map,B标签它有一个5000万人群,我们想要同时满足A和B标签,只要用一个Bit map and把这两个Bit map往里塞,它直接就可以告诉你估算的数值,效率很高。Bit map对于几百万、几千万这样的量级,他可能10秒之内就能返回。如果用其他的方式想把Presto写出口,SQL又麻烦表又大扫描的行数又多。"}]},{"type":"heading","attrs":{"align":null,"level":3},"content":[{"type":"text","text":"2、存算分离"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"我们准备找一个合适的分布式存储去研究。例如Juice FS,这是我们国内开源的非常优秀的文件系统。还有一个S3,前段时间在ClickHouse官方放出来的实践,就是用亚马逊S3来做一个底层的分布式存储,这样做一个分布式存储是为了解决因存储而频繁扩容ClickHouse集群的问题。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"因为ClickHouse是单机性的,所以扩容起来比较麻烦,需要把老数据也拿过去搬。但鉴于老数据跟新数据的一些规则不一样,在扩容的时候,新机器和老机器的数量会不一样,它murmurhash的结果也不一样,放到集群也不一样。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"而研究存在分离云存储这样的方法,虽然我们要用的机器数不变,但却可以解决上述的问题。"}]},{"type":"heading","attrs":{"align":null,"level":3},"content":[{"type":"text","text":"3、资源开发工具"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"目前我们属于ClickHouse业务推广阶段,对ClickHouse使用方管控较少,也没做过多的储存、计算、查询角色等方面的管控。数据安全乃大数据重中之重,我们将在接下来的工作逐渐完善这一块。"}]},{"type":"heading","attrs":{"align":null,"level":3},"content":[{"type":"text","text":"4、权限管控"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"在ClickHouse的新版中,已经加入了RBAC的访问控制,官方也推荐使用这种方式。我们也想把这一套授权用到我们自己的ClickHouse的集群里面来。"}]},{"type":"heading","attrs":{"align":null,"level":3},"content":[{"type":"text","text":"5、资源管控"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"在资源层面我们会结合存算分离,给不同的业务分配不同的用户,不同的用户在云平台上申请的存储资源,我们就可以对每个用户的储存进行价值计算。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"以上5点是我们未来准备在ClickHouse这个方向上面继续要投入去计算的事情。像Bit map的话,我们现在已经在与广告、人群还有实验平台在深度的合作当中了,以后有机会再跟大家分享。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"Q1:ClickHouse数据入库的时延是多少?能达到秒级吗?在秒级时延下,单节点入库的吞吐量是多少?"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"A1:"},{"type":"text","text":"这里看你的ClickHouse用什么去写入,如果你是直接到GEPC端口插入的话,是可以达到秒级的,但是他主要是消耗在merge部分。这个是看你怎么用,如果你是用大数据的场景,用海量打点这样的数据,再用GEPC去查询的话就不太合适,它是达不到秒级这样的场景。但如果你是把它当成一个小存储,比如写入百万级这种的,它的merger也发挥不了什么作用,这也是能达到秒级的。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"单节点入库的吞吐量是多少?其实这是要看是它的数据量条数还是它整个的量级,像Flink用sink去写入,我们达到的TPS峰值基本上是60万GPS左右,分到10台机器,每台机器大概是六七万TPS左右。这可能和机器的带宽、CPU、io和底层存储是不是SID这方面有关。。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"Q2:请问OLAP场景下都有哪些存储空间和访问效率优化方法?"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"A2:"},{"type":"text","text":"首先看一下OLAP场景你是用哪一种引擎,是哪一种存储,像我们现在用的比较多的就是Presto,我们主要是用它的压缩或者是HyperLogLog,这个是Spark和Presto都有的,但是不通用。我们是在Spark上把Presto HyperLogLog的类进行重构,所以说用Spark写入,然后用Presto去做 HyperLogLog,这样其实也是用一种预计算或是把储存的预计算和想要的结果数据场景来实现存储空间的优化,访问效率的话其实主要就是扫描数据量,第二就是SQL的优化。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"我们主要是关注,根据这些SQL的explain去看它到底是慢在哪里,是慢在读取数据这一步,Presto的SOURCE这一步有分schedule和plan,还是慢在HMS一步,像我们最近做的就是把HMS搞组件分离。Presto这边的话,我们有etl,写这些的话是写主库的HMS,读的话是读从库的HMS,这样的话当你写的时候就不会影响读的效率,adhoc写的就非常快,所以说这个问题本身还是很大的,每一个细节还值得深挖。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"Q3:请问Kudu和CK的区别在哪?"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"A3:"},{"type":"text","text":"CK本身是带了一套非常强大的查询引擎,Kudu据我所知,要不就是用 Impala.,要不就是用Presto等引擎去查。CK本身还是一个单机的储存模式,其实是用分布式表的概念,把这些单机的数据放在一起,然后去做聚合。它主要用的还是单机计算的效果。Kudu的话本身更多地偏向于存储性能,偏向于实时数仓,像update、upset这些。像人群Bit map场景,我们可能只能用CK这一套,用它的储存和算法,用它Bit map的语法function,这时候用Kudu就不太合适了。选型的时候会根据不同的业务场景来进行不同的架构选择。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"Q4:请问ClickHouse和Flink如何保证一致性?"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"A4:"},{"type":"text","text":"我们这边的话,是Kafka首先会保留三天的数据,包括offset会自动保存。我们本身是实时这边一套,离线那边一套,我们会去监控离线那边五分钟的表,然后和ClickHouse做数据质量校验,如果发现数据不同,会看一下差异在哪些地方。在ClickHouse这边也会有分区把数据删了做一个重建,目前看来数据还是比较准确的。大部分数据不准确是在Waterdrop中,但是用FlinkSQL写入的话,现在看来Flink Connector的性能和准确性还是够的。另外我们已经做好了跟离线活动表的JOIN,这个也可以用来判断。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"Q5:请问Kudu在唯品会还有使用场景么?"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"A5:"},{"type":"text","text":"我们Kudu主要是做了订单数据,打点数据曝光数据很少上去,只有一定的曝光数据上去,我们Kudu的话机器不是特别多,只有不到20台,三个Kudu master,其他都是TABLET,主要是把数据通过Flink落到Kudu里面,然后用Presto来查,用Presto把kudu跟一些其他的Hive或MySQL的一些维表来JOIN,基本上就是订单和很少的一部分曝光场景在用。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"Q6:请问存算分离有什么方案?"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"A6: "},{"type":"text","text":"存分离的话,现在用Juice FS非常合适,大家可以去看一下,这个是国内的开源的项目,他们也非常愿意跟ClickHouse来做这件事情。第一个情况就是把这种分布式的存储,直接当成本地存储的挂载,通过这种方式来实现。第二个就是更深度定制,就是通过写入文件流或者是那种outsteam流去改一些CK的源码,来对接FS,直接把Juice FS这种文件系统的FS类拿过来,直接在ClickHouse源码里面去用。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"如果刚开始不是深度使用的话,可以用第一种挂载方案来试一下。第二,如果有源码修改的能力,有意愿的公司可以去试一下。主要是在FS、Stream这种文件流的写入和读取这一块要改一下源码。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"Q7:请问ClickHouse 和Kylin 如何选择?"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"A7: "},{"type":"text","text":"ClickHouse 和Kylin 差别还是比较大的。首先kylin是通过Hive预计算以后,把数据放到Hbase里面,在最新的Kylin 4.0的话,它是用Spark去做一些事情。ClickHouse它本身是用C写的,所以说他整个在CPU的SIMD这个层面,是有优势的。而且ClickHouse数据就存在那里,你会用不同的规则,不同的汇聚条件,更多的是根据更灵活的条件做查询adhoc的场景。像Kylin的话,我们更多的愿意把它作为固定报表或者预计算的场景,两个的使用方式可能还是不太一样的。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"Q8:请问多表关联性能如何?"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"A8: "},{"type":"text","text":"刚才我们看到很多表,如果你是用分布式表JOIN分布式表,不用我们这种分桶方案,它的JOIN性能是不行的,因为他的理论是把所有分布式表的数据,拿到一台单机上来做JOIN,这个效果是不好的。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"但是像我跟大家分享的,我就是通过一些具体的主键ID或者其他东西,把它进行分桶,就是a表、b表、c表三个表JOIN,我们把三个表的某一个表的user这个字段都是按照一定规则落到这台机器,本地做一个JOIN,10台机器都把它本地化做起来,最后再通过整个分布式表,把他们放在一起。这样子就形成了一个多表JOIN,就是刚才我分享的,大家可以看一看PPT里面说用分桶来做这种JOIN,如果是分桶不了的话,你多表分布式JOIN的话,性能是比较一般的。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"Q9:请问ClickHouse可不可以解读Kafka?"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"A9: "},{"type":"text","text":"可以,但是我们Flink里面还做了很多事情,比如数据处理,我们是通过Flink SQL做的,包括关联维表,关联Redis维表这种你直接链kafka引擎是做不了的事情。我们是有这种需求,就会去做这样的事情,能在Flink里面实现实时维表的这些功能的部分就在Flink里面做掉,减少ClickHouse这一端更多的关于维表关联和其他的工作。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"Q10:请问CK的并发不好,如何解决2c场景?"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"A10: "},{"type":"text","text":"目前解决方式就是加机器,用不同的chproxy连到不同的机器上,然后再用不同的chproxy去扛,chproxy最终是会平均打到这么多台机器上,所以你机器越多,它并发的性能越好。但是它有一个瓶颈,就是如果2c的话,看你们公司的整个场景是如何的,如果是像淘宝这样的2c,我觉得是不太合适的。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"Q11:请问Doris有什么缺点,才采用ClickHouse替换?"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"A11: "},{"type":"text","text":"目前解决方式就是加机器,用不同的chproxy连到不同的机器上,然后再用不同的chproxy去扛,chproxy最终是会平均打到这么多台机器上,所以你机器越多,它并发的性能越好。但是它有一个瓶颈,就是如果2c的话,看你们公司的整个场景是如何的,如果是像淘宝这样的2c,我觉得是不太合适的。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"Q12:请问Doris有什么缺点,才采用CK替换?"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"A12: "},{"type":"text","text":"我们替换并不是说Doris有什么缺点,更多的是ClickHouse有优点。像Flink这种维表的场景,在ClickHouse里面用大宽表JOIN的场景,包括后来Bit map场景,我们是根据场景需要用到ClickHouse,而且也不太想多维护的原因才用ClickHouse替换,因为Doris是能实现的ClickHouse也能够实现,比如指标等。我们现在的监控日志用ClickHouse也能做,像有些ClickHouse能做,但Doris做不了,比如Bit map。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"Doris本身引擎没有什么问题,是因为有更好更实用的场景,ClickHouse能做更多的事情,所以我们也是基于这些考虑加过去ClickHouse,把原来的一些东西都写到ClickHouse上去也可以减少维护成本。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"嘉宾介绍:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"王玉,"},{"type":"text","text":"唯品会实时平台OLAP团队负责人。负责唯品会Presto、ClickHouse、Kylin、Kudu等OLAP组件的开源修改、组件优化和维护,业务使用方案支持和指引等工作。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"本文转载自:dbaplus社群(ID:dbaplus)"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"原文链接:"},{"type":"link","attrs":{"href":"https:\/\/mp.weixin.qq.com\/s\/JNUPz9G7cxNxTVUwvIi1nQ","title":"xxx","type":null},"content":[{"type":"text","text":"唯品会翻牌ClickHouse后,实现百亿级数据自助分析"}]}]}]}
發表評論
所有評論
還沒有人評論,想成為第一個評論的人麼? 請在上方評論欄輸入並且點擊發布.
相關文章