Kylin 在贝壳的性能挑战和 HBase 优化实践

{"type":"doc","content":[{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"Kylin 在贝壳的使用情况介绍"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.infoq.cn\/resource\/image\/eb\/0b\/ebb1c80c84c1c260dc40eef84753b90b.png","alt":null,"title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"Kylin从2017年开始作为贝壳公司级OLAP引擎对外提供服务, "},{"type":"text","marks":[{"type":"strong"}],"text":"目前有100多台Kylin实例;有800多个Cube;有300多T的单副本存储;在贝壳 Kylin 有两套HBase集群,30多个节点,Kylin每天的查询量最高2000+万"},{"type":"text","text":" 。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"我们负责 Kylin同事张如松在2018年Kylin Meetup上"},{"type":"link","attrs":{"href":"http:\/\/mp.weixin.qq.com\/s?__biz=MzAwODE3ODU5MA==&mid=2653078369&idx=1&sn=255f18ed718912fda53cabdd50afdd7d&chksm=80a4bd90b7d33486036c8dbe3a84df5cb638eb532057940cca7358a3f6f8893696b6d0680a6b&scene=21#wechat_redirect","title":"","type":null},"content":[{"type":"text","text":"分享过Kylin在贝壳的实践"}]},{"type":"text","text":",当时每天最高请求量是100多万,两年的时间里请求量增加了19倍;我们对用户的查询响应时间承诺是3秒内的查询占比要达到99.7%,我们最高是达到了99.8%。在每天2000+W查询量的情况下,Kylin遇到很多的挑战,接下来我将为大家介绍一下我们遇到的一些问题,希望能给社区的朋友提供一些参考。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"Kylin HBase优化"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"heading","attrs":{"align":null,"level":3},"content":[{"type":"text","text":"表\/Region不可访问"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"1)现象"},{"type":"text","text":":"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.infoq.cn\/resource\/image\/b9\/88\/b9e5437203c8bd7f2ccfaa8718315088.png","alt":null,"title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"凌晨构建Cube期间,会出现重要表的某个region不可访问导致构建失败的情况,右上角的图是HBase的meta表不可访问的日志;白天查询时也有部分查询因为数据表某个Region不可访问导致查询超时的情况,右下角的图是查询数据表Region超时的日志;另外一个现象是老的Kylin集群Region数量达到16W+,平均每台机器上1W+个Region,这导致Kylin HBase集群建表和删表都非常慢,凌晨构建会出现建表卡住的现象,同时清理程序删除一张表需要三四分钟的时间,面对这样的情况,我们做了一些改进。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"2)解决方案"},{"type":"text","text":":"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.infoq.cn\/resource\/image\/c9\/b6\/c9d18f074c626038d078c4f189eyy1b6.png","alt":null,"title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"删除无用表减少Region。"},{"type":"text","text":" 通过刚才的介绍HBase集群平均每台机器上1W+个Region,这对于HBase来说是不太合理的,另外由于删除一张表需要三四分钟的时间,清理程序也执行的异常缓慢,最后我们不得不使用了一些非常规手段删除了10W+个Region。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"缩短清理周期,"},{"type":"text","text":" 从之前的一周清理一次HBase表到每天清理一次,除此之外Kylin会每周合并一次Cube来减少HBase表数量从而减少Region数量,最终16W+的Region删到了不到6万,至此我们解决了一部分问题,还会存在构建时重点表的Region不可访问的情况。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"将HBase从1.2.6升到1.4.9,"},{"type":"text","text":" 主要是想要利用RSGroup的能力来做重点表和数据表的计算隔离;"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"关闭HBase自动Balance的功能,"},{"type":"text","text":" 仅在夜间业务低峰期开启几个小时;"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"使用HBase自带的Canary定期的检测Region的可能性,"},{"type":"text","text":" 如果发现某些Region不可用马上发送告警"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"使用RSGroup单独隔离重点表来屏蔽了计算带来干扰,"},{"type":"text","text":" 这些重点表包括HBase Meta表、Acl表、Namespace表、Kylin_metadata表。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"经过了这一系列的改进之后表\/Region不可访问的问题基本上解决了,现在基本上没有再出现Region不可访问的情况。解决这个问题我们花费了很长时间,经历了升级重启和删了大量的表后,我们遇到了另外一个问题。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"heading","attrs":{"align":null,"level":3},"content":[{"type":"text","text":"RS数据本地性提升"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"1)现象"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.infoq.cn\/resource\/image\/f6\/4e\/f60ec60a57a6b809aec55539ea66d44e.png","alt":null,"title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"Kylin HBase集群的RegionServer数据本地性非常低,只有20%不能很好的利用HDFS短路读,这样对查询响应时间产生了一定影响"},{"type":"text","text":" ,我们三秒内的查询占比出现了下降。了解HBase的朋友都知道如果RS的数据本地性较低,有一种解决方案就是做Compact把数据拉到RegionServer对应的Datanode上,考虑到大规模的做Compact会对查询造成很大影响,我们没有这么做,跟Kylin的同学沟通后发现绝大多数的Cube每天会使用最新构建的表,查旧表的可能系不是特别大,所以提升每天新建表的数据本地性就可以了, 具体我们是这样做的。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"2)解决方案"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.infoq.cn\/resource\/image\/0b\/c7\/0b155ab3a4ac54a4e9989ee76018bdc7.png","alt":null,"title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"我们发现Kylin用到的是HFileOutputFormat3跟HBase的HFileOutputFormat2是有一些差别的,我们在HFileOutputFormat3里面加入了HBASE—12596的特性,这个特性主要是生成HFile的时候会写一份数据的副本到Region所在的RegionServer对应的Datanode上。下面是一些代码细节,程序会先取到这个Region所在的机器,然后再获取Writer时,把这台节点的信息传递过去,最后写数据的时候会写一个副本到这个Region对应的Datanode上, "},{"type":"text","marks":[{"type":"strong"}],"text":"这样逐渐我们的数据稳定性就提上来了,现在看了一下基本上在80%多左右。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"heading","attrs":{"align":null,"level":3},"content":[{"type":"text","text":"RegionServer IO瓶颈"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"1)现象"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"我们发现在构建早高峰时,HBase响应时间的P99会随之升高的,通过监控发现是由于 RegionServer机器的IO Wait偏高导致的。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"还有一种场景是用户构建时间范围选择过大,导致网卡被打满,之前有个用户构建了一年的数据,还有构建三四个月数据,这两种情况都会造成RegionServer机器IO出现瓶颈导致Kylin查询超时。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.infoq.cn\/resource\/image\/c3\/6a\/c30b8214d926e83d42401fa6633f186a.png","alt":null,"title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"上图是Cube数据构建流程,首先HBase集群和公司大的Hadoop集群是独立的两套HDFS集群,每天构建是从大集群的HDFS去读取Hive的数据,构建任务直接输出HFile到HBase的HDFS集群,最后执行Bulkload操作。由于HBase HDFS集群机器较少,构建任务写数据过快导致DataNode\/RegionServer机器IO Wait升高,怎么解决这个问题呢?"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"2)解决方案"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.infoq.cn\/resource\/image\/5e\/08\/5e9f5dd067d7350b7276ae938fe47308.png","alt":null,"title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"我们想用HBase比较常用的方式就是DistCp来解决这个问题,左下角这张图是我们的改进方案,就是我们设置构建任务的输出路径到Hadoop的大集群,而不是到HBase的HDFSB及群,再通过DistCp限流的拷贝HFile到HBase的HDFS集群,最后做Bulkload操作。之前提到我们有800多个Cube,并不是所有的Cube都需要走这套流程, "},{"type":"text","marks":[{"type":"strong"}],"text":"因为限流拷贝的话肯定会影响数据的产出时间,我们设计了针对Project或者是Cube设置开启这个功能,"},{"type":"text","text":" 我们通常会对数据量比较大的Cube开启DistCp限流拷贝,其他Cube还是使用之前的数据流程。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"中间这个图是一个构建任务的截图,第一步是生成HFile,第二步是DistCp,最后再Bulkload,这个功能我们新增了一些配置项,比如说第一个是否开启DictCp,第二个是每个Map带宽是多少,再有就是最大有多少Map。通过这个功能我们基本上解决了构建高峰IOWait会变高的情况。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"heading","attrs":{"align":null,"level":3},"content":[{"type":"text","text":"慢查询治理–超时定位链路优化"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"1)现象"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.infoq.cn\/resource\/image\/7f\/5d\/7f5bda792c6f08db277bc5326c428e5d.png","alt":null,"title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"慢查询治理遇到的第一个问题就是超时定位链路特别长。我们收到Kylin报警时首先会想知道:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"bulletedlist","content":[{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"是哪个Cube超时了?"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"Cube对应的HBase表是哪个?"}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"是Region不可用还是查询方式变了?"}]}]}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"之前提到有一段时间经常出现Region不可用的情况,一旦出现超时我们查询链路是什么样的呢? 可能我们先去看HBase日志里看有没有Deadline has passed的警告日志,有这种报警的话我们会拿到它的QueryID,然后去ES或者是Mysql里面去查询这个QureyID对应的Cube信息和SQL,知道这些信息之后,还需要去到超时的Kylin节点上去查询日志,从日志里面才能找到是查询哪个HBase表的哪个Region超时,然后再去判断是不是Region不可用了,或者是查询方式改变。 "},{"type":"text","marks":[{"type":"strong"}],"text":"这个链路非常长,每次都需要HBase和Kylin的同学一块儿来查。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"2)解决方案"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.infoq.cn\/resource\/image\/4f\/2d\/4f1d23dc4c7e4c9d0065ee20661cbd2d.png","alt":null,"title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"针对这个痛点给我们做了如下改进:我们直接把Cube信息和Region的信息打在HBase的日志里。"},{"type":"text","text":" 中间这个黑色的部分就是HBase的日志,我们可以看到这个查询已经终止了,Cube的名字是什么,Region的名字是什么,下面的白色部分是通过天眼系统配置的报警信息,这个报警是直接报到企业微信的,我们能马上知道这个Deadline涉及的Cube和Region,能马上做一个检测是这个表不可用了还是查询方式改变了,大大节省了定位问题的时间。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.infoq.cn\/resource\/image\/6e\/34\/6e0dc3efbf76833a48e2c46byy6a1134.png","alt":null,"title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"这个是为了解决超时链路过长我们对Kylin做的一些代码改动,首先我们在Protobuf文件中加了一个segmentName字段,然后在协处理器类中获取了Region名字,在协处理器调用checkDeadLine方法检查时传入segmentName和regionName,最后日志会打印出来segment名称和Region的信息。 "},{"type":"text","marks":[{"type":"strong"}],"text":"这个功能已经反馈给社区了,见:"},{"type":"link","attrs":{"href":"https:\/\/issues.apache.org\/jira\/browse\/KYLIN-4788","title":"","type":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"https:\/\/issues.apache.org\/jira\/browse\/KYLIN-4788"}]}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"heading","attrs":{"align":null,"level":3},"content":[{"type":"text","text":"慢查询治理-队列堆积定位"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"1)现象"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.infoq.cn\/resource\/image\/02\/y0\/02edd1f4901af485ebf1d11yya292yy0.png","alt":null,"title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"有一天我们发现Kylin HBase RegionServer队列堆积非常严重,RegionServer的P99的响应时间已经达到了10多分钟的级别,大家看右上角是HBase关于队列的监控情况,一些机器的堆积已将近3W。我们当时非常疑惑,因为Kylin和HBase之间RPC的超时时间是10秒,在10秒之后Kylin和HBase的连接都已经断开了,HBase到底处理什么查询,右下角是HBase RegionServer UI页面的截图,在这个截图里我们发现一些查询其实已经执行了快半个小时了,这半个小时是在执行什么呢?"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"2)解决方案"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.infoq.cn\/resource\/image\/41\/66\/41ab16c1c803f38b4b15fccf687e7f66.png","alt":null,"title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"我们当时的解决方案是去任务堆积的有队列推积的RegionServer上去看日志,通过查询开始时间结束时间做差值,找出查询时间最长的Top10的查询,通过QureyID匹配出Cube和具体的SQL,最终我们发现一般这种查询时间特别长都是因为查询方式的变化与原来Cube设置的Rowkey不相符导致了全表扫描。最终的方案其实查出来之后Kylin的同学会去调整Cube的Rowkey设置,然后重新构建。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"这种离线的定位的方式其实不是特别好,一开始我们想基于日志做实时报警,这样能帮助我们更快的发现和定位问题,但是后来想想这也是比较被动的一种方式,这只是发现问题,不能彻底解决这个问题。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"我们后来想的一个方案是SQL作执行之前可以为SQL打分,评分过低的就拒绝执行,这个功能还没有实现。有这个想法是因为当我们找到SQL信息后, Kylin的同学是可以看出来查询是不是不合理,是不是跟Rowkey设置不符,我们想以后做这样一个功能,把人为判断的经验程序化,在SQL没有执行之前就把潜在的风险化解掉。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"heading","attrs":{"align":null,"level":3},"content":[{"type":"text","text":"慢查询治理 – 主动防御"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.infoq.cn\/resource\/image\/01\/56\/01ec3349e16dec165a1f7a3a5094b456.png","alt":null,"title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"慢查询治理还有一个举措是Kylin的主动防御。我们发现有大量的耗时较长的查询会占据请求队列,影响其他查询的响应时间。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"解决方案是通过Kafka收集Kylin的日志,经过天眼系统实时清洗后写入Druid,通过Druid做统计分析,如果某个业务方\/Cube在一定时间内超过3秒的查询到达一定的阀值,主动防御系统会把这个业务方\/Cube的查询超时时间设置为1s,让较慢的查询尽快超时,避免对正常查询的干扰。右边就是我们整个流程的一个架构图,主动防御对慢查询治理有一定的作用,但全表扫描的情况还是没有办法完全避免。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"heading","attrs":{"align":null,"level":3},"content":[{"type":"text","text":"重点指标查询性能保障"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"1)现象"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.infoq.cn\/resource\/image\/68\/42\/681d0c9bed73a55b27ea2a797f484042.png","alt":null,"title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"另外一个举措是对重点指标的查询性能保障。早期HBase集群只有HDD一种存储介质,重点指标和普通指标都存储在HDD上,非常容易受到其他查询和HDD性能的影响,重点指标响应时间无法保障。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"2)解决方案"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"我们的解决方案是利用了HDFS的异构存储,给一部分DataNode插上SSD,将重点Cube的数据存储在SSD上,提升吞吐的同时与普通指标数据做存储隔离,这样就既避免了受到其他查询的影响,也可以通过SSD的性能来提升吞吐。引入SSD只是做了存储的隔离,还可以通过RSGroup做计算隔离,但由于重点指标的请求量占到了集群总请求量的90%以上,单独隔离出几台机器是不足以支撑这么大请求量的,所以最终我们并没有这么做。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"最后是我们得出的一些经验, "},{"type":"text","marks":[{"type":"strong"}],"text":"SSD对十万以上扫描量查询性能提升40%左右,对百万以上扫描量性能提升20%左右。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.infoq.cn\/resource\/image\/cb\/e2\/cb3c31d8322ac732f9895e9ae1a713e2.png","alt":null,"title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"这是我们用SSD做的一些改动,数据存储在SSD是可针对Cube设置的。我们可以指定哪些Cube存在SSD上,构建任务建表时会读取Cube的配置,按照Cube配置来设置HBase表的属性和该表的HDFS路径存储策略。在DistCp拷贝之前也要先读取Cube的配置,如果Cube的配置是ALL_SSD,程序需要设置DistCp的目的路径存储策略为ALL_SSD,设置完成后再进行数据拷贝。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"这样做的目的是为了避免Bulkload后数据还需要从HDD移动到SSD,移动数据会带来什么影响呢?"},{"type":"text","text":" 我们发现如果不先设置DistCp目的路径存储策略的话,数据会被先写到HDD上,Bulkload后由于表的HDFS存储路径存储策略是ALL_SSD,Hadoop的Mover程序会把数据从HDD移动到SSD,当一个数据块的三个副本都移动到SSD机器上后,RegionServer不能从其缓存该数据块的三台DataNode上读取到数据,这时RegionServer会随机等待几秒钟后去向NameNode获取该数据块最新的DataNode信息,这会导致查询响应时间变长,所以需要在DistCp拷贝数据之前先设置目的路径的存储策略。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"heading","attrs":{"align":null,"level":3},"content":[{"type":"text","text":"JVM GC瓶颈"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"1)现象"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.infoq.cn\/resource\/image\/07\/d2\/07242fb1bb9647c0e7ea494343f707d2.png","alt":null,"title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"我们遇到的下一个问题就是RegionServer的JVM GC瓶颈。在查询高峰期Kylin HBase JVM Pause报警特别频繁,从这张图里面可以看到有一天已经超过1200个。Kylin对用户的承诺是三秒内查询占比在99.7%,当时已经达到了99.8%,于是我们就想还需要优化哪一块能让3秒内查询占比达到99.9%,这个JVM Pause明显成为我们需要改进的一个点,大家做JAVA基本都知道JVM怎么去优化呢?"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"2)解决方案"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.infoq.cn\/resource\/image\/af\/2a\/af322fb83b6220d6b7fc4ddd91ba102a.png","alt":null,"title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"首先可能会想到调整参数,其次就是换一种GC算法,我们采用了后者。"},{"type":"text","text":" 之前我们用的是JDK1.8,GC算法是G1,后来我们了解到JDK11推出了一个新的算法叫ZGC。最终,我们把JDK从1.8升级到JDK13,采用ZGC替代了原有的G1。右上角的图是ZGC上线后,这套集群RegionServer 的JVM Pause的次数几乎为0,右下角的GC时间也是相比之前降低特别多。ZGC有一个设计目标是Max JVM Pause的时间在几毫秒,这个效果当时看着是比较明显的,左边的图是天眼系统的报警的趋势图,ZGC上线后JVM Pause报警数量明显降低。关于ZGC我本月会发一篇文章介绍ZGC算法和我们做了哪些改动来适配JDK13,这里就不详细介绍了。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"作者介绍"},{"type":"text","text":":"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"冯亮,贝壳找房高级研发工程师。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"本文转载自公众号apachekylin(ID:ApacheKylin)。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"原文链接"},{"type":"text","text":":"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"link","attrs":{"href":"https:\/\/mp.weixin.qq.com\/s?__biz=MzAwODE3ODU5MA==&mid=2653081715&idx=1&sn=38e7a698feaa8889a37eb65615a0d69b&chksm=80a4ae82b7d3279489ba780ac2ce63f04938a7c95840a45d034648871333e03d734476f93ef3&token=1340822333&lang=zh_CN#rd","title":"","type":null},"content":[{"type":"text","text":"Kylin 在贝壳的性能挑战和 HBase 优化实践"}]}]}]}
發表評論
所有評論
還沒有人評論,想成為第一個評論的人麼? 請在上方評論欄輸入並且點擊發布.
相關文章