HBase Operations Failure Case Analysis

When operating an HBase cluster in production, you will sooner or later run into problems such as a RegionServer crashing unexpectedly, write latency climbing, or writes failing outright. Drawing on the author's experience, this chapter lists several problems commonly seen in real production environments and walks through the basic approach to investigating each of them. It also reviews the most important logs in an HBase system, and finally summarizes how to troubleshoot with monitoring, logs, and other tools, distilling a repeatable routine that readers can put into practice.

## RegionServer crashes

### Case 1: A long GC pause crashes the RegionServer

A long Full GC is the most common cause of RegionServer crashes. Problems of this kind can be investigated along the following lines.

**Symptom:** an alert reports that the RegionServer process has exited.

**1. Locating the cause of the crash**

**Step 1:** monitoring usually reveals nothing here, so go straight to the log of the affected RegionServer and search for two kinds of keywords: "a long garbage collecting pause" or "ABORTING region server". In the long Full GC scenario, the first keyword turns up entries like:

```
2019-06-14T17:22:02.054 WARN [JvmPauseMonitor] util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 20542ms
GC pool 'ParNew' had collection(s): count=1 time=0ms
GC pool 'ConcurrentMarkSweep' had collection(s): count=2 time=20898ms
2019-06-14T WARN [regionserver60020.periodicFlusher] util.Sleeper: We slept 20936ms instead of 100ms, this is likely due to a long garbage collecting pause and it's usually bad, see http://hbase.apache.org/book.html#trouble.rs.runtime.zkexpired
```

**Step 2:** the CMS collector typically produces a severe Full GC in two scenarios: (1) concurrent mode failure and (2) promotion failure. The GC log at the corresponding time shows the first:

```
2017-06-14T17:22:02.054+0800: 21039.790: [Full GC 2017-06-14T17:22:02.054+0800: 21039.790: [CMS2017-06-14T17:22:02.054+0800: 21041.477: [CMS-concurrent-mark: 1.767/1.782 secs] [Times: user=14.01 sys=0.00, real=1.79 secs] (concurrent mode failure): 25165780K->25165777K(25165824K), 18.4242160 secs] 26109489K->26056746K(26109568K), [CMS Perm : 48563K->48534K(262144K)], 18.4244700 secs] [Times: user=28.77 sys=0.00, real=18.42 secs]
2017-06-14T17:22:20.473+0800: 21058.215: Total time for which application threads were stopped: 18.4270530 seconds
```

At this point we can be fairly sure that a CMS collection that hit concurrent mode failure caused the long application pause.

**2. Analyzing the cause of the crash**

Cause-and-effect analysis: a CMS GC that falls into concurrent mode failure produces a long stop-the-world pause, during which the application running on the JVM is completely suspended. The session between the RegionServer and ZooKeeper then times out, and once it does, ZooKeeper notifies the Master, which treats the RegionServer as dead and removes it from the cluster.

What is a concurrent mode failure, and why does it cause such a long pause? Suppose HBase is in the middle of a CMS collection of the old generation, and while the collection is running a batch of objects is promoted from the young generation; unfortunately, the old generation no longer has room for them. In this situation the CMS collector abandons its concurrent work, the JVM enters a stop-the-world pause, and the collection falls back to a single-threaded full collection that walks the live objects of the entire heap, compacts the old generation, and frees all remaining space. Unsurprisingly, the whole process takes a long time.

**3. Solution**

Since the root of the problem is that the old generation was not collected in time, letting the CMS collector start earlier will avoid this situation with high probability.

The JVM provides the parameter -XX:CMSInitiatingOccupancyFraction=N to control when a CMS collection is triggered: N is the percentage of the old generation that may be occupied (used old-generation memory as a share of total old-generation memory) before a collection starts. Lowering the value, for example to 60, makes collections start earlier.
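As a concrete illustration, the snippet below sketches how such options are usually passed to the RegionServer JVM through hbase-env.sh; the GC log path is a placeholder, and the exact threshold should be tuned per cluster rather than copied.

```bash
# hbase-env.sh -- a minimal sketch; values and paths are illustrative only
export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS \
  -XX:+UseConcMarkSweepGC \
  -XX:CMSInitiatingOccupancyFraction=60 \
  -XX:+UseCMSInitiatingOccupancyOnly \
  -XX:+PrintGCDetails -XX:+PrintGCDateStamps \
  -Xloggc:/var/log/hbase/gc-regionserver.log"
# UseCMSInitiatingOccupancyOnly stops the JVM from overriding the 60% threshold
# with its own heuristics, so old-generation collections really do start earlier
# and concurrent mode failure becomes much less likely. Keeping GC logs around
# also makes incidents like the one above far easier to diagnose.
```

Moving the BlockCache off heap (the bucket cache) reduces old-generation pressure as well, which is one reason the note below recommends checking whether off-heap mode is enabled.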
**When handling a case like this, it is also worth checking whether the BlockCache is running in off-heap mode, whether the JVM startup parameters are reasonable, and whether heap pressure could be relieved by making proper use of off-heap memory.**

### Case 2: A serious system bug crashes the RegionServer

**A scan over large fields crashes the RegionServer**

**Symptom:** the RegionServer process exits.

**1. Locating the cause of the crash**

**Step 1:** the logs. Check the GC-related entries first; if nothing turns up, search for the keyword "abort", which leads to the suspicious line "java.lang.OutOfMemoryError: Requested array size exceeds VM limit".

**Step 2:** confirm against the source code. Starting from the FATAL-level log entry and its stack trace, locate the code (or search the web for the keywords). This confirms that the exception is thrown while scan results are being assembled for return to the client: the result is so large that the requested array exceeds the maximum size the JVM allows (Integer.MAX_VALUE - 2).

**2. Cause-and-effect analysis**

Because of a bug in HBase itself, in certain scenarios an overly large scan result causes the JVM to throw OutOfMemoryError while allocating the array, and the RegionServer crashes.

**3. Root cause analysis**

On one hand this is an HBase bug: it should never request an array larger than the JVM's limit. On the other hand, it can also be seen as improper usage on the business side:

- The table has very wide rows and the scan places no limit on the number of columns returned, so a single row containing a huge number of columns can exceed the array limit.
- Individual KeyValues are very large and the scan places no limit on the size of the returned result, so the total result size exceeds the array limit.

**4. Solution**

Impose a limit on the server side via hbase.server.scanner.max.result.size, and/or limit the size of returned results on the client side when issuing the scan (scan.setMaxResultSize).
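For the client side, a hedged sketch in Java is shown below; the table name and the concrete numbers are placeholders chosen for illustration, and sensible values depend on your row widths and cell sizes.

```java
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class BoundedScanExample {
    public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Table table = conn.getTable(TableName.valueOf("demo_table"))) { // hypothetical table
            Scan scan = new Scan();
            scan.setMaxResultSize(2L * 1024 * 1024); // cap bytes returned per scan RPC
            scan.setBatch(100);                      // cap columns returned per row per call (wide rows)
            scan.setCaching(50);                     // rows fetched per RPC, keeps each response small
            try (ResultScanner scanner = table.getScanner(scan)) {
                for (Result r : scanner) {
                    System.out.println(Bytes.toString(r.getRow()));
                }
            }
        }
    }
}
```

The server-side counterpart, hbase.server.scanner.max.result.size in hbase-site.xml, enforces a similar cap on each scan response regardless of what the client asks for.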
## HBase write anomalies

**Case: shrinking the HDFS cluster causes some writes to fail**

**Symptom:** the business reports that some write requests are timing out. At the time, several DataNodes of the HDFS cluster underneath HBase are being decommissioned.

**1. Locating the cause of the write failures**

**Step 1:** in principle, a graceful decommission should not be noticeable to the business layer above.

**Step 2:** check the cluster monitoring for the HBase nodes; IO load on the nodes is clearly elevated while the decommission is running.

![Node IO load during the DataNode decommission](https://static001.geekbang.org/infoq/80/80a0c3ec51f605bf345853ddda7f2ed8.png)

The preliminary judgment is that the write failures are related to the high IO load during the decommission.

**Step 3:** look at the RegionServer log around the relevant time and search for "Exception". Two key lines stand out:

```
2020-04-24 13:03:16,685 WARN [ResponseProcessor for block BP-1655660372-10.x.x.x-1505892014043:blk_1093094354_19353580] hdfs.DFSClient: DFSOutputStream ResponseProcessor exception for block BP-1655660374-10.x.x.x-1505892014043:blk_1093094354_19353580 java.io.IOException: Bad response ERROR for block BP-1655660372-10.x.x.x-1505892014043:blk_1093094354_19353580 from datanode 10.x.x.x:50010 at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer$ResponseProcessor.run(DFSOutputStream.java:828)
2020-04-24 13:03:16,700 INFO [sync.0] wal.FSHLog: Slow sync cost: 13924 ms, current pipeline: [10.x.x.x:50010, 10.x.x.x:50010]
```

The HLog sync is taking far too long (13,924 ms), so write responses are blocked.

**Step 4:** dig one level deeper into the DataNode logs; flushing blocks to disk is slow, and there are errors:

```
2020-04-24 13:03:16,686 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: newsrec-hbase10.dg.163.org:50010:DataXceiver error processing WRITE_BLOCK operation src: /10.x.x.x:55637 dst: 10.x.x.x:50010
```

**2. Analysis of the write failures**

- Decommissioning several DataNodes at the same time forces a large volume of block copies, which sharply increases bandwidth and IO pressure on every node in the cluster.
- Under that IO load the DataNodes write blocks to disk slowly, so HLog syncs in HBase time out; when the cluster is under heavy write pressure, writes back up and time out.

**3. Solution**

- Perform DataNode decommissions during the business's off-peak hours.
- Never decommission several DataNodes at once; retire them one at a time so that IO pressure does not spike over a short period.
## A problem-analysis approach for HBase operations

Production problems are a systems operations engineer's best teacher. Analyzing them accumulates troubleshooting techniques, deepens our understanding of how the system works, and even leads into areas we would otherwise never touch. It is like exploring an unknown problem in an unknown world: the further in you go, the more you see of what others never get to see. So when a production problem occurs, seize the chance and trace it to its root. It is no exaggeration to say that a large part of an engineer's core competence lies in the ability to locate and solve problems.

In fact, solving the problem is only the final result. From the moment an alert arrives and the problem is noticed until it is finally resolved, there are always three stages: locating the problem, analyzing it, and fixing it. Locating the problem means working from the symptom, with appropriate technical means, to the underlying trigger; analyzing the problem means laying out the whole chain of events in terms of how the system works; and fixing the problem depends on that analysis, applying a targeted or a global fix based on its conclusions.

**1. Locating the problem**

Finding what triggered the problem is the key to solving it. The basic flow of problem localization is shown below:

![Basic flow of problem localization](https://static001.geekbang.org/infoq/9f/9f0a51bb64129a42cdfac9f72c6acca5.png)

- Metric monitoring analysis. Many problems can be answered directly from the monitoring dashboards. For example, the business reports that read latency suddenly became very high at a certain point in time. The first reaction is to check whether system IO, CPU, or bandwidth show anything abnormal; if IO utilization became abnormally high at exactly that time, you can largely conclude that the read degradation was caused by it. IO utilization is not the root cause, but it is an important link in the chain, and the next step is to work out why it spiked at that moment.

A great many monitoring metrics are useful for problem localization. Broadly they fall into two categories: basic system metrics and business-level metrics. System metrics include IO utilization, CPU load, network bandwidth, and so on; business-level metrics include per-RegionServer read/write TPS, average read/write latency, request queue length and compaction queue length, MemStore memory usage, BlockCache hit ratio, and so on.

- Log analysis. For performance problems, monitoring metrics may be enough; for system anomalies, they often show nothing. The most important HBase logs here are the RegionServer log and the Master log; the GC logs, the HDFS logs (NameNode and DataNode), and the ZooKeeper log also help in particular scenarios.

Analyzing a log does not mean reading it from beginning to end. Search directly for keywords such as "Exception", "ERROR", or even "WARN", and combine the hits with the relevant time window (a small sketch of such a search follows this list).

- Asking for help online. After working through monitoring and logs, operators usually come away with something. Sometimes, though, we can see an "Exception" but do not understand what it means; that is the time to turn to the internet. Start with a search engine and the relevant log text: in most cases someone has already written about it, because a problem you hit has very likely been hit by others. If that yields nothing, ask on the specialist forums, such as stackoverflow, hbase-help.com, and the various HBase discussion groups. Finally, you can also write to the community mailing list and ask the project's developers.

- Source code analysis. After the problem is resolved, it is advisable to confirm the conclusion once more against the source code.
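As a small, hedged sketch of that kind of keyword-plus-time-window search, the commands below assume the RegionServer log lives under /var/log/hbase/ and use made-up timestamps; adjust both to your environment.

```bash
# Look for obvious trouble in the RegionServer log first.
grep -nE "ERROR|Exception" /var/log/hbase/hbase-regionserver.log | less

# Then narrow to the incident window (path and timestamps are placeholders).
grep -nE "^2020-04-24 13:0[0-9]" /var/log/hbase/hbase-regionserver.log \
  | grep -E "WARN|ERROR|Exception"

# The GC log, NameNode/DataNode logs, and ZooKeeper log often deserve the same treatment.
```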
**2. Analyzing the problem**

Solving an unfamiliar problem is like a journey into an unknown world. Locating the problem is the act of walking into that world: the further you go, the more you see and the more you learn. But if you do not carefully sort out the dazzling scenery along the way, you will be left speechless the moment someone asks what the place was actually like.

Problem analysis is the inverse of problem localization. It starts from the root cause, combines it with how the system works, and reasons forward step by step until it reproduces the abnormal behaviour that was observed. Laying this out clearly requires not only monitoring data and exception logs but, above all, an understanding of the system's internals. Looking back, only when you understand how the system works can you analyze a problem clearly.

**3. Fixing the problem**

If you can explain the whole story of a problem from beginning to end, a targeted fix usually follows with little effort; this is the most relaxed part of the whole exercise. There is no problem that cannot be fixed, and if one seems unfixable, it is because it has not yet been analyzed clearly.

*Reference: HBase: Principles and Practice (《HBase原理与实践》)*

---

*Author: **Xu Jiabin** | Operations implementation engineer at Growing*

*Focused on platform implementation, SLA management and tooling, and DevOps development*