NameNode abnormal exit analysis

NameNode log at the time of the abnormal exit:

2017-09-14 02:38:07,147 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Stopping services started for active state
2017-09-14 02:38:07,150 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Ending log segment 304457386
2017-09-14 02:38:07,150 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: NameNodeEditLogRoller was interrupted, exiting
2017-09-14 02:38:08,125 WARN org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Remote journal 10.117.210.216:8485 failed to write txns 304457637-304457638. Will try to write to this JN again after the next log roll.
org.apache.hadoop.ipc.RemoteException(java.io.IOException): IPC's epoch 77 is less than the last promised epoch 78
        at org.apache.hadoop.hdfs.qjournal.server.Journal.checkRequest(Journal.java:414)
        at org.apache.hadoop.hdfs.qjournal.server.Journal.checkWriteRequest(Journal.java:442)
        at org.apache.hadoop.hdfs.qjournal.server.Journal.journal(Journal.java:342)
        at org.apache.hadoop.hdfs.qjournal.server.JournalNodeRpcServer.journal(JournalNodeRpcServer.java:148)
        at org.apache.hadoop.hdfs.qjournal.protocolPB.QJournalProtocolServerSideTranslatorPB.journal(QJournalProtocolServerSideTranslatorPB.java:158)
        at org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocolProtos$QJournalProtocolService$2.callBlockingMethod(QJournalProtocolProtos.java:25421)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:587)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1026)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1642)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)

        at org.apache.hadoop.ipc.Client.call(Client.java:1411)
        at org.apache.hadoop.ipc.Client.call(Client.java:1364)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
        at com.sun.proxy.$Proxy14.journal(Unknown Source)
        at org.apache.hadoop.hdfs.qjournal.protocolPB.QJournalProtocolTranslatorPB.journal(QJournalProtocolTranslatorPB.java:167)
        at org.apache.hadoop.hdfs.qjournal.client.IPCLoggerChannel$7.call(IPCLoggerChannel.java:385)
        at org.apache.hadoop.hdfs.qjournal.client.IPCLoggerChannel$7.call(IPCLoggerChannel.java:378)
        at java.util.concurrent.FutureTask.run(FutureTask.java:262)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
.........
2017-09-14 02:38:33,152 WARN org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Aborting QuorumOutputStream starting at txid 304457386
2017-09-14 02:38:33,156 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
2017-09-14 02:38:33,157 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG: 
Log of the active-side ZKFailoverController during the incident window:
2017-09-14 02:37:55,678 INFO org.apache.hadoop.ha.ActiveStandbyElector: Session connected.
2017-09-14 02:37:55,681 INFO org.apache.hadoop.ha.ActiveStandbyElector: Checking for any old active which needs to be fenced...
2017-09-14 02:37:55,692 INFO org.apache.hadoop.ha.ActiveStandbyElector: Old node exists: 0a0667696f6e6565120b6e616d656e6f64653137351a0e746573742d737370732d732d303220d63e28d33e
2017-09-14 02:37:55,692 INFO org.apache.hadoop.ha.ActiveStandbyElector: But old node has our own data, so don't need to fence it.
2017-09-14 02:37:55,692 INFO org.apache.hadoop.ha.ActiveStandbyElector: Writing znode /hadoop-ha/gionee/ActiveBreadCrumb to indicate that the local node is the most recent active...
2017-09-14 02:37:59,024 INFO org.apache.zookeeper.ClientCnxn: Client session timed out, have not heard from server in 3334ms for sessionid 0x35e67efe4bc127c, closing socket connection and attempting reconnect
2017-09-14 02:37:59,783 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server test-ssps-s-02/10.117.68.10:2181. Will not attempt to authenticate using SASL (unknown error)
2017-09-14 02:37:59,783 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to test-ssps-s-02/10.117.68.10:2181, initiating session
2017-09-14 02:37:59,784 INFO org.apache.zookeeper.ClientCnxn: Session establishment complete on server test-ssps-s-02/10.117.68.10:2181, sessionid = 0x35e67efe4bc127c, negotiated timeout = 5000
2017-09-14 02:37:59,785 WARN org.apache.hadoop.ha.ActiveStandbyElector: Exception handling the winning of election
org.apache.zookeeper.KeeperException$BadVersionException: KeeperErrorCode = BadVersion for /hadoop-ha/gionee/ActiveBreadCrumb
        at org.apache.zookeeper.KeeperException.create(KeeperException.java:115)
        at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
        at org.apache.zookeeper.ZooKeeper.setData(ZooKeeper.java:1266)
        at org.apache.hadoop.ha.ActiveStandbyElector$5.run(ActiveStandbyElector.java:959)
        at org.apache.hadoop.ha.ActiveStandbyElector$5.run(ActiveStandbyElector.java:956)
        at org.apache.hadoop.ha.ActiveStandbyElector.zkDoWithRetries(ActiveStandbyElector.java:980)
        at org.apache.hadoop.ha.ActiveStandbyElector.setDataWithRetries(ActiveStandbyElector.java:956)
        at org.apache.hadoop.ha.ActiveStandbyElector.writeBreadCrumbNode(ActiveStandbyElector.java:831)
        at org.apache.hadoop.ha.ActiveStandbyElector.becomeActive(ActiveStandbyElector.java:801)
        at org.apache.hadoop.ha.ActiveStandbyElector.processResult(ActiveStandbyElector.java:480)
        at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:546)
        at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:498)

Log of the standby-side ZKFailoverController during the incident window:

2017-09-14 02:37:50,963 INFO org.apache.hadoop.ha.ActiveStandbyElector: Session disconnected. Entering neutral mode...
2017-09-14 02:37:51,192 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to test-ssps-s-03/10.51.20.155:2181, initiating session
2017-09-14 02:37:51,209 INFO org.apache.hadoop.ha.ActiveStandbyElector: Session connected.
2017-09-14 02:37:54,645 INFO org.apache.hadoop.ha.ActiveStandbyElector: Session disconnected. Entering neutral mode...
2017-09-14 02:37:55,320 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to test-ssps-s-04/10.117.210.216:2181, initiating session
2017-09-14 02:37:55,334 INFO org.apache.hadoop.ha.ActiveStandbyElector: Session connected.
2017-09-14 02:37:58,787 INFO org.apache.hadoop.ha.ActiveStandbyElector: Session disconnected. Entering neutral mode...
2017-09-14 02:37:59,110 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to test-ssps-s-02/10.117.68.10:2181, initiating session
2017-09-14 02:37:59,113 INFO org.apache.hadoop.ha.ActiveStandbyElector: Session connected.
2017-09-14 02:37:59,778 INFO org.apache.hadoop.ha.ActiveStandbyElector: Checking for any old active which needs to be fenced...
2017-09-14 02:37:59,782 INFO org.apache.hadoop.ha.ZKFailoverController: Should fence: NameNode at test-ssps-s-02/10.117.68.10:8022
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:783)
        at com.sun.proxy.$Proxy13.transitionToStandby(Unknown Source)
        at org.apache.hadoop.ha.protocolPB.HAServiceProtocolClientSideTranslatorPB.transitionToStandby(HAServiceProtocolClientSideTranslatorPB.java:112)
        at org.apache.hadoop.ha.FailoverController.tryGracefulFence(FailoverController.java:172)
        at org.apache.hadoop.ha.ZKFailoverController.doFence(ZKFailoverController.java:509)
        at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
        at java.io.FilterInputStream.read(FilterInputStream.java:133)
        at java.io.FilterInputStream.read(FilterInputStream.java:133)
        at org.apache.hadoop.ipc.Client$Connection$PingInputStream.read(Client.java:512)
        at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
        at java.io.BufferedInputStream.read(BufferedInputStream.java:254)
        at java.io.DataInputStream.readInt(DataInputStream.java:387)
        at org.apache.hadoop.ipc.Client$Connection.receiveRpcResponse(Client.java:1055)
        at org.apache.hadoop.ipc.Client$Connection.run(Client.java:950)
2017-09-14 02:38:04,796 INFO org.apache.hadoop.ha.NodeFencer: ====== Beginning Service Fencing Process... ======
2017-09-14 02:38:04,796 INFO org.apache.hadoop.ha.NodeFencer: Trying method 1/1: org.apache.hadoop.ha.ShellCommandFencer(true)
2017-09-14 02:38:04,837 INFO org.apache.hadoop.ha.ShellCommandFencer: Launched fencing command 'true' with pid 111814
2017-09-14 02:38:04,848 INFO org.apache.hadoop.ha.NodeFencer: ====== Fencing successful by method org.apache.hadoop.ha.ShellCommandFencer(true) ======
2017-09-14 02:38:04,848 INFO org.apache.hadoop.ha.ActiveStandbyElector: Writing znode /hadoop-ha/gionee/ActiveBreadCrumb to indicate that the local node is the most recent active...
2017-09-14 02:38:04,851 INFO org.apache.hadoop.ha.ZKFailoverController: Trying to make NameNode at test-ssps-s-03/10.51.20.155:8022 active...
2017-09-14 02:38:35,832 INFO org.apache.hadoop.ha.ZKFailoverController: Successfully transitioned NameNode at test-ssps-s-03/10.51.20.155:8022 to active state

Both ZKFailoverController logs show the same thing: the ZooKeeper client's connection to the server timed out. The session timeout defaults to 5000 ms (set by ha.zookeeper.session-timeout.ms), and the ZooKeeper server did not respond within the expected window (3334 ms; for why it is 3334 ms, see http://blog.csdn.net/xjping0794/article/details/77981247), so the client closed the socket and reconnected. At 02:37:59 the active side's session timed out once again, and at that point the standby ZKFailoverController won the active/standby election and started fencing. It first tried to gracefully transition the old active NameNode to standby (transitionToStandby); after that failed, it fell back to forced fencing via the configured
org.apache.hadoop.ha.ShellCommandFencer(true)
to take the previous active NN out of service, which is how the abnormal NameNode exit came about.
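
For context, the fencer shown in the log line "Trying method 1/1: org.apache.hadoop.ha.ShellCommandFencer(true)" is normally configured through dfs.ha.fencing.methods in hdfs-site.xml. The exact value used on this cluster is not shown in the logs, but a setting along the following lines would produce that fencer (the fencing command here is literally the shell command true, which always reports success):

<property>
    <name>dfs.ha.fencing.methods</name>
    <value>shell(true)</value>
</property>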

As for the exception reported by the original active NN,

IPC's epoch 77 is less than the last promised epoch 78

it indicates that the cluster briefly had two active NameNodes.

The deeper cause of the dual-active-NN situation:

1) When the ZooKeeper session times out, the ActiveStandbyElectorLock znode held in ZK is released, but at that moment NN1 is still active.

2) ZKFC2 then acquires the ActiveStandbyElectorLock in ZK and transitions NN2 (the former standby) to active, which also increments the epoch stored on the JournalNodes by 1.

3) The next time NN1 (the former active) tries to write the edit log to the JournalNodes, it finds that its own epoch is one less than the JournalNodes' epoch, so it shuts itself down and restarts as the standby NameNode (see the sketch below).
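
To make the epoch rule concrete, the following is a minimal, simplified sketch in Java of the check a JournalNode applies before accepting an edit-log write. It is illustrative only (the class and field names are made up, not the real Journal.java code), but it is this kind of comparison that produces the "IPC's epoch 77 is less than the last promised epoch 78" message above.

import java.io.IOException;

// Simplified illustration of JournalNode epoch fencing (illustrative names,
// not the actual Hadoop implementation).
class EpochCheckSketch {
    // Highest epoch this JournalNode has promised to honour; it is raised
    // every time a new NameNode becomes active (here NN2 raised it to 78).
    private long lastPromisedEpoch = 78;

    synchronized void checkWriteRequest(long requestEpoch) throws IOException {
        if (requestEpoch < lastPromisedEpoch) {
            // The old active NN1 is still writing with epoch 77, so its
            // writes are rejected and it eventually aborts and exits.
            throw new IOException("IPC's epoch " + requestEpoch
                    + " is less than the last promised epoch " + lastPromisedEpoch);
        }
        // Otherwise the write is accepted and the edit-log txns are applied.
    }
}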


Solution:

Modify the ha.zookeeper.session-timeout.ms parameter in core-site.xml to give the ZooKeeper service more time to respond:
<property>
    <name>ha.zookeeper.session-timeout.ms</name>
    <value>20000</value>
</property>



