HBase fails to start
org.apache.hadoop.hbase.ClockOutOfSyncException: org.apache.hadoop.hbase.ClockOutOfSyncException: Server hadoop74,60020,1372320861420 has been rejected; Reported time is too far out of sync with master. Time difference of 143732ms > max allowed of 30000ms
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:525)
at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:95)
at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:79)
at org.apache.hadoop.hbase.regionserver.HRegionServer.reportForDuty(HRegionServer.java:2093)
at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:744)
at java.lang.Thread.run(Thread.java:722)
Caused by: org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.hbase.ClockOutOfSyncException: Server hadoop74,60020,1372320861420 has been rejected; Reported time is too far out of sync with master. Time difference of 143732ms > max allowed of 30000ms
Cause: the RegionServer's clock is too far out of sync with the master's.
Solution 1:
Raise hbase.master.maxclockskew in hbase-site.xml:
<property>
<name>hbase.master.maxclockskew</name>
<value>200000</value>
<description>Time difference of regionserver from master</description>
</property>
Solution 2:
Synchronize the clocks on every node (for example with NTP).
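The skew check the master performs can be sanity-checked by hand. A minimal sketch, comparing two epoch timestamps against HBase's default 30000 ms limit; on a real cluster the second timestamp would come from the master (e.g. over ssh with date +%s), which is an assumption here:

```shell
# Compare two epoch timestamps (in seconds) against the allowed skew.
max_skew_ms=30000

check_skew() {
  # $1 = regionserver epoch seconds, $2 = master epoch seconds
  diff_ms=$(( ($1 - $2) * 1000 ))
  if [ "${diff_ms#-}" -le "$max_skew_ms" ]; then
    echo "in sync (diff ${diff_ms#-} ms)"
  else
    echo "OUT OF SYNC (diff ${diff_ms#-} ms)"
  fi
}

# Same host on both sides, so this always reports in sync:
check_skew "$(date +%s)" "$(date +%s)"
```

A 143732 ms difference, as in the log above, fails this check just as it fails the master's.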
HBase concurrency problem (too many open files)
2012-06-01 16:05:22,776 INFO org.apache.hadoop.hdfs.DFSClient: Exception in createBlockOutputStream java.net.SocketException: Too many open files
2012-06-01 16:05:22,776 INFO org.apache.hadoop.hdfs.DFSClient: Abandoning block blk_3790131629645188816_18192
2012-06-01 16:13:01,966 WARN org.apache.hadoop.hdfs.DFSClient: DFS Read: java.io.IOException: Could not obtain block: blk_-299035636445663861_7843 file=/hbase/SendReport/83908b7af3d5e3529e61b870a16f02dc/data/17703aa901934b39bd3b2e2d18c671b4.9a84770c805c78d2ff19ceff6fecb972
at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.chooseDataNode(DFSClient.java:1812)
at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.blockSeekTo(DFSClient.java:1638)
at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:1767)
at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:1695)
at java.io.DataInputStream.readBoolean(DataInputStream.java:242)
at org.apache.hadoop.hbase.io.Reference.readFields(Reference.java:116)
at org.apache.hadoop.hbase.io.Reference.read(Reference.java:149)
at org.apache.hadoop.hbase.regionserver.StoreFile.<init>(StoreFile.java:216)
at org.apache.hadoop.hbase.regionserver.Store.loadStoreFiles(Store.java:282)
at org.apache.hadoop.hbase.regionserver.Store.<init>(Store.java:221)
at org.apache.hadoop.hbase.regionserver.HRegion.instantiateHStore(HRegion.java:2510)
at org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:449)
at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:3228)
at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:3176)
at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:331)
at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:107)
at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:169)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
at java.lang.Thread.run(Thread.java:722)
Cause and fix: the default Linux limit on open files is usually 1024. Running ulimit -n 65535 raises it immediately, but the change is lost after a reboot. To make it persistent, use one of the following three methods:
1. Add the line ulimit -SHn 65535 to /etc/rc.local
2. Add the line ulimit -SHn 65535 to /etc/profile
3. Append the following two lines to the end of /etc/security/limits.conf:
* soft nofile 65535
* hard nofile 65535
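After applying one of the three methods, the effective limit can be verified in a fresh shell; a minimal check against the 65535 value used above:

```shell
# Report the current soft limit on open files and warn if it is still low.
soft=$(ulimit -Sn)
if [ "$soft" != "unlimited" ] && [ "$soft" -lt 65535 ]; then
  echo "soft nofile limit is $soft - too low for a busy HBase/HDFS node"
else
  echo "soft nofile limit OK: $soft"
fi
```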
If the cluster's Hadoop version does not match the one HBase was built against, running Phoenix's asynchronous index MapReduce job fails:
2017-07-13 21:18:41,438 INFO [main] mapreduce.Job: Job job_1498447686527_0002 running in uber mode : false
2017-07-13 21:18:41,439 INFO [main] mapreduce.Job: map 0% reduce 0%
2017-07-13 21:18:55,554 INFO [main] mapreduce.Job: map 100% reduce 0%
2017-07-13 21:19:05,642 INFO [main] mapreduce.Job: map 100% reduce 100%
2017-07-13 21:19:06,660 INFO [main] mapreduce.Job: Job job_1498447686527_0002 completed successfully
2017-07-13 21:19:06,739 ERROR [main] index.IndexTool: An exception occurred while performing the indexing job: IllegalArgumentException: No enum constant org.apache.hadoop.mapreduce.JobCounter.MB_MILLIS_REDUCES at:
java.lang.IllegalArgumentException: No enum constant org.apache.hadoop.mapreduce.JobCounter.MB_MILLIS_REDUCES
at java.lang.Enum.valueOf(Enum.java:236)
Use ./hbase classpath to see which jars HBase picks up. HBase 0.98 bundles Hadoop 2.2.0 jars by default, while this cluster runs Hadoop 2.6.4. Fix: remove the hadoop-* jars from hbase/lib, delete the output path (or point the job at a new one), and rerun the MR job; once it finishes, refresh the index and the index table becomes usable.
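A hedged sketch of the jar check described above; the HBASE_HOME default is an assumption, so adjust it to your install:

```shell
# List the hadoop-* jars bundled under hbase/lib; these are the ones to
# remove or replace when HBase's bundled Hadoop version differs from
# the version the cluster actually runs.
HBASE_HOME="${HBASE_HOME:-/usr/local/hbase-0.98.21}"
ls "$HBASE_HOME"/lib/hadoop-*.jar 2>/dev/null \
  || echo "no bundled hadoop jars found under $HBASE_HOME/lib"
```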
Phoenix count query fails with an expired lease
First, the fix for the lease expiry itself: increase the lease time. Scroll to the bottom of the configuration and raise the timeout to 3 minutes.
Also replace the column argument of count() with a constant, e.g. select count(1) from ehrmain;
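As a hedged sketch, the timeout is normally raised in hbase-site.xml; the exact property name varies by version (hbase.regionserver.lease.period in older releases, hbase.client.scanner.timeout.period in newer ones), and 180000 ms matches the 3 minutes mentioned above:

```xml
<property>
  <name>hbase.regionserver.lease.period</name>
  <value>180000</value>
  <description>Scanner lease timeout in ms (3 minutes)</description>
</property>
```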
NameNode fails to start when Hadoop is restarted
Partial log:
This node has namespaceId '1770688123 and clusterId 'CID-57ffa982-419d-4c6a-81a1-d59761366cd1' but the requesting node expected '1974358097' and 'CID-57ffa982-419d-4c6a-81a1-d59761366cd1'
2017-04-25 19:20:01,508 WARN org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer: Edit log tailer interrupted
java.lang.InterruptedException: sleep interrupted
------- Steps taken
Stop all processes, then adjust the IPC retry settings in core-site.xml:
<property>
<name>ipc.client.connect.max.retries</name>
<value>100</value>
</property>
<property>
<name>ipc.client.connect.retry.interval</name>
<value>10000</value>
</property>
Back up the NameNode's current directory and reformat the NameNode. Replace the standby NameNode's current directory with the newly formatted one. Then edit the VERSION file under the JournalNode's current directory so that its namespaceID and clusterID match the corresponding values in the NameNode's VERSION, and restart Hadoop.
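The VERSION edit in the last step can be scripted. A minimal sketch, run here against stand-in files in a temp directory; on a real cluster the two paths would be the NameNode's and the JournalNode's current/VERSION files, which are site-specific assumptions:

```shell
# Stand-in VERSION files; replace with the real current/VERSION paths.
WORK=$(mktemp -d)
NN_VERSION="$WORK/nn_VERSION"   # NameNode's (authoritative after format)
JN_VERSION="$WORK/jn_VERSION"   # JournalNode's (to be fixed up)
printf 'namespaceID=1974358097\nclusterID=CID-57ffa982\n' > "$NN_VERSION"
printf 'namespaceID=1770688123\nclusterID=CID-57ffa982\n' > "$JN_VERSION"

# Copy the authoritative IDs from the NameNode into the JournalNode VERSION.
NS_ID=$(sed -n 's/^namespaceID=//p' "$NN_VERSION")
C_ID=$(sed -n 's/^clusterID=//p' "$NN_VERSION")
sed -i "s/^namespaceID=.*/namespaceID=$NS_ID/" "$JN_VERSION"
sed -i "s/^clusterID=.*/clusterID=$C_ID/" "$JN_VERSION"
cat "$JN_VERSION"
```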
SQuirreL SQL client cannot connect
After restarting HBase, data cannot be queried and the SQuirreL SQL client cannot connect; an hbase shell scan fails with: ERROR: org.apache.hadoop.hbase.exceptions.RegionOpeningException: Region ADVICE,,1484647165178.b827fafe905eda29996dfb56bd9fd61f. is opening on hadoop79,60020,1497005577348
Region EHRMAIN,,1489599097179.61688929e93c5143efbe19b68f995c76. is not online on hadoop79,60020,1497002696781
Phoenix reports: is opening on hadoop78,60020
Running hbase hbck prints a large number of inconsistencies.
Fix: stop the HBase cluster and add the following to hbase-site.xml on every RegionServer:
<property>
<name>hbase.regionserver.executor.openregion.threads</name>
<value>100</value>
</property>
Restart the HBase cluster; hbase hbck now reports: Summary:
Table CONSULTATIONRECORD is okay.
Number of regions: 6
Deployed on: hadoop76,60020,1497006965624 hadoop77,60020,1497006965687 hadoop78,60020,1497006965700 hadoop79,60020,1497006965677
Table PT_LABREPORT is okay. ... and so on: every table is OK.
Note: hbase hbck -fix can repair HBase inconsistencies.
DataNode fails to start
When the cluster is started with start-all.sh, the slaves fail to start the DataNode and report: "... could only be replicated to 0 nodes, instead of 1 ...". Usually a node's storage identifier is duplicated, but other causes are possible; try the following fixes in order.
Fixes:
1) Delete the data files under dfs.data.dir and dfs.tmp.dir on every node (by default tmp/dfs/data and tmp/dfs/tmp), reformat with hadoop namenode -format, then start the cluster.
2) If it is a port-access problem, make sure all required ports are open, e.g. hdfs://machine1:9000/, 50030, 50070. Run: iptables -I INPUT -p tcp --dport 9000 -j ACCEPT. If you still see hdfs.DFSClient: Exception in createBlockOutputStream java.net.ConnectException: Connection refused, a DataNode port is unreachable; on the DataNode run: iptables -I INPUT -s machine1 -p tcp -j ACCEPT
3) The firewall may also be blocking communication between the nodes; try stopping it: /etc/init.d/iptables stop
4) Finally, the disk may simply be full; check with df -al
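For step 2, a quick reachability probe helps decide whether iptables is really the culprit before editing rules. A bash-only sketch using /dev/tcp (machine1 and 9000 are the example host/port from above):

```shell
# Probe a TCP port using bash's /dev/tcp (bash-specific, no nc required).
probe() {
  # $1 = host, $2 = port
  if (exec 3<>"/dev/tcp/$1/$2") 2>/dev/null; then
    echo "$1:$2 reachable"
  else
    echo "$1:$2 unreachable - check iptables/firewall"
  fi
}
probe machine1 9000
```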
Startup failure caused by permissions
ERROR namenode.NameNode: java.io.IOException: Cannot create directory /export/home/dfs/name/current
ERROR namenode.NameNode: java.io.IOException: Cannot remove current directory: /usr/local/hadoop/hdfsconf/name/current
The cause is that the permissions on /usr/hadoop/tmp were never set; fix them with:
chown -R hadoop:hadoop /usr/hadoop/tmp
sudo chmod -R a+w /usr/local/hadoop
Hadoop reports an error when formatting HDFS
This usually means the name and data directories defined in the configuration do not exist, lack permissions, or were used by a different Hadoop version. Fix: delete and recreate the two directories if they exist (or create them if missing), make sure they are owned by the current user, then format again.
MapReduce over HBase
java.lang.ClassNotFoundException: org.apache.hadoop.hbase.protobuf.generated.ClientProtos$Result
Fix: HBase's protobuf jar is missing from the classpath.
Remove the stale HBase jar and re-import all hbase*.jar files.
---------------------------------------
java.lang.NoClassDefFoundError: org/cloudera/htrace/Trace
Add the htrace jar.
Phoenix errors
WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Inspecting the hadoop/lib directory showed the files owned by UID/GID 500:500; fix with chown root.root -R *
Reboot the system and start all components; ../sqlline.py 192.168.204.111 then starts successfully.
-------------------------------
If it errors, and takes a long time to start, with Can't get master address from ZooKeeper; znode data == null
-- the HBase master is down.
phoenix啓動報錯:Error: org.apache.hadoop.hbase.DoNotRetryIOException: Class org.apache.phoenix.coprocessor.MetaDataRegionObserver cannot be loaded Set hbase.table.sanity.checks to false at conf or table descriptor if you want to bypass sanity checks
You must restart HBase, including the HMaster.
Phoenix reports: Error: ERROR 1012 (42M03): Table undefined. tableName=ZLH_PERSON (state=42M03,code=1012)
Check whether the table has a TABLE_SCHEM; qualifying the table name with its schema fixes it:
0: jdbc:phoenix:hadoop74,hadoop75,hadoop76:21> select * from test.zlh_person;
+------+-------+------+
| ID | NAME | AGE |
+------+-------+------+
| 100 | 小红 | 18 |
+------+-------+------+
hbase-0.98.21/bin/stop-hbase.sh
stopping hbasecat: /tmp/hbase-hadoop-master.pid: No such file or directory
The cause: by default the pid file is kept under /tmp, and files in /tmp are easily lost (they are usually deleted on reboot). Fix: set a different pid directory in hbase-env.sh:
# The directory where pid files are stored. /tmp by default.
export HBASE_PID_DIR=/var/hadoop/pids
The same issue occurs in Hadoop:
no namenode to stop - shutdown cannot find the pid file; change the pid-directory settings in hadoop-env.sh and yarn-daemon.sh across the cluster.
2016-09-09 15:32:59,380 WARN [main-SendThread(hadoop74:2181)] zookeeper.ClientCnxn: Session 0x0 for server hadoop74/172.18.3.74:2181, unexpected error, closing socket connection and attempting reconnect
java.io.IOException: Connection reset by peer
The client cannot connect to ZooKeeper. The ZooKeeper log shows too many connections from host - max is 10: HBase is opening too many connections, ZooKeeper allows each client IP only 10 by default, and each RegionServer here hosts about three hundred regions. Stop ZooKeeper, set maxClientCnxns=300 in zoo.cfg, and restart ZooKeeper.
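To see which hosts are near the limit before raising maxClientCnxns, connections per client IP can be tallied from netstat output. A sketch fed sample lines here; on a live ZooKeeper server you would pipe netstat -tn (or ss -tn) into it:

```shell
# Count established connections to ZooKeeper's client port (2181)
# per remote IP, from netstat -tn style output.
count_zk_clients() {
  awk '$4 ~ /:2181$/ { split($5, a, ":"); c[a[1]]++ }
       END { for (ip in c) print ip, c[ip] }'
}

# Sample lines standing in for live netstat output:
printf '%s\n' \
  'tcp 0 0 172.18.3.74:2181 172.18.3.76:40001 ESTABLISHED' \
  'tcp 0 0 172.18.3.74:2181 172.18.3.76:40002 ESTABLISHED' \
  'tcp 0 0 172.18.3.74:2181 172.18.3.77:40003 ESTABLISHED' | count_zk_clients
```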
Ambari SSH host registration fails during cluster deployment; the log shows: Ambari agent machine hostname (localhost.localdomain) does not match expected ambari server hostname (hadoop-node-101). Aborting registration.
The /etc/hosts file was malformed: the hostnames were not separated by spaces and the localhost entries had been moved to the end. Rewriting the hosts file in the correct format resolved it.
Correct format:
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
172.18.*.* hadoop-node-*
..............
Working around an Ambari Phoenix version too old to support pagination
1. Drop the Phoenix-related tables.
2. In hbase shell, delete Phoenix's four system tables ("SYSTEM.CATALOG", "SYSTEM.FUNCTION", "SYSTEM.SEQUENCE", "SYSTEM.STATS").
3. Rename the new Phoenix jars to match and replace Ambari's Phoenix jars:
phoenix-4.4.0.2.4.2.0-258-client.jar, phoenix-core-4.4.0.2.4.2.0-258-tests.jar, phoenix-server-client-4.4.0.2.4.2.0-258.jar, phoenix-4.4.0.2.4.2.0-258-thin-client.jar, phoenix-server-4.4.0.2.4.2.0-258.jar, phoenix-core-4.4.0.2.4.2.0-258.jar, phoenix-server-4.4.0.2.4.2.0-258-tests.jar
4. In SQuirreL, add the new jars and rebuild the driver (phoenix-4.4.0.2.4.2.0-258-client.jar, phoenix-server-4.4.0.2.4.2.0-258.jar, phoenix-core-4.4.0.2.4.2.0-258.jar).
5. Stop HBase and set RegionServer WAL Codec under Advanced hbase-site to: org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec
6. Restart HBase.
java.lang.NoSuchMethodError: org.apache.hadoop.conf.Configuration.addDeprecations([Lorg/apache/hadoop/conf/Configuration$DeprecationDelta;)V
The hadoop-mapreduce-client-core-2.2.0.jar is missing.
HBase crash: does not exist or is not under Construction blk_1075156057_1415896
Set -XX:CMSInitiatingOccupancyFraction=60 so that CMS begins collecting the old generation earlier, shortening the CMS pauses.
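A hedged sketch of where the flag goes: hbase-env.sh, via the standard HBASE_OPTS hook. Pairing the fraction with -XX:+UseCMSInitiatingOccupancyOnly is commonly recommended so the 60% threshold stays fixed rather than being re-estimated by the JVM:

```shell
# In hbase-env.sh: trigger CMS old-generation collection at 60% occupancy.
export HBASE_OPTS="$HBASE_OPTS -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=60 -XX:+UseCMSInitiatingOccupancyOnly"
echo "$HBASE_OPTS"
```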