Notes and Caveats for a Fully Distributed Installation

1.ssh

For a fully distributed install, gather every machine's id_rsa.pub into a single login.txt, then run cat login.txt  >>  authorized_keys

Every machine must have this authorized_keys file.

Change the permissions of authorized_keys to 600:

chmod 600 authorized_keys
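The merge step above can be sketched as follows. This is a local simulation: the two echo lines stand in for real id_rsa.pub contents collected from each node, and the demo/ paths and node names are hypothetical.

```shell
# Simulate merging the public keys collected from each node
# (the echo lines stand in for real id_rsa.pub files).
mkdir -p demo/keys
echo "ssh-rsa AAAA...key1 hadoop@node1" > demo/keys/node1.pub
echo "ssh-rsa AAAA...key2 hadoop@node2" > demo/keys/node2.pub
# merge every collected key into login.txt, then append to authorized_keys
cat demo/keys/*.pub > demo/login.txt
cat demo/login.txt >> demo/authorized_keys
# sshd ignores authorized_keys unless its permissions are strict
chmod 600 demo/authorized_keys
```

On a real cluster, the merged authorized_keys must then be copied (e.g. with scp) into ~/.ssh/ on every machine, so that password-less login works in all directions.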

2.hbase

When starting HBase, start ZooKeeper first.

In the HBase configuration, set HBASE_MANAGES_ZK to false; it defaults to true, which makes HBase try to manage its own ZooKeeper instead of using the external one.

The HBase directory in HDFS should preferably have 755 permissions.
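Concretely, the ZooKeeper flag lives in conf/hbase-env.sh; a minimal sketch (adjust the path to your layout):

```shell
# conf/hbase-env.sh
# false = HBase does NOT start/stop its own ZooKeeper; we run an
# external ZooKeeper ensemble and start it before HBase.
export HBASE_MANAGES_ZK=false
```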

3.hive

Before configuring Hive, set up MySQL; the MySQL account and password must match the ones Hive is configured to use.

Create the database that Hive needs.

The metastore may fail to start.

If so, start it directly with the following command:

hive --service metastore
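The values that must line up with the MySQL account are the metastore connection properties in hive-site.xml. A sketch, where the database name and the hive/hive credentials are hypothetical and node111 stands for the MySQL host:

```xml
<!-- hive-site.xml: these must match the MySQL database/user/password
     created above ("hive"/"hive"/"hive" here are placeholders) -->
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://node111:3306/hive?createDatabaseIfNotExist=true</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>hive</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>hive</value>
</property>
```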

4. Problems encountered and solutions

Error:
2016-01-11 16:53:52,986 ERROR [main] regionserver.HRegionServerCommandLine: Region server exiting
java.lang.RuntimeException: Failed construction of Regionserver: class org.apache.hadoop.hbase.regionserver.HRegionServer
        at org.apache.hadoop.hbase.regionserver.HRegionServer.constructRegionServer(HRegionServer.java:2458)
        at org.apache.hadoop.hbase.regionserver.HRegionServerCommandLine.start(HRegionServerCommandLine.java:64)
        at org.apache.hadoop.hbase.regionserver.HRegionServerCommandLine.run(HRegionServerCommandLine.java:87)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
        at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:126)
        at org.apache.hadoop.hbase.regionserver.HRegionServer.main(HRegionServer.java:2473)
Caused by: java.lang.reflect.InvocationTargetException
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
        at org.apache.hadoop.hbase.regionserver.HRegionServer.constructRegionServer(HRegionServer.java:2456)
        ... 5 more
Caused by: java.net.BindException: Problem binding to node111/127.0.0.1:16020 : Address already in use
Solution:
Start HBase first:
bin/start-hbase.sh
then start the region servers:
bin/local-regionservers.sh start 1
bin/local-regionservers.sh start 2
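The BindException means something already holds port 16020, usually a stale region server process. A quick way to check before restarting, assuming a Linux machine with ss available:

```shell
# Show whether anything is already listening on the region server port.
PORT=16020
if ss -tln 2>/dev/null | grep -q ":$PORT "; then
  echo "port $PORT is already in use"
else
  echo "port $PORT is free"
fi
```

If the port is in use, stop (or kill) the old process before starting the region server again.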
Error: hostname does not exist
Solution:
hostname -f    # first check whether the hostname is set
hostname newname    # set the hostname to newname


Error:
Can't get master address from ZooKeeper; znode data == null 
which traces back to:
Call From node167/127.0.0.1 to node111:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
 at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
        at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:783)
        at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:730)
        ... ...
        at org.apache.hadoop.hbase.util.FSUtils.isInSafeMode(FSUtils.java:432)
        at org.apache.hadoop.hbase.util.FSUtils.waitOnSafeMode(FSUtils.java:879)
        at org.apache.hadoop.hbase.master.MasterFileSystem.checkRootDir(MasterFileSystem.java:411)
        at org.apache.hadoop.hbase.master.MasterFileSystem.createInitialFileSystemLayout(MasterFileSystem.java:145)
        at org.apache.hadoop.hbase.master.MasterFileSystem.<init>(MasterFileSystem.java:125)
        at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:513)
        at org.apache.hadoop.hbase.master.HMaster.access$500(HMaster.java:157)
        at org.apache.hadoop.hbase.master.HMaster$1.run(HMaster.java:1332)
        at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.ConnectException: Connection refused
        at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
        at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)
Solution: check the permissions of the /hbase directory in HDFS with hadoop fs -ls /; changing them to 775 fixes it:
hadoop fs -chmod -R 775 /hbase
YARN fails to start
Error:
16/01/18 10:00:54 INFO client.RMProxy: Connecting to ResourceManager at node111/172.24.2.167:8032
16/01/18 10:00:56 INFO ipc.Client: Retrying connect to server: node111/ip:8032. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
16/01/18 10:00:57 INFO ipc.Client: Retrying connect to server: node111/ip:8032. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
16/01/18 10:00:58 INFO ipc.Client: Retrying connect to server: node111/ip:8032. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
16/01/18 10:00:59 INFO ipc.Client: Retrying connect to server: node111/ip:8032. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
16/01/18 10:01:00 INFO ipc.Client: Retrying connect to server: node111/ip:8032. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
16/01/18 10:01:01 INFO ipc.Client: Retrying connect to server: node111/ip:8032. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
Solution:
Empty the $hadoop/dir directory, along with the dfs/name and dfs/data directories beneath it
bin/hadoop namenode -format    # format the NameNode

Then restart, and the problem is resolved.
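Note that formatting the NameNode wipes all HDFS metadata, so this is only sensible on a fresh or expendable cluster. The cleanup step can be sketched like this, where ./hadoop_dir_demo is a stand-in path (the real directories come from dfs.namenode.name.dir and dfs.datanode.data.dir in hdfs-site.xml):

```shell
# Stand-in for the Hadoop data directory; replace with your configured path.
hadoop_dir=./hadoop_dir_demo
mkdir -p "$hadoop_dir/dfs/name" "$hadoop_dir/dfs/data"
# empty both directories before reformatting
rm -rf "$hadoop_dir/dfs/name"/* "$hadoop_dir/dfs/data"/*
# on the real cluster, follow with:
#   bin/hadoop namenode -format
#   sbin/start-dfs.sh && sbin/start-yarn.sh
```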
