[Hive] Hive Pitfalls Stepped Into Over the Years

Tags: Hive errors

1. Missing MySQL Driver JAR
1.1 Problem Description
  Caused by: org.datanucleus.store.rdbms.connectionpool.DatastoreDriverNotFoundException: The specified datastore driver ("com.mysql.jdbc.Driver") was not found in the CLASSPATH. Please check your CLASSPATH specification, and the name of the driver.
  at org.datanucleus.store.rdbms.connectionpool.AbstractConnectionPoolFactory.loadDriver(AbstractConnectionPoolFactory.java:58)
  at org.datanucleus.store.rdbms.connectionpool.BoneCPConnectionPoolFactory.createConnectionPool(BoneCPConnectionPoolFactory.java:54)
  at org.datanucleus.store.rdbms.ConnectionFactoryImpl.generateDataSources(ConnectionFactoryImpl.java:213)
1.2 Solution

The error above most likely means the MySQL JDBC driver jar is missing. Download MySQL Connector/J (e.g. mysql-connector-java-5.1.34) and copy the jar into Hive's lib directory:

  xiaosi@yoona:~$ cp mysql-connector-java-5.1.34-bin.jar opt/hive-2.1.0/lib/
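
To confirm that the driver is actually visible, you can try to load the driver class by name on the same classpath. A minimal sketch (the class name DriverCheck is just for illustration; run it with hive/lib on the classpath):

  // Minimal check: loads the same class name that DataNucleus asks for.
  public class DriverCheck {
      public static void main(String[] args) {
          try {
              Class.forName("com.mysql.jdbc.Driver");
              System.out.println("MySQL driver found on the classpath.");
          } catch (ClassNotFoundException e) {
              System.out.println("MySQL driver missing -- copy the Connector/J jar into hive/lib.");
          }
      }
  }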


2. Initializing the MySQL Metastore Database
2.1 Problem Description

Running the ./hive script fails to start the CLI and reports:

  Exception in thread "main" java.lang.RuntimeException: Hive metastore database is not initialized. Please use schematool (e.g. ./schematool -initSchema -dbType ...) to create the schema. If needed, don't forget to include the option to auto-create the underlying database in your JDBC connection string (e.g. ?createDatabaseIfNotExist=true for mysql)
2.2 Solution

In the scripts directory, run schematool -initSchema -dbType mysql to initialize the Hive metastore database:

  xiaosi@yoona:~/opt/hive-2.1.0/scripts$ schematool -initSchema -dbType mysql
  SLF4J: Class path contains multiple SLF4J bindings.
  SLF4J: Found binding in [jar:file:/home/xiaosi/opt/hive-2.1.0/lib/log4j-slf4j-impl-2.4.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
  SLF4J: Found binding in [jar:file:/home/xiaosi/opt/hadoop-2.7.3/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
  SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
  SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
  Metastore connection URL: jdbc:mysql://localhost:3306/hive_meta?createDatabaseIfNotExist=true
  Metastore Connection Driver : com.mysql.jdbc.Driver
  Metastore connection User: root
  Starting metastore schema initialization to 2.1.0
  Initialization script hive-schema-2.1.0.mysql.sql
  Initialization script completed
  schemaTool completed
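
To sanity-check the initialization, you can query the VERSION table that schematool creates in the metastore database. A minimal sketch, assuming the connection settings shown in the log above (URL jdbc:mysql://localhost:3306/hive_meta, user root; the password is a placeholder):

  import java.sql.Connection;
  import java.sql.DriverManager;
  import java.sql.ResultSet;
  import java.sql.Statement;

  // Minimal check: reads the schema version recorded by schematool.
  public class MetastoreCheck {
      public static void main(String[] args) throws Exception {
          String url = "jdbc:mysql://localhost:3306/hive_meta";
          try (Connection conn = DriverManager.getConnection(url, "root", "<password>");
               Statement stmt = conn.createStatement();
               ResultSet rs = stmt.executeQuery("SELECT SCHEMA_VERSION FROM VERSION")) {
              while (rs.next()) {
                  System.out.println("Metastore schema version: " + rs.getString(1));
              }
          }
      }
  }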

3. Relative path in absolute URI
3.1 Problem Description
  Exception in thread "main" java.lang.IllegalArgumentException: java.net.URISyntaxException: Relative path in absolute URI: ${system:java.io.tmpdir%7D/$%7Bsystem:user.name%7D
  ...
  Caused by: java.net.URISyntaxException: Relative path in absolute URI: ${system:java.io.tmpdir%7D/$%7Bsystem:user.name%7D
  at java.net.URI.checkPath(URI.java:1823)
  at java.net.URI.<init>(URI.java:745)
  at org.apache.hadoop.fs.Path.initialize(Path.java:202)
  ... 12 more


3.2 Solution

The error above is caused by referencing variables that are never defined. To fix it, define the two variables system:user.name and system:java.io.tmpdir in hive-site.xml; the rest of the configuration file can then reference them:

  <property>
     <name>system:user.name</name>
     <value>xiaosi</value>
  </property>
  <property>
     <name>system:java.io.tmpdir</name>
     <value>/home/${system:user.name}/tmp/hive/</value>
  </property>
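
For context, the exception comes from java.net.URI rejecting a path component that has a scheme but does not start with '/': when the ${system:...} placeholders are left unexpanded, Hadoop's Path builds exactly such a URI. A minimal sketch reproducing the same message:

  import java.net.URI;
  import java.net.URISyntaxException;

  // Reproduces the error: a scheme plus a path not starting with '/'
  // is rejected as "Relative path in absolute URI".
  public class UriDemo {
      public static void main(String[] args) {
          try {
              new URI("file", null, "${system:java.io.tmpdir}/${system:user.name}", null);
          } catch (URISyntaxException e) {
              System.out.println(e.getMessage());
          }
      }
  }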


4. Connection Refused
4.1 Problem Description
  on exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
  ...
  Caused by: java.net.ConnectException: Call From Qunar/127.0.0.1 to localhost:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
  ...
  Caused by: java.net.ConnectException: Connection refused
  at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
  at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
  at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
  at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
  at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
  at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:614)
  at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:712)
  at org.apache.hadoop.ipc.Client$Connection.access$2900(Client.java:375)
  at org.apache.hadoop.ipc.Client.getConnection(Client.java:1528)
  at org.apache.hadoop.ipc.Client.call(Client.java:1451)
  ... 29 more
4.2 Solution

Hadoop may not be running. Check the current processes with jps:

  xiaosi@yoona:~/opt/hive-2.1.0$ jps
  7317 Jps

As you can see, Hadoop is indeed not running. Start the HDFS NameNode and DataNode daemons:

  xiaosi@yoona:~/opt/hadoop-2.7.3$ ./sbin/start-dfs.sh
  Starting namenodes on [localhost]
  localhost: starting namenode, logging to /home/xiaosi/opt/hadoop-2.7.3/logs/hadoop-xiaosi-namenode-yoona.out
  localhost: starting datanode, logging to /home/xiaosi/opt/hadoop-2.7.3/logs/hadoop-xiaosi-datanode-yoona.out
  Starting secondary namenodes [0.0.0.0]
  0.0.0.0: starting secondarynamenode, logging to /home/xiaosi/opt/hadoop-2.7.3/logs/hadoop-xiaosi-secondarynamenode-yoona.out
  xiaosi@yoona:~/opt/hadoop-2.7.3$ jps
  8055 Jps
  7561 NameNode
  7929 SecondaryNameNode
  7724 DataNode
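
When "Connection refused" points at localhost:9000, a plain socket probe tells you quickly whether anything is listening on the NameNode RPC port at all. A minimal sketch:

  import java.io.IOException;
  import java.net.InetSocketAddress;
  import java.net.Socket;

  // Probes the NameNode RPC port (9000, as in the error above).
  // A "Connection refused" here means HDFS is simply not running.
  public class PortProbe {
      public static void main(String[] args) {
          try (Socket socket = new Socket()) {
              socket.connect(new InetSocketAddress("localhost", 9000), 2000);
              System.out.println("Something is listening on localhost:9000.");
          } catch (IOException e) {
              System.out.println("localhost:9000 unreachable: " + e.getMessage());
          }
      }
  }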

5. Creating a Hive Table Fails
5.1 Problem Description
  FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:For direct MetaStore DB connections, we don't support retries at the client level.)
5.2 Solution

Checking the Hive log reveals the following error:

  NestedThrowablesStackTrace:
  Could not create "increment"/"table" value-generation container `SEQUENCE_TABLE` since autoCreate flags do not allow it.
  org.datanucleus.exceptions.NucleusUserException: Could not create "increment"/"table" value-generation container `SEQUENCE_TABLE` since autoCreate flags do not allow it.

The problem is mainly caused by MySQL's binlog format defaulting to STATEMENT. Check the current value in MySQL with show variables like 'binlog_format';:

  mysql> show variables like 'binlog_format';
  +---------------+-----------+
  | Variable_name | Value     |
  +---------------+-----------+
  | binlog_format | STATEMENT |
  +---------------+-----------+
  1 row in set (0.00 sec)

To change the default, add binlog_format="MIXED" to the MySQL configuration file /etc/mysql/mysql.conf.d/mysqld.cnf, restart MySQL, and then start Hive again.
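
Alternatively, if you cannot restart MySQL right away, the value can be switched at runtime (this requires the SUPER privilege and reverts on restart). A minimal JDBC sketch with placeholder credentials:

  import java.sql.Connection;
  import java.sql.DriverManager;
  import java.sql.Statement;

  // Switches binlog_format at runtime instead of editing mysqld.cnf.
  public class BinlogFormat {
      public static void main(String[] args) throws Exception {
          try (Connection conn = DriverManager.getConnection(
                       "jdbc:mysql://localhost:3306/", "root", "<password>");
               Statement stmt = conn.createStatement()) {
              stmt.execute("SET GLOBAL binlog_format = 'MIXED'");
          }
      }
  }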

On macOS, MySQL is installed under /usr/local/mysql by default; edit the my.cnf there. After restarting MySQL, verify the change:

  mysql> show variables like 'binlog_format';
  +---------------+-------+
  | Variable_name | Value |
  +---------------+-------+
  | binlog_format | MIXED |
  +---------------+-------+
  1 row in set (0.00 sec)

Re-run the CREATE TABLE statement:

  hive> create table if not exists employees(
      >    name string comment 'name',
      >    salary float comment 'salary',
      >    subordinates array<string> comment 'subordinates',
      >    deductions map<string,float> comment 'deductions',
      >    address struct<city:string,province:string> comment 'home address'
      > )
      > comment 'employee information table'
      > ROW FORMAT DELIMITED
      > FIELDS TERMINATED BY '\t'
      > LINES TERMINATED BY '\n'
      > STORED AS TEXTFILE;
  OK
  Time taken: 0.664 seconds
6. Loading Data Fails
6.1 Problem Description
  hive> load data local inpath '/home/xiaosi/hive/input/result.txt' overwrite into table recent_attention;
  Loading data to table test_db.recent_attention
  Failed with exception Unable to move source file:/home/xiaosi/hive/input/result.txt to destination hdfs://localhost:9000/user/hive/warehouse/test_db.db/recent_attention/result.txt
  FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.MoveTask

Checking the Hive log reveals this error:

  Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /home/xiaosi/hive/warehouse/recent_attention/result.txt could only be replicated to 0 nodes instead of minReplication (=1).  There are 0 datanode(s) running and no node(s) are excluded in this operation.

Seeing "0 datanode(s) running", we suspect the DataNode has died; verifying with jps shows that, sure enough, the DataNode is not running.

6.2 Solution

This problem is caused by the DataNode not being started. As for why the DataNode fails to start, see another post: Hadoop Pitfalls Stepped Into Over the Years (http://blog.csdn.net/sunnyyoona/article/details/51659080). The live-DataNode count can also be checked programmatically, as sketched below.
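
A minimal sketch of that check, using the HDFS client API from Hadoop 2.x (assumes fs.defaultFS = hdfs://localhost:9000 as elsewhere in this post):

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.hdfs.DistributedFileSystem;
  import org.apache.hadoop.hdfs.protocol.DatanodeInfo;
  import org.apache.hadoop.hdfs.protocol.HdfsConstants.DatanodeReportType;

  // Lists live DataNodes -- the programmatic counterpart of the
  // "There are 0 datanode(s) running" message in the error above.
  public class DataNodeCheck {
      public static void main(String[] args) throws Exception {
          Configuration conf = new Configuration();
          conf.set("fs.defaultFS", "hdfs://localhost:9000");
          try (FileSystem fs = FileSystem.get(conf)) {
              DatanodeInfo[] live = ((DistributedFileSystem) fs)
                      .getDataNodeStats(DatanodeReportType.LIVE);
              System.out.println("Live DataNodes: " + live.length);
              for (DatanodeInfo node : live) {
                  System.out.println("  " + node.getHostName());
              }
          }
      }
  }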


7. Java Hive JDBC Driver Class Not Found
7.1 Problem Description
  java.lang.ClassNotFoundException: org.apache.hadoop.hive.jdbc.HiveDriver
  at java.net.URLClassLoader.findClass(URLClassLoader.java:381) ~[na:1.8.0_91]
  at java.lang.ClassLoader.loadClass(ClassLoader.java:424) ~[na:1.8.0_91]
  at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331) ~[na:1.8.0_91]
  at java.lang.ClassLoader.loadClass(ClassLoader.java:357) ~[na:1.8.0_91]
  at java.lang.Class.forName0(Native Method) ~[na:1.8.0_91]
  at java.lang.Class.forName(Class.java:264) ~[na:1.8.0_91]
  at com.sjf.open.hive.HiveClient.getConn(HiveClient.java:29) [classes/:na]
  at com.sjf.open.hive.HiveClient.run(HiveClient.java:53) [classes/:na]
  at com.sjf.open.hive.HiveClient.main(HiveClient.java:77) [classes/:na]
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_91]
  at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_91]
  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_91]
  at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_91]
  at com.intellij.rt.execution.application.AppMain.main(AppMain.java:144) [idea_rt.jar:na]
7.2 Solution

The driver class for HiveServer2 is org.apache.hive.jdbc.HiveDriver, not the old org.apache.hadoop.hive.jdbc.HiveDriver. Replace

  private static String driverName = "org.apache.hadoop.hive.jdbc.HiveDriver";

with

  private static String driverName = "org.apache.hive.jdbc.HiveDriver";
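
For reference, a minimal sketch of a working HiveServer2 JDBC client with the correct driver class (the URL, user and query are placeholders to adapt; HiveServer2 must be running, see section 9):

  import java.sql.Connection;
  import java.sql.DriverManager;
  import java.sql.ResultSet;
  import java.sql.Statement;

  // Connects through the HiveServer2 driver and lists tables.
  public class HiveClientDemo {
      public static void main(String[] args) throws Exception {
          Class.forName("org.apache.hive.jdbc.HiveDriver");
          try (Connection conn = DriverManager.getConnection(
                       "jdbc:hive2://localhost:10000/default", "xiaosi", "");
               Statement stmt = conn.createStatement();
               ResultSet rs = stmt.executeQuery("show tables")) {
              while (rs.next()) {
                  System.out.println(rs.getString(1));
              }
          }
      }
  }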
8. CREATE TABLE Clause Order
8.1 Problem Description

  create table if not exists employee(
    name string comment 'employee name',
    salary float comment 'employee salary',
    subordinates array<string> comment 'names of subordinates',
    deductions map<string,float> comment 'keys are deductions values are percentages',
    address struct<street:string, city:string, state:string, zip:int> comment 'home address'
  )
  comment 'description of the table'
  tblproperties ('creator'='yoona','date'='20160719')
  location '/user/hive/warehouse/test.db/employee';

Error message:

  FAILED: ParseException line 10:0 missing EOF at 'location' near ')'
8.2 Solution

Put the LOCATION clause before TBLPROPERTIES:

  create table if not exists employee(
    name string comment 'employee name',
    salary float comment 'employee salary',
    subordinates array<string> comment 'names of subordinates',
    deductions map<string,float> comment 'keys are deductions values are percentages',
    address struct<street:string, city:string, state:string, zip:int> comment 'home address'
  )
  comment 'description of the table'
  location '/user/hive/warehouse/test.db/employee'
  tblproperties ('creator'='yoona','date'='20160719');

CREATE TABLE syntax reference: https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DDL#LanguageManualDDL-CreateTable

9. JDBC Connection to Hive Refused
9.1 Problem Description
  15:00:50.815 [main] INFO  org.apache.hive.jdbc.Utils - Supplied authorities: localhost:10000
  15:00:50.832 [main] INFO  org.apache.hive.jdbc.Utils - Resolved authority: localhost:10000
  15:00:51.010 [main] DEBUG o.a.thrift.transport.TSaslTransport - opening transport org.apache.thrift.transport.TSaslClientTransport@3ffc5af1
  15:00:51.019 [main] WARN  org.apache.hive.jdbc.HiveConnection - Failed to connect to localhost:10000
  15:00:51.027 [main] ERROR com.sjf.open.hive.HiveClient - Connection error!
  java.sql.SQLException: Could not open client transport with JDBC Uri: jdbc:hive2://localhost:10000/default: java.net.ConnectException: Connection refused
  at org.apache.hive.jdbc.HiveConnection.openTransport(HiveConnection.java:219) ~[hive-jdbc-2.1.0.jar:2.1.0]
  at org.apache.hive.jdbc.HiveConnection.<init>(HiveConnection.java:157) ~[hive-jdbc-2.1.0.jar:2.1.0]
  at org.apache.hive.jdbc.HiveDriver.connect(HiveDriver.java:107) ~[hive-jdbc-2.1.0.jar:2.1.0]
  at java.sql.DriverManager.getConnection(DriverManager.java:664) ~[na:1.8.0_91]
  at java.sql.DriverManager.getConnection(DriverManager.java:247) ~[na:1.8.0_91]
  at com.sjf.open.hive.HiveClient.getConn(HiveClient.java:29) [classes/:na]
  at com.sjf.open.hive.HiveClient.run(HiveClient.java:52) [classes/:na]
  at com.sjf.open.hive.HiveClient.main(HiveClient.java:76) [classes/:na]
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_91]
  at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_91]
  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_91]
  at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_91]
  at com.intellij.rt.execution.application.AppMain.main(AppMain.java:144) [idea_rt.jar:na]
  Caused by: org.apache.thrift.transport.TTransportException: java.net.ConnectException: Connection refused
  at org.apache.thrift.transport.TSocket.open(TSocket.java:226) ~[libthrift-0.9.3.jar:0.9.3]
  at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:266) ~[libthrift-0.9.3.jar:0.9.3]
  at org.apache.thrift.transport.TSaslClientTransport.open(TSaslClientTransport.java:37) ~[libthrift-0.9.3.jar:0.9.3]
  at org.apache.hive.jdbc.HiveConnection.openTransport(HiveConnection.java:195) ~[hive-jdbc-2.1.0.jar:2.1.0]
  ... 12 common frames omitted
  Caused by: java.net.ConnectException: Connection refused
  at java.net.PlainSocketImpl.socketConnect(Native Method) ~[na:1.8.0_91]
  at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350) ~[na:1.8.0_91]
  at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206) ~[na:1.8.0_91]
  at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188) ~[na:1.8.0_91]
  at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) ~[na:1.8.0_91]
  at java.net.Socket.connect(Socket.java:589) ~[na:1.8.0_91]
  at org.apache.thrift.transport.TSocket.open(TSocket.java:221) ~[libthrift-0.9.3.jar:0.9.3]
  ... 15 common frames omitted
9.2 Solution

(1) Check whether HiveServer2 is running:

  xiaosi@Qunar:/opt/apache-hive-2.0.0-bin/bin$ sudo netstat -anp | grep 10000

If HiveServer2 is not running, start the service first:

  xiaosi@Qunar:/opt/apache-hive-2.0.0-bin/conf$ hive --service hiveserver2 >/dev/null 2>/dev/null &
  [1] 11978

(2) Check the port configuration:

  <property>
     <name>hive.server2.thrift.port</name>
     <value>10000</value>
     <description>Port number of HiveServer2 Thrift interface when hive.server2.transport.mode is 'binary'.</description>
  </property>
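
The two checks can also be combined programmatically: read the configured Thrift port via HiveConf and probe it. A minimal sketch (assumes hive-site.xml is on the classpath so HiveConf picks it up):

  import java.net.InetSocketAddress;
  import java.net.Socket;
  import org.apache.hadoop.hive.conf.HiveConf;

  // Reads hive.server2.thrift.port from the configuration and probes it,
  // mirroring the netstat check above.
  public class HiveServer2Probe {
      public static void main(String[] args) {
          int port = new HiveConf().getIntVar(HiveConf.ConfVars.HIVE_SERVER2_THRIFT_PORT);
          try (Socket socket = new Socket()) {
              socket.connect(new InetSocketAddress("localhost", port), 2000);
              System.out.println("HiveServer2 is listening on port " + port);
          } catch (Exception e) {
              System.out.println("Nothing listening on port " + port + " -- start hiveserver2 first.");
          }
      }
  }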

10. User root is not allowed to impersonate anonymous
10.1 Problem Description
  Failed to open new session: java.lang.RuntimeException: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException): User: xiaosi is not allowed to impersonate anonymous
10.2 Solution

Modify the Hadoop configuration file etc/hadoop/core-site.xml and add the following properties:

  <property>
     <name>hadoop.proxyuser.root.hosts</name>
     <value>*</value>
  </property>
  <property>
     <name>hadoop.proxyuser.root.groups</name>
     <value>*</value>
  </property>
Note: in hadoop.proxyuser.XXX.hosts and hadoop.proxyuser.XXX.groups, XXX is the user name that appears after "User:" in the error message. For the error above the user is xiaosi, so the properties become the following (restart HDFS after editing core-site.xml so the proxy-user settings take effect):

  <property>
     <name>hadoop.proxyuser.xiaosi.hosts</name>
     <value>*</value>
     <description>Hosts from which xiaosi may impersonate other users (* = any host)</description>
  </property>
  <property>
     <name>hadoop.proxyuser.xiaosi.groups</name>
     <value>*</value>
     <description>Groups whose members xiaosi may impersonate (* = any group)</description>
  </property>


11. Safe Mode
11.1 Problem Description
  Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.SafeModeException): Cannot create directory /tmp/hive/xiaosi/c2f6130d-3207-4360-8734-dba0462bd76c. Name node is in safe mode.
  The reported blocks 22 has reached the threshold 0.9990 of total blocks 22. The number of live datanodes 1 has reached the minimum number 0. In safe mode extension. Safe mode will be turned off automatically in 5 seconds.
  at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1327)
  at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3893)
  at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:983)
  at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:622)
  at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
  at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
  at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
  at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
  at java.security.AccessController.doPrivileged(Native Method)
  at javax.security.auth.Subject.doAs(Subject.java:415)
  at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
  at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)
  at org.apache.hadoop.ipc.Client.call(Client.java:1475)
  at org.apache.hadoop.ipc.Client.call(Client.java:1412)
  at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
  at com.sun.proxy.$Proxy32.mkdirs(Unknown Source)
  at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:558)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.lang.reflect.Method.invoke(Method.java:606)
  at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
  at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
  at com.sun.proxy.$Proxy33.mkdirs(Unknown Source)
  at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:3000)
  at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2970)
  at org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:1047)
  at org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:1043)
  at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
  at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:1043)
  at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:1036)
  at org.apache.hadoop.hive.ql.session.SessionState.createPath(SessionState.java:682)
  at org.apache.hadoop.hive.ql.session.SessionState.createSessionDirs(SessionState.java:617)
  at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:526)
  ... 9 more
11.2 Analysis

When HDFS starts, it enters safe mode, during which the contents of the file system can be neither modified nor deleted until safe mode ends. Safe mode exists mainly so that, at startup, the system can check the validity of the data blocks on each DataNode and, according to policy, replicate or delete blocks as necessary. Safe mode can also be entered via a command at runtime. In practice, modifying or deleting files while the system is still starting up produces exactly this error; usually you only need to wait a moment.

11.3 Solution

You can wait for HDFS to exit safe mode on its own, or leave it manually with the following command:

  xiaosi@yoona:~$ hdfs dfsadmin -safemode leave
  Safe mode is OFF
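
The same thing can be done through the HDFS client API; a minimal sketch against Hadoop 2.x (again assuming fs.defaultFS = hdfs://localhost:9000):

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.hdfs.DistributedFileSystem;
  import org.apache.hadoop.hdfs.protocol.HdfsConstants.SafeModeAction;

  // Checks safe mode and leaves it -- the programmatic equivalent of
  // `hdfs dfsadmin -safemode leave`.
  public class SafeMode {
      public static void main(String[] args) throws Exception {
          Configuration conf = new Configuration();
          conf.set("fs.defaultFS", "hdfs://localhost:9000");
          try (FileSystem fs = FileSystem.get(conf)) {
              DistributedFileSystem dfs = (DistributedFileSystem) fs;
              if (dfs.setSafeMode(SafeModeAction.SAFEMODE_GET)) {
                  dfs.setSafeMode(SafeModeAction.SAFEMODE_LEAVE);
                  System.out.println("Left safe mode.");
              } else {
                  System.out.println("Not in safe mode.");
              }
          }
      }
  }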