Reposted from http://blog.csdn.net/bychjzh/article/details/7830508
I had just installed Pig and wanted to try it in Hadoop mode, but Pig failed on startup:
[root@master pig-0.9.1]# pig
2011-12-03 07:27:30,158 [main] INFO org.apache.pig.Main - Logging error messages to: /home/bell/software/hadoop-0.20.2/pig-0.9.1/pig_1322868450154.log
2011-12-03 07:27:30,403 [main] INFO org.apache.pig.backend.hadoop.executionengine.HExecutionEngine - Connecting to hadoop file system at: hdfs://localhost/
2011-12-03 07:27:31,544 [main] INFO org.apache.hadoop.ipc.Client - Retrying connect to server: localhost/127.0.0.1:8020. Already tried 0 time(s).
2011-12-03 07:27:32,545 [main] INFO org.apache.hadoop.ipc.Client - Retrying connect to server: localhost/127.0.0.1:8020. Already tried 1 time(s).
2011-12-03 07:27:33,546 [main] INFO org.apache.hadoop.ipc.Client - Retrying connect to server: localhost/127.0.0.1:8020. Already tried 2 time(s).
2011-12-03 07:27:34,547 [main] INFO org.apache.hadoop.ipc.Client - Retrying connect to server: localhost/127.0.0.1:8020. Already tried 3 time(s).
2011-12-03 07:27:35,548 [main] INFO org.apache.hadoop.ipc.Client - Retrying connect to server: localhost/127.0.0.1:8020. Already tried 4 time(s).
2011-12-03 07:27:36,549 [main] INFO org.apache.hadoop.ipc.Client - Retrying connect to server: localhost/127.0.0.1:8020. Already tried 5 time(s).
2011-12-03 07:27:37,550 [main] INFO org.apache.hadoop.ipc.Client - Retrying connect to server: localhost/127.0.0.1:8020. Already tried 6 time(s).
2011-12-03 07:27:38,550 [main] INFO org.apache.hadoop.ipc.Client - Retrying connect to server: localhost/127.0.0.1:8020. Already tried 7 time(s).
2011-12-03 07:27:39,551 [main] INFO org.apache.hadoop.ipc.Client - Retrying connect to server: localhost/127.0.0.1:8020. Already tried 8 time(s).
2011-12-03 07:27:40,552 [main] INFO org.apache.hadoop.ipc.Client - Retrying connect to server: localhost/127.0.0.1:8020. Already tried 9 time(s).
2011-12-03 07:27:40,582 [main] ERROR org.apache.pig.Main - ERROR 2999: Unexpected internal error. Failed to create DataStorage
Details at logfile: /home/bell/software/hadoop-0.20.2/pig-0.9.1/pig_1322868450154.log
Looking at the log, the underlying error was "Connection refused", which pointed at the NameNode. Running jps confirmed there was no NameNode process. Searching the web turned up the following solution:
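ERROR 2999 here is only Pig failing to reach HDFS; the real question is whether a NameNode JVM exists at all. A minimal sketch of the check, assuming the JDK's jps tool is on the PATH:

```shell
# count running NameNode JVMs; 0 explains the "Connection refused"
# retries above (any HDFS client would fail the same way)
jps 2>/dev/null | grep -c NameNode || true
```

The trailing `|| true` keeps the pipeline from reporting failure when grep finds no match.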
I recently ran into this problem: after running start-all.sh, jps showed no NameNode process.
The NameNode would only come up if I re-formatted it after every reboot. The root cause is the tmp directory: by default Hadoop keeps its data under /tmp, which is cleared on every reboot, so the NameNode's formatted metadata is lost along with it.
The fix is to point Hadoop at a tmp directory that survives reboots.
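The reasoning above hinges on where Hadoop keeps its data by default. In Hadoop 0.20.x, hadoop.tmp.dir defaults to /tmp/hadoop-${user.name}, and the NameNode image lives beneath it, so a /tmp wipe at reboot deletes the formatted namespace. A sketch of the path that disappears:

```shell
# Defaults in Hadoop 0.20.x:
#   hadoop.tmp.dir = /tmp/hadoop-${user.name}    (core-default.xml)
#   dfs.name.dir   = ${hadoop.tmp.dir}/dfs/name  (hdfs-default.xml)
echo "/tmp/hadoop-$(whoami)/dfs/name"
```

Anything placed under that path is gone after a reboot that clears /tmp.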
First, create a hadoop_tmp directory under your home directory (sudo is unnecessary inside your own home, and a root-owned directory would cause permission problems for Hadoop later):
mkdir ~/hadoop_tmp
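If the directory was already created with sudo, it will be owned by root and the Hadoop user cannot write to it. A sketch that creates the directory and verifies write access (the path is an example; use your own home):

```shell
mkdir -p "$HOME/hadoop_tmp"
# if it was created as root earlier, reclaim it first:
#   sudo chown "$(whoami)" "$HOME/hadoop_tmp"
[ -w "$HOME/hadoop_tmp" ] && echo "hadoop_tmp is writable"
```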
Then edit the core-site.xml file under hadoop/conf and add the following property:
<property>
<name>hadoop.tmp.dir</name>
<value>/home/chjzh/hadoop_tmp</value>
<description>A base for other temporary directories.</description>
</property>
Note: my username is chjzh, which is why the value is /home/chjzh/hadoop_tmp; substitute your own path.
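For context, the property must sit inside the top-level <configuration> element of core-site.xml. A sketch of the whole file for a pseudo-distributed setup, assuming HDFS on localhost (the fs.default.name value matches the hdfs://localhost address and port 8020 seen in the log above):

```xml
<?xml version="1.0"?>
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:8020</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/chjzh/hadoop_tmp</value>
    <description>A base for other temporary directories.</description>
  </property>
</configuration>
```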
That done, re-format the NameNode (note: formatting wipes any data already stored in HDFS):
hadoop namenode -format
Then start Hadoop:
start-all.sh
Run jps again and the NameNode process should now show up.
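The final check can be sketched as a single command (again assuming the JDK's jps is on the PATH; on a healthy pseudo-distributed 0.20.2 cluster, jps should also list DataNode, SecondaryNameNode, JobTracker, and TaskTracker):

```shell
# print the NameNode's jps line, or a warning if it is still down
jps 2>/dev/null | grep NameNode || echo "NameNode still missing"
```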