1. Go to the directory that holds the Hadoop tarball (/usr/local in my case) and extract it: tar -zxvf hadoop-1.1.2.......
2. The extracted directory name is long, so rename it to hadoop.
3. Set the environment variables in /etc/profile: add export HADOOP_HOME=/usr/local/hadoop, and append $HADOOP_HOME/bin to PATH.
4. Run source /etc/profile so the changes take effect.
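Steps 1 through 4 can be sketched as shell commands. This demo runs in a scratch directory created with mktemp so it is safe to re-run; on a real machine you would extract under /usr/local and append the two export lines to /etc/profile instead.

```shell
# Scratch directory standing in for /usr/local (assumption for the demo).
PREFIX=$(mktemp -d)
cd "$PREFIX"

# Step 1 on a real box: tar -zxvf <hadoop tarball>.
# Here we just simulate the extracted tree.
mkdir -p hadoop-1.1.2/bin

# Step 2: shorten the long directory name to plain "hadoop".
mv hadoop-1.1.2 hadoop

# Step 3: the two lines to append to /etc/profile (written to a
# scratch profile here so the demo does not touch the real file).
PROFILE="$PREFIX/profile"
cat > "$PROFILE" <<EOF
export HADOOP_HOME=$PREFIX/hadoop
export PATH=\$PATH:\$HADOOP_HOME/bin
EOF

# Step 4: apply the settings to the current shell.
source "$PROFILE"
echo "HADOOP_HOME=$HADOOP_HOME"
```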
5. Edit the four configuration files under $HADOOP_HOME/conf for a pseudo-distributed install: hadoop-env.sh, core-site.xml, hdfs-site.xml, and mapred-site.xml. (For convenience you can edit them in WinSCP.)
6. In hadoop-env.sh, change line 9 from #export JAVA_HOME=/usr/lib/j2sdk1.5-sun to export JAVA_HOME=/usr/local/jdk (your JDK install directory).
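Step 6 can be done non-interactively with sed. The demo below edits a scratch copy containing just the stock line; on a real machine point it at $HADOOP_HOME/conf/hadoop-env.sh and substitute your actual JDK path.

```shell
# Scratch copy of the relevant hadoop-env.sh line (assumption for the demo).
ENVSH=$(mktemp)
echo '#export JAVA_HOME=/usr/lib/j2sdk1.5-sun' > "$ENVSH"

# Uncomment the line and point it at the local JDK install directory.
sed -i 's|^#export JAVA_HOME=.*|export JAVA_HOME=/usr/local/jdk|' "$ENVSH"

grep '^export JAVA_HOME' "$ENVSH"
```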
7. Changes to core-site.xml:
<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://hadoop0:9000</value>
<description>change your own hostname</description>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/usr/local/hadoop/tmp</value>
</property>
</configuration>
8. Changes to hdfs-site.xml:
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.permissions</name>
<value>false</value>
</property>
</configuration>
9. Changes to mapred-site.xml:
<configuration>
<property>
<name>mapred.job.tracker</name>
<value>hadoop0:9001</value>
<description>change your own hostname</description>
</property>
</configuration>
10. Format the NameNode: run hadoop namenode -format.
11. Start Hadoop with start-all.sh, then run jps to check that the five daemons are up (NameNode, DataNode, SecondaryNameNode, JobTracker, TaskTracker) and confirm the install succeeded.
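The jps check in step 11 can be scripted. check_daemons below is a hypothetical helper, not part of Hadoop: it takes the captured output of jps and reports any of the five expected daemons that is missing.

```shell
# Report which of the five Hadoop 1.x daemons are absent from jps output;
# returns 0 only when all five names appear.
check_daemons() {
  out="$1"; missing=0
  for d in NameNode DataNode SecondaryNameNode JobTracker TaskTracker; do
    echo "$out" | grep -q "$d" || { echo "$d not running"; missing=1; }
  done
  return "$missing"
}

# On a live box: check_daemons "$(jps)" && echo "all daemons up"
```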
12. Alternatively, verify in a browser: open hadoop0:50070 to see the NameNode page, or hadoop0:50030 for the MapReduce (JobTracker) page.
13. To get rid of the startup warning, look at the start-all.sh file, add export HADOOP_HOME_WARN_SUPPRESS=1 to /etc/profile, and run source /etc/profile again.
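Step 13 can be sketched as follows. The demo writes to a scratch profile created with mktemp; on a real machine you would append the line to /etc/profile and re-source that file.

```shell
# Scratch profile standing in for /etc/profile (assumption for the demo).
PROFILE=$(mktemp)

# Suppresses the "$HADOOP_HOME is deprecated" warning from start-all.sh.
echo 'export HADOOP_HOME_WARN_SUPPRESS=1' >> "$PROFILE"

source "$PROFILE"
echo "HADOOP_HOME_WARN_SUPPRESS=$HADOOP_HOME_WARN_SUPPRESS"
```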