1. Download the installation package (download links are easy to find online; you also need to install the JDK, which is likewise well documented elsewhere, so it is not repeated here. This guide uses Hadoop 3.1.1.)
2. Configure the environment variables, then make them take effect by running: source /etc/profile
export JAVA_HOME=/usr/java/java8/jdk1.8.0_191
export HADOOP_HOME=/usr/local/hadoop
export HADOOP_CONF_DIR=${HADOOP_HOME}/etc/hadoop
export HADOOP_COMMON_LIB_NATIVE_DIR=${HADOOP_HOME}/lib/native
export HADOOP_OPTS="-Djava.library.path=${HADOOP_HOME}/lib"
export CLASSPATH=.:${JAVA_HOME}/lib:${HIVE_HOME}/lib:$CLASSPATH
export PATH=.:${JAVA_HOME}/bin:${HADOOP_HOME}/bin:${HADOOP_HOME}/sbin:${HIVE_HOME}/bin:$PATH
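After sourcing /etc/profile, it is worth confirming the variables actually landed in the current shell. A minimal self-contained sketch (it sources a temp file instead of the real /etc/profile, and assumes the /usr/local/hadoop path used above):

```shell
# Write the two key exports to a temp file and source it, mimicking
# what `source /etc/profile` does with the real entries above
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
export HADOOP_HOME=/usr/local/hadoop
export PATH=${HADOOP_HOME}/bin:${HADOOP_HOME}/sbin:$PATH
EOF
. "$tmp"
echo "$HADOOP_HOME"                  # /usr/local/hadoop
case ":$PATH:" in
  *":/usr/local/hadoop/bin:"*) echo "PATH ok" ;;
esac
rm -f "$tmp"
```

On a real install, `hadoop version` printing the version banner is the simplest end-to-end check that both PATH and HADOOP_HOME are wired up.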
3. Edit the core-site.xml and hdfs-site.xml files (both under ${HADOOP_HOME}/etc/hadoop).
core-site.xml:
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://localhost:9000</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/opt/hadoop/tmp</value>
</property>
</configuration>
hdfs-site.xml:
<configuration>
<property>
<name>dfs.namenode.name.dir</name>
<value>/opt/hadoop/hdfs/name</value>
<description>Local directory for NameNode metadata (the Hadoop 2 name dfs.name.dir is deprecated in 3.x)</description>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>/opt/hadoop/hdfs/data</value>
<description>Local directory for DataNode block storage (dfs.data.dir is the deprecated Hadoop 2 name)</description>
</property>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
</configuration>
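A quick way to sanity-check that a property in one of these *-site.xml files has the value you intended, using only plain shell tools. This sketch writes a minimal hdfs-site.xml to a temp file so it is self-contained; point the same pipeline at your real file instead:

```shell
# Extract the <value> that follows a given <name> in a Hadoop site file
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
</configuration>
EOF
rep=$(grep -A1 '<name>dfs.replication</name>' "$tmp" \
      | sed -n 's:.*<value>\(.*\)</value>.*:\1:p')
echo "dfs.replication = $rep"        # dfs.replication = 1
rm -f "$tmp"
```

This relies on the one-tag-per-line layout used above; for arbitrarily formatted XML a real parser is the safer choice.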
The directories configured above do not need to be created by hand; Hadoop will create them on first use. Just set the paths to whatever suits your machine.
4. Set up passwordless SSH login
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
(RSA is used here instead of the original DSA, because OpenSSH 7.0+ disables DSA keys by default.)
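The chmod 600 matters: with the default StrictModes setting, sshd refuses to use an authorized_keys file that is group- or world-readable. A throwaway sketch that checks the mode in a temp directory (no real keys involved; the GNU/BSD stat fallback is an assumption about your platform):

```shell
# Create a stand-in authorized_keys, lock it down, and read back its mode
tmp=$(mktemp -d)
touch "$tmp/authorized_keys"
chmod 600 "$tmp/authorized_keys"
# stat -c is GNU coreutils; the -f form is the BSD/macOS fallback
perms=$(stat -c '%a' "$tmp/authorized_keys" 2>/dev/null \
        || stat -f '%Lp' "$tmp/authorized_keys")
echo "$perms"                        # 600
rm -rf "$tmp"
```

On the real machine, `ssh localhost` should now log in without prompting for a password.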
5. Edit sbin/start-dfs.sh and sbin/stop-dfs.sh
Add the following lines to both files:
HDFS_DATANODE_USER=root
HADOOP_SECURE_DN_USER=hdfs
HDFS_NAMENODE_USER=root
HDFS_SECONDARYNAMENODE_USER=root
6. Edit sbin/start-yarn.sh and sbin/stop-yarn.sh, adding the following lines to both:
YARN_RESOURCEMANAGER_USER=root
HADOOP_SECURE_DN_USER=yarn
YARN_NODEMANAGER_USER=root
7. Edit the ${HADOOP_HOME}/etc/hadoop/hadoop-env.sh file and set JAVA_HOME explicitly in it (e.g. export JAVA_HOME=/usr/java/java8/jdk1.8.0_191), because the Hadoop startup scripts do not reliably inherit it from the login shell.
8. Format the NameNode (one-time initialization)
./bin/hdfs namenode -format
9. Start HDFS
./sbin/start-dfs.sh
10. Check that startup succeeded by running jps; you should see NameNode, DataNode, and SecondaryNameNode in the output.
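The jps check can be scripted. Real `jps` output cannot be reproduced here, so this sketch filters a made-up sample listing (the PIDs are hypothetical) exactly the way you would filter the real output:

```shell
# Count how many of the expected HDFS daemons appear in a jps-style listing
sample='12001 NameNode
12102 DataNode
12233 SecondaryNameNode
12300 Jps'
n=$(printf '%s\n' "$sample" | grep -cE 'NameNode|DataNode|SecondaryNameNode')
echo "$n HDFS daemons found"         # 3 HDFS daemons found
```

On the live machine, replace the sample with `jps` itself: `jps | grep -cE 'NameNode|DataNode|SecondaryNameNode'` should print 3 after a successful start-dfs.sh.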
(PS: steps 6 and 7 are the critical ones; if they are skipped, startup will fail.)