1. HADOOP
1.1 Installing Hadoop
1.1.1 Download and extract
1.1.2 Configure the environment variables (in /etc/profile or ~/.bashrc), then reload them so they take effect
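As a sketch, the variables added to /etc/profile might look like the following. The install path /usr/local/hadoop is an assumption here; adjust it to wherever you extracted the archive.

```shell
# Hypothetical install location -- adjust to your actual extraction path.
export HADOOP_HOME=/usr/local/hadoop
# Put the Hadoop command-line tools and daemon scripts on the PATH.
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
```

After editing the file, run `source /etc/profile` (or `source ~/.bashrc`) so the current shell picks up the new variables.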
1.1.3 Configure the Hadoop files and create the corresponding directories
- Edit core-site.xml
<property>
<name>fs.default.name</name>
<value>hdfs://spark1:9000</value>
</property>
- Edit hdfs-site.xml
<property>
<name>dfs.name.dir</name>
<value>/usr/local/data/namenode</value>
</property>
<property>
<name>dfs.data.dir</name>
<value>/usr/local/data/datanode</value>
</property>
<property>
<name>dfs.tmp.dir</name>
<value>/usr/local/data/tmp</value>
</property>
<property>
<name>dfs.replication</name>
<value>2</value>
</property>
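The directories referenced in hdfs-site.xml must exist before the NameNode is formatted. A minimal sketch, using the same paths as the values above (run it on every node; creating directories under /usr/local may require root):

```shell
# Create the storage directories named in hdfs-site.xml.
# Run on each node (spark1, spark2, spark3).
mkdir -p /usr/local/data/namenode
mkdir -p /usr/local/data/datanode
mkdir -p /usr/local/data/tmp
```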
- Edit mapred-site.xml
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
- Edit yarn-site.xml
<property>
<name>yarn.resourcemanager.hostname</name>
<value>spark1</value>
</property>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
- Edit the slaves file
spark2
spark3
1.2 Initialization and startup
1.2.1 Format the NameNode
hdfs namenode -format
1.2.2 Start and stop HDFS
start-dfs.sh
stop-dfs.sh
1.2.3 Verify
Verify with jps:
Services running on spark1: NameNode, SecondaryNameNode
Services running on spark2 (and spark3): DataNode
- Visit http://spark1:50070
1.3 Start and verify the YARN services
1.3.1 Start YARN
start-yarn.sh
1.3.2 Verify YARN
Verify with jps:
spark1: ResourceManager
spark2: NodeManager
spark3: NodeManager