Environment preparation
- Install a Java environment
- Turn off the firewall
- Install ssh
Run the following command to generate a key pair:
ssh-keygen -t rsa
Press Enter at every prompt until generation finishes:
Your identification has been saved in /Users/dengwenjing/.ssh/id_rsa.
Your public key has been saved in /Users/dengwenjing/.ssh/id_rsa.pub.
Run the following commands to authorize the key and restrict its permissions:
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod og-wx ~/.ssh/authorized_keys
- Grant remote login permission to all users (on macOS: System Preferences > Sharing > Remote Login)
- Try logging in to localhost:
ssh localhost
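The key and permission steps above can be sanity-checked with a short script. This is only a sketch; check_ssh_setup is a hypothetical helper, not part of any tool (it checks that the files exist and that authorized_keys is not group/other writable, which is part of what chmod og-wx enforces):

```python
import os
import stat

def check_ssh_setup(ssh_dir):
    """Return a list of problems with a passwordless-SSH setup in ssh_dir.

    Checks that the key pair and authorized_keys exist, and that
    authorized_keys is not writable by group or others (part of what
    `chmod og-wx` enforces). An empty list means the checks passed.
    """
    problems = []
    for name in ("id_rsa", "id_rsa.pub", "authorized_keys"):
        if not os.path.exists(os.path.join(ssh_dir, name)):
            problems.append("missing: " + name)
    auth = os.path.join(ssh_dir, "authorized_keys")
    if os.path.exists(auth):
        mode = os.stat(auth).st_mode
        if mode & (stat.S_IWGRP | stat.S_IWOTH):
            problems.append("authorized_keys is group/other writable")
    return problems
```

Running it against ~/.ssh after the steps above should return an empty list; if `ssh localhost` still asks for a password, permissions are the usual culprit.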
Install Hadoop
- Install Hadoop with brew:
brew install hadoop
(If you have not installed brew yet, see the separate article on installing brew.)
- Edit the Hadoop configuration
- Edit core-site.xml, which holds cluster-wide parameters such as the temporary directory and the default file system host and port:
vi /usr/local/Cellar/hadoop/3.2.1/libexec/etc/hadoop/core-site.xml
Add the content below inside the configuration element.
The hadoop.tmp.dir set here is the base directory for Hadoop's files; the dfs folder under it normally holds the NameNode and DataNode data.
<configuration>
<property>
<name>hadoop.tmp.dir</name>
<value>file:/usr/local/Cellar/hadoop/3.2.1/libexec/tmp</value>
</property>
<property>
<name>fs.defaultFS</name>
<value>hdfs://localhost:8020</value>
</property>
</configuration>
- Configure hdfs-site.xml, which holds HDFS settings such as the replication factor and the NameNode/DataNode directories. In the same directory, run:
vi hdfs-site.xml
Update it to the following:
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:/usr/local/Cellar/hadoop/3.2.1/libexec/tmp/dfs/name</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:/usr/local/Cellar/hadoop/3.2.1/libexec/tmp/dfs/data</value>
</property>
</configuration>
- Configure mapred-site.xml, designating YARN as the resource manager:
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
</configuration>
- Configure yarn-site.xml, which sets the resource-management parameters:
<configuration>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.env-whitelist</name>
<value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_MAPRED_HOME</value>
</property>
</configuration>
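All four *-site.xml files above use the same configuration/property layout. As a sketch (read_hadoop_conf is a hypothetical helper, assuming the files are plain well-formed XML), the name/value pairs can be read back to double-check a config before restarting:

```python
import xml.etree.ElementTree as ET

def read_hadoop_conf(xml_text):
    """Parse a Hadoop *-site.xml string into a {name: value} dict."""
    root = ET.fromstring(xml_text)
    return {prop.findtext("name"): prop.findtext("value")
            for prop in root.iter("property")}
```

For example, read_hadoop_conf(open(path).read()).get("fs.defaultFS") should return hdfs://localhost:8020 for the core-site.xml above.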
- Add Hadoop to the system environment variables:
vi ~/.bash_profile
export HADOOP_HOME=/usr/local/Cellar/hadoop/3.2.1/libexec
export HADOOP_ROOT_LOGGER=DEBUG,console
export PATH=$PATH:${HADOOP_HOME}/bin
Note that HADOOP_ROOT_LOGGER=DEBUG,console makes logging very verbose; remove it once setup is verified. Then reload the profile:
source ~/.bash_profile
- Format HDFS:
cd /usr/local/Cellar/hadoop/3.2.1/bin
./hdfs namenode -format
- Start Hadoop
The scripts live in the sbin folder; the following starts all services:
start-all.sh
To stop everything, use stop-all.sh (stop-dfs.sh and stop-yarn.sh stop HDFS and YARN separately).
Check the running services with:
jps
and verify they are all present:
67059 NameNode
67636 NodeManager
67303 SecondaryNameNode
64919
67160 DataNode
68619 Jps
12829 Main
67534 ResourceManager
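One way to confirm that all five Hadoop daemons came up is to diff the jps output against the expected set (a sketch; missing_daemons is a hypothetical helper, not a Hadoop tool):

```python
# The five daemons a single-node Hadoop setup should be running.
REQUIRED = {"NameNode", "DataNode", "SecondaryNameNode",
            "ResourceManager", "NodeManager"}

def missing_daemons(jps_output):
    """Return the required Hadoop daemons absent from `jps` output."""
    running = set()
    for line in jps_output.splitlines():
        parts = line.split()
        if len(parts) == 2:  # "pid ClassName"; bare pids are skipped
            running.add(parts[1])
    return REQUIRED - running
```

Feeding it the output of jps (e.g. via subprocess.check_output(["jps"], text=True)) returns an empty set when everything is running, or the names of the daemons that failed to start.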
View the NameNode web UI:
http://localhost:9870/
View the resource manager:
http://localhost:8088
References
https://blog.csdn.net/vbirdbest/article/details/88189753
https://www.jianshu.com/p/af8a50f5a653