Deploying Hadoop 2.7.2 in Pseudo-Distributed Mode on a Mac


Reposted from: http://blog.csdn.net/cdut100/article/details/51813481

1. Set up passwordless SSH login to localhost

1. ssh-keygen -t rsa
Press Enter at each prompt to accept the defaults
2. cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
3. chmod og-wx ~/.ssh/authorized_keys
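
You can confirm that passwordless login now works (the first connection may ask you to accept the host key):

ssh localhost
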
2. Configure Hadoop in pseudo-distributed mode:

I. Check the JDK version

1. java -version

k-MacBook-Pro:~ $ java -version

java version "1.8.0_60"

Java(TM) SE Runtime Environment (build 1.8.0_60-b27)

Java HotSpot(TM) 64-Bit Server VM (build 25.60-b23, mixed mode)


2. If the terminal does not recognize the java command, search for the JDK (on Google or Baidu) and install it; the installation path is generally:

export JAVA_HOME=/Library/Java/JavaVirtualMachines/jdk1.8.0_60.jdk/Contents/Home
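
On macOS the exact JDK path can also be queried with the built-in java_home helper, assuming a 1.8 JDK is installed:

/usr/libexec/java_home -v 1.8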


II. Download Hadoop: http://hadoop.apache.org/releases.html

1. After downloading Hadoop, extract it to any working directory you like:

export HADOOP_HOME=/Users/k/hadoop-2.7.2
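
A minimal sketch of the matching ~/.bash_profile entries, assuming the same install path as above, so that the hadoop, hdfs, and start-*.sh commands used later are on the PATH:

export HADOOP_HOME=/Users/k/hadoop-2.7.2
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin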


2. Go to the Hadoop configuration directory:

/Users/k/hadoop-2.7.2/etc/hadoop


3. vim hadoop-env.sh (configure the Hadoop environment)

export JAVA_HOME=/Library/Java/JavaVirtualMachines/jdk1.8.0_60.jdk/Contents/Home

export HADOOP_HEAPSIZE=2000

export HADOOP_OPTS="-Djava.security.krb5.realm=OX.AC.UK -Djava.security.krb5.kdc=kdc0.ox.ac.uk:kdc1.ox.ac.uk"
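
After saving, you can check that the Hadoop scripts pick up this environment; the banner should report version 2.7.2:

hadoop version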


4. vim core-site.xml (configure the NameNode hostname and port)

<configuration>

    <property>

        <name>hadoop.tmp.dir</name>

        <value>/Users/k/hadoop-2.7.2/tmp/hadoop-${user.name}</value>

        <description>A base for other temporary directories.</description>

    </property>

    <property>

        <name>fs.default.name</name>

        <value>hdfs://localhost:8000</value>

    </property>

</configuration>


5. vim hdfs-site.xml (configure the HDFS replication factor)

<configuration>

    <property>

        <name>dfs.replication</name>

        <value>1</value>

    </property>

</configuration>
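
Optionally, both files can be checked with hdfs getconf (fs.defaultFS is the current name of the deprecated fs.default.name key):

hdfs getconf -confKey fs.defaultFS      # expect hdfs://localhost:8000

hdfs getconf -confKey dfs.replication   # expect 1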


6. vim mapred-site.xml (configure the JobTracker hostname and port)

<configuration>

    <property>

        <name>mapred.job.tracker</name>

        <value>hdfs://localhost:9000</value>

    </property>

    <property>

        <name>mapred.tasktracker.map.tasks.maximum</name>

        <value>2</value>

    </property>

    <property>

        <name>mapred.tasktracker.reduce.tasks.maximum</name>

        <value>2</value>

    </property>

</configuration>
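
Note that mapred.job.tracker and the tasktracker properties above are legacy MRv1 (JobTracker/TaskTracker) names, which Hadoop 2.x ignores when jobs run on YARN. As a sketch of a commonly used alternative, not part of the original walkthrough, mapred-site.xml can instead declare the YARN framework:

<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>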


7. vim yarn-site.xml

<configuration>

    <property>

        <name>yarn.nodemanager.aux-services</name>

        <value>mapreduce_shuffle</value>

    </property>

</configuration>


8. Initialize HDFS by formatting the NameNode

k-MacBook-Pro:hadoop k$ hdfs namenode -format

16/07/03 01:35:54 INFO namenode.NameNode: STARTUP_MSG: 

/************************************************************


9. Start Hadoop

k-MacBook-Pro:hadoop k$ start-all.sh 

This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh

16/07/03 01:31:20 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

Starting namenodes on [localhost]

localhost: namenode running as process 1325. Stop it first.

localhost: starting datanode, logging to /Users/kirogi/hadoop-2.7.2/logs/hadoop-kirogi-datanode-kirogis-MacBook-Pro.local.out

Starting secondary namenodes [0.0.0.0]

0.0.0.0: secondarynamenode running as process 882. Stop it first.

16/07/03 01:31:27 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

starting yarn daemons

resourcemanager running as process 994. Stop it first.

localhost: nodemanager running as process 1077. Stop it first.
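
As the deprecation warning above suggests, the daemons can also be started (and later stopped) separately:

start-dfs.sh

start-yarn.sh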


10. Verify Hadoop

k-MacBook-Pro:hadoop k$ jps

882 SecondaryNameNode

994 ResourceManager

1077 NodeManager

4312 Jps

1325 NameNode

Or:

Open http://localhost:50070 to reach the HDFS web UI.

Open http://localhost:8088 to reach the Hadoop cluster management page (the YARN ResourceManager web UI).

How do you use Hadoop?

Take WordCount as an example:

Create an input folder under your home directory and put a few text files in it.
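
For example, a couple of throwaway files could be created like this (the file names are arbitrary):

mkdir -p ~/input

echo "hello hadoop hello world" > ~/input/sample1.txt

echo "hello mapreduce" > ~/input/sample2.txt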

Of course, if necessary, you can wipe HDFS and restart all of the Hadoop services first. The full sequence of commands is:

stop-all.sh 

hdfs namenode -format

start-all.sh 

hadoop fs -mkdir -p input     # note: the -p flag is needed here

hadoop fs -ls

hadoop fs -put ~/input/*.txt input

hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.2.jar wordcount input output

hadoop fs -cat output/part-r-00000
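
To inspect the rest of the job output, or copy it back to the local filesystem (the local target directory here is arbitrary):

hadoop fs -ls output

hadoop fs -get output ~/wordcount-output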


