Compiling Hadoop 2.7.1 from Source (64-bit)

Document download: http://download.csdn.net/detail/hanxindere/9153021

Software environment:

CentOS 6.5, 64-bit

JDK 1.7.0_02, 64-bit

Maven 3.2.3

Ant 1.9.4

protobuf-2.5.0.tar.gz

All of the software can be downloaded from http://yunpan.cn/cH5ebUqNPC6Be (access code: 2c57)



Notes:

- If the machine has less than 1 GB of memory, a swap partition is required; see the swap setup steps below.

- Do not use a Maven release that is too new; newer versions sometimes cause problems.

- Use a 64-bit JDK between 1.7.0 and 1.7.0_45 (releases above 1.7.0_45 may cause failures in hadoop-common/security). Archived JDK downloads: http://www.oracle.com/technetwork/java/archive-139210.html

- Be sure to install the basic software below first.

1. Basic Software Installation

Install the base build environment:

yum -y install svn ncurses-devel gcc*

yum -y install lzo-devel zlib-devel autoconf automake libtool cmake openssl-devel
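After the two yum commands, a quick loop can confirm the key build packages actually landed; rpm -q exits non-zero for any package that is not installed (this check is my addition, not part of the original guide, and assumes an RPM-based system like CentOS 6.5):

```shell
# Hypothetical sanity check: confirm the key build packages are installed.
# rpm -q exits non-zero for packages that are not installed.
for p in gcc cmake openssl-devel zlib-devel autoconf automake libtool; do
    rpm -q "$p" >/dev/null 2>&1 && echo "$p installed" || echo "$p MISSING"
done
```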

 

2. Software Installation

1) JDK 1.7.0_02 (releases above 1.7.0_45 may cause failures in hadoop-common/security, producing many errors when compiling the Hadoop 2.7.1 source)

2) Maven 3.0 or newer

3) ProtocolBuffer 2.5.0

4) Findbugs 1.3.9, optional (not installed for this build)

5) Ant 1.9.4

 

This guide installs everything as root under /root, but a non-root user and a different directory work just as well.

2.1. Installing ProtocolBuffer

 

The standard autotools build and install procedure:

1) cd /root

2) tar xzf protobuf-2.5.0.tar.gz

3) cd protobuf-2.5.0

4) ./configure --prefix=/root/protobuf

5) make

6) make install
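Hadoop 2.7.1's build expects protoc 2.5.0 specifically, so it is worth verifying which binary ends up on the PATH. A minimal sketch (the /root/protobuf prefix comes from the ./configure line above; the check itself is my addition):

```shell
# Put the freshly built protoc first on the PATH (prefix from ./configure above).
export PATH=/root/protobuf/bin:$PATH

# "protoc --version" prints e.g. "libprotoc 2.5.0"; keep only the number.
have=$(protoc --version 2>/dev/null | awk '{print $2}')
if [ "$have" = "2.5.0" ]; then
    echo "protoc OK"
else
    echo "unexpected protoc version: ${have:-none}"
fi
```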

 

2.2. Installing the JDK

1) cd /root

2) tar xzf jdk-7u02-linux-x64.gz

3) ln -s jdk1.7.0_02 jdk

 

2.3. Installing Maven

1) cd /root

2) tar xzf apache-maven-3.0.5-bin.tar.gz

3) ln -s apache-maven-3.0.5 maven

 

2.4. Installing Ant

1) cd /root

2) tar xzf apache-ant-1.9.4-bin.tar.gz

3) ln -s apache-ant-1.9.4 ant

 

2.5. Environment Variables

After installing everything, set the environment variables by editing /etc/profile (or ~/.profile) and adding the lines below. Note that the example uses /opt paths; adjust them to wherever you actually installed each tool (this guide installed under /root):

#set environment

export JAVA_HOME=/opt/jdk

export HADOOP_HOME=/opt/hadoop

export MAVEN_HOME=/opt/maven

#guard against running out of memory

export MAVEN_OPTS="-Xms512m -Xmx1024m"

export JAVA_OPTS="-Xms512m -Xmx1024m"

export ANT_HOME=/opt/ant

export FINDBUGS_HOME=/opt/findbugs

 

export PATH=$FINDBUGS_HOME/bin:$ANT_HOME/bin:$MAVEN_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$JAVA_HOME/bin:$JAVA_HOME/jre/bin:$PATH

export CLASSPATH=.:$CLASSPATH:$JAVA_HOME/lib:$JAVA_HOME/jre/lib
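A typo in any of these variables only surfaces later as a confusing build failure, so a small sanity loop that checks each *_HOME directory actually exists can save time (this loop is my addition, not from the original guide):

```shell
# Hypothetical sanity check: each *_HOME should point at a real directory.
for v in JAVA_HOME MAVEN_HOME ANT_HOME; do
    dir=$(eval echo "\$$v")
    if [ -d "$dir" ]; then
        echo "$v OK ($dir)"
    else
        echo "$v missing or wrong: '$dir'"
    fi
done
```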

 

3. Compiling the Hadoop Source

If the host has less than 1 GB of memory, create swap space to use as overflow.

Run free -m to check whether a swap partition already exists.

If there is no swap partition, create one as follows:

1) First, create a file to hold the swap space somewhere with enough room, e.g. /swap1:

    #dd if=/dev/zero of=/swap1 bs=1M count=2048

   Here if is the input file, of the output file, bs=1M the block size, and count=2048 the number of blocks, i.e. 2 GB in total.

2) Format the file as swap:

    #mkswap /swap1

3) Activate it so the swap file is used immediately:

    #swapon /swap1

4) To make it permanent, add the following line to /etc/fstab:

    /swap1  swap      swap    defaults   0 0
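The dd sizing above can be double-checked with shell arithmetic: bs=1M (1,048,576 bytes) times count=2048 gives exactly 2 GiB:

```shell
bs_bytes=$((1024 * 1024))        # bs=1M in bytes
count=2048                       # count=2048 blocks
total=$((bs_bytes * count))
echo "swap file size: $total bytes ($((total / 1024 / 1024 / 1024)) GiB)"
```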


Before compiling, edit hadoop-src-2.7.1/hadoop-common-project/hadoop-auth/pom.xml as follows:

         <dependency>

     <groupId>org.mortbay.jetty</groupId>

     <artifactId>jetty-util</artifactId>

     <scope>test</scope>

   </dependency>

   <dependency>

     <groupId>org.mortbay.jetty</groupId>

     <artifactId>jetty</artifactId>               <!-- modify this line -->

     <scope>test</scope>

   </dependency>
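After editing, a quick grep can confirm that both org.mortbay.jetty dependencies now carry the test scope (the path and the expected count of 2 are assumptions based on the snippet above):

```shell
# Hypothetical check: count test-scoped jetty dependencies in hadoop-auth's pom.
pom="hadoop-common-project/hadoop-auth/pom.xml"
hits=$(grep -A3 "org.mortbay.jetty" "$pom" | grep -c "<scope>test</scope>")
echo "test-scoped jetty entries: $hits"    # expect 2 after the edit
```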

 

With the preparation above complete, run:

mvn package -Pdist -DskipTests -Dtar

If Maven's downloads are too slow, extract the contents of mvn3.2.2_hadoop2.7.1.tar.gz into ~/.m2/repository; the software versions shipped on the cloud drive above are recommended.

When the build succeeds, the Hadoop binary package hadoop-2.7.1.tar.gz is placed in the hadoop-dist/target subdirectory of the source tree:

 

[INFO] Apache Hadoop Main ................................ SUCCESS [  4.075 s]
[INFO] Apache Hadoop Project POM ......................... SUCCESS [  2.469 s]
[INFO] Apache Hadoop Annotations ......................... SUCCESS [  6.505 s]
[INFO] Apache Hadoop Assemblies .......................... SUCCESS [  0.309 s]
[INFO] Apache Hadoop Project Dist POM .................... SUCCESS [  2.699 s]
[INFO] Apache Hadoop Maven Plugins ....................... SUCCESS [  5.294 s]
[INFO] Apache Hadoop MiniKDC ............................. SUCCESS [  6.434 s]
[INFO] Apache Hadoop Auth ................................ SUCCESS [ 10.735 s]
[INFO] Apache Hadoop Auth Examples ....................... SUCCESS [  4.638 s]
[INFO] Apache Hadoop Common .............................. SUCCESS [02:52 min]
[INFO] Apache Hadoop NFS ................................. SUCCESS [ 14.563 s]
[INFO] Apache Hadoop KMS ................................. SUCCESS [ 17.671 s]
[INFO] Apache Hadoop Common Project ...................... SUCCESS [  0.047 s]
[INFO] Apache Hadoop HDFS ................................ SUCCESS [05:23 min]
[INFO] Apache Hadoop HttpFS .............................. SUCCESS [01:04 min]
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SUCCESS [02:09 min]
[INFO] Apache Hadoop HDFS-NFS ............................ SUCCESS [  7.087 s]
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.067 s]
[INFO] hadoop-yarn ....................................... SUCCESS [  0.041 s]
[INFO] hadoop-yarn-api ................................... SUCCESS [02:32 min]
[INFO] hadoop-yarn-common ................................ SUCCESS [33:53 min]
[INFO] hadoop-yarn-server ................................ SUCCESS [  0.295 s]
[INFO] hadoop-yarn-server-common ......................... SUCCESS [ 23.604 s]
[INFO] hadoop-yarn-server-nodemanager .................... SUCCESS [ 35.011 s]
[INFO] hadoop-yarn-server-web-proxy ...................... SUCCESS [  6.994 s]
[INFO] hadoop-yarn-server-applicationhistoryservice ...... SUCCESS [ 14.614 s]
[INFO] hadoop-yarn-server-resourcemanager ................ SUCCESS [ 40.431 s]
[INFO] hadoop-yarn-server-tests .......................... SUCCESS [ 10.644 s]
[INFO] hadoop-yarn-client ................................ SUCCESS [ 13.446 s]
[INFO] hadoop-yarn-server-sharedcachemanager ............. SUCCESS [  5.840 s]
[INFO] hadoop-yarn-applications .......................... SUCCESS [  0.074 s]
[INFO] hadoop-yarn-applications-distributedshell ......... SUCCESS [  3.636 s]
[INFO] hadoop-yarn-applications-unmanaged-am-launcher .... SUCCESS [  3.179 s]
[INFO] hadoop-yarn-site .................................. SUCCESS [  0.079 s]
[INFO] hadoop-yarn-registry .............................. SUCCESS [  9.001 s]
[INFO] hadoop-yarn-project ............................... SUCCESS [  7.124 s]
[INFO] hadoop-mapreduce-client ........................... SUCCESS [  0.116 s]
[INFO] hadoop-mapreduce-client-core ...................... SUCCESS [ 37.457 s]
[INFO] hadoop-mapreduce-client-common .................... SUCCESS [ 34.469 s]
[INFO] hadoop-mapreduce-client-shuffle ................... SUCCESS [  8.019 s]
[INFO] hadoop-mapreduce-client-app ....................... SUCCESS [ 15.131 s]
[INFO] hadoop-mapreduce-client-hs ........................ SUCCESS [  9.849 s]
[INFO] hadoop-mapreduce-client-jobclient ................. SUCCESS [17:20 min]
[INFO] hadoop-mapreduce-client-hs-plugins ................ SUCCESS [  3.078 s]
[INFO] Apache Hadoop MapReduce Examples .................. SUCCESS [  9.110 s]
[INFO] hadoop-mapreduce .................................. SUCCESS [  5.265 s]
[INFO] Apache Hadoop MapReduce Streaming ................. SUCCESS [03:17 min]
[INFO] Apache Hadoop Distributed Copy .................... SUCCESS [08:16 min]
[INFO] Apache Hadoop Archives ............................ SUCCESS [  3.757 s]
[INFO] Apache Hadoop Rumen ............................... SUCCESS [  8.926 s]
[INFO] Apache Hadoop Gridmix ............................. SUCCESS [  7.391 s]
[INFO] Apache Hadoop Data Join ........................... SUCCESS [  5.499 s]
[INFO] Apache Hadoop Ant Tasks ........................... SUCCESS [  3.722 s]
[INFO] Apache Hadoop Extras .............................. SUCCESS [  5.240 s]
[INFO] Apache Hadoop Pipes ............................... SUCCESS [ 12.424 s]
[INFO] Apache Hadoop OpenStack support ................... SUCCESS [  8.173 s]
[INFO] Apache Hadoop Amazon Web Services support ......... SUCCESS [30:11 min]
[INFO] Apache Hadoop Azure support ....................... SUCCESS [10:52 min]
[INFO] Apache Hadoop Client .............................. SUCCESS [ 14.450 s]
[INFO] Apache Hadoop Mini-Cluster ........................ SUCCESS [  0.290 s]
[INFO] Apache Hadoop Scheduler Load Simulator ............ SUCCESS [  8.773 s]
[INFO] Apache Hadoop Tools Dist .......................... SUCCESS [ 15.627 s]
[INFO] Apache Hadoop Tools ............................... SUCCESS [  0.026 s]
[INFO] Apache Hadoop Distribution ........................ SUCCESS [01:14 min]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:07 h
[INFO] Finished at: 2015-10-02T14:10:50+08:00
[INFO] Final Memory: 116M/494M
[INFO] ------------------------------------------------------------------------
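Once the run finishes, a quick existence check (using the hadoop-dist/target path mentioned above) confirms the artifact was actually produced:

```shell
# Run from the top of the source tree after the build completes.
tarball="hadoop-dist/target/hadoop-2.7.1.tar.gz"
if [ -f "$tarball" ]; then
    echo "build artifact present: $tarball"
else
    echo "missing: $tarball"
fi
```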
