svn co http://svn.apache.org/repos/asf/hadoop/common/tags/release-2.0.0-alpha xxx
- linux: see $HADOOP_HOME/BUILDING.txt and $HADOOP_HOME/hadoop-mapreduce-project/INSTALL. (Note: I missed these two files at first, so everything below was worked out step by step, through debugging, frustration, and pain.)
- Install protocol buffers: http://protobuf.googlecode.com/files/protobuf-2.4.1.tar.gz
- Then mvn install -Dmaven.test.skip=true is enough.
- To produce the deployment tarball: mvn install -Dtar -Pdist -Dmaven.javadoc.skip=true -DskipTests=true
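With -Pdist the tarball should end up under hadoop-dist/target (that path is from memory of the 2.x build layout, so verify it against your own build output). A minimal sketch for unpacking it; unpack_dist is a hypothetical helper, not part of Hadoop:

```shell
# unpack_dist: find the first hadoop-*.tar.gz under a build target directory
# and unpack it into a destination directory. Hypothetical helper for this post.
unpack_dist() {
  target_dir=$1
  dest=$2
  tarball=$(find "$target_dir" -maxdepth 1 -name 'hadoop-*.tar.gz' | head -n 1)
  [ -n "$tarball" ] || return 1          # no tarball found: packaging failed
  mkdir -p "$dest" && tar -xzf "$tarball" -C "$dest"
}

# e.g. unpack_dist hadoop-dist/target /home/yarn
```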
- win:
- On Windows, first download Cygwin (http://www.cygwin.com/) and add its path to PATH.
- Download the protobuf compiler from http://code.google.com/p/protobuf/downloads/list, e.g. http://code.google.com/p/protobuf/downloads/detail?name=protoc-2.4.1-win32.zip&can=2&q= ; unzip it and put protoc.exe into the bin directory under the Cygwin root.
- Apply the patch from https://issues.apache.org/jira/browse/MAPREDUCE-3881#comment-13478928, which simply modifies hadoop-mapreduce-project\hadoop-yarn\hadoop-yarn-common\pom.xml: https://issues.apache.org/jira/secure/attachment/12515081/pom.xml.patch
- Then mvn install -Dmaven.test.skip=true is enough.
Problems I ran into (my final conf directory is at https://github.com/lwwcl1314/apollo/tree/master/distrubutescript/conf-hadoop-2.0.0-alpha):
- After importing the project into Eclipse 3.7, F4 could not find ClientRMService, the implementation class of ClientRMProtocol, yet Eclipse 3.6 could. This was odd, and it forced me back to Eclipse 3.6.
- While packaging with mvn install -Dtar -Pdist -Dmaven.javadoc.skip=true -Dmaven.test.skip=true, this came up:
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-assembly-plugin:2.3:single (package-mapreduce) on project hadoop-mapreduce: Assembly is incorrectly configured: hadoop-mapreduce-dist: Assembly is incorrectly configured: hadoop-mapreduce-dist: [ERROR] Assembly: hadoop-mapreduce-dist is not configured correctly: Cannot find attachment with classifier: tests in module project: org.apache.hadoop:hadoop-mapreduce-client-jobclient:jar:2.0.0-alpha. Please exclude this module from the module-set.
So I edited hadoop-mapreduce-dist.xml in the hadoop-assemblies project. At first I simply deleted the moduleSets, which later made YARN fail to start; then I commented out <attachmentClassifier>tests</attachmentClassifier> instead, though at the time I did not know what it actually does.
- Update: found the real cause. -Dmaven.test.skip=true skips compiling the test resources, and maven-assembly-plugin needs them here, hence the error. Use -DskipTests=true instead.
- After formatting HDFS and starting it: java.io.IOException: Incompatible clusterIDs in /tmp/hadoop-yarn/dfs/data: namenode clusterID = CID-2917fbbc-dcd7-40f3-8a1f-f222c3941fc1; datanode clusterID = CID-d53b270a-893f-4fa5-b5e5-1e5ed5ee4e86. The fix is to delete everything under /tmp/hadoop-yarn/dfs/ on each DataNode.
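The clusterIDs in that message come from the VERSION files under the dfs storage directories, so you can confirm the mismatch before deleting anything. A minimal sketch, assuming the default /tmp/hadoop-yarn/dfs layout used in this post; get_cluster_id is a hypothetical helper:

```shell
# get_cluster_id: read the clusterID field out of a Hadoop storage VERSION
# file (a key=value properties file). Hypothetical helper, not part of Hadoop.
get_cluster_id() {
  grep '^clusterID=' "$1" | cut -d= -f2
}

# Usage on the paths from this post (adjust to your dfs.*.dir settings):
#   nn=$(get_cluster_id /tmp/hadoop-yarn/dfs/name/current/VERSION)
#   dn=$(get_cluster_id /tmp/hadoop-yarn/dfs/data/current/VERSION)
#   [ "$nn" = "$dn" ] || rm -rf /tmp/hadoop-yarn/dfs/data   # destructive!
```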
- Starting YARN failed:
[yarn@hd19-vm1 sbin]$ ./start-yarn.sh starting yarn daemons starting resourcemanager, logging to /home/yarn/hadoop-2.0.0-alpha/logs/yarn-yarn-resourcemanager-hd19-vm1.yunti.yh.aliyun.com.out Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/hadoop/yarn/server/resourcemanager/ResourceManager Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.yarn.server.resourcemanager.ResourceManager at java.net.URLClassLoader$1.run(URLClassLoader.java:217)
This happened because jars were missing from the package, mainly a result of my hand edits to hadoop-mapreduce-dist.xml in the hadoop-assemblies project.
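After repackaging, a quick sanity check is to confirm the ResourceManager jar actually made it into the unpacked distribution (in the 2.x layout it normally sits under share/hadoop/yarn, but verify on your build; has_jar is a hypothetical helper):

```shell
# has_jar: succeed iff a jar matching the pattern exists under the directory.
# Hypothetical helper for this post.
has_jar() {
  dir=$1
  pattern=$2
  find "$dir" -name "$pattern" | grep -q .
}

# e.g. has_jar "$HADOOP_HOME/share" 'hadoop-yarn-server-resourcemanager-*.jar' \
#        || echo "resourcemanager jar missing, repackage before start-yarn.sh"
```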
- MR jobs always ran locally. This is a configuration issue: mapreduce.framework.name defaults to local and needs to be switched to yarn in mapred-site.xml:
<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
  <description>The runtime framework for executing MapReduce jobs. Can be one of local, classic or yarn.</description>
</property>
- java.lang.IllegalStateException: Invalid shuffle port number -1 returned. I chased this one for a long time; YARN is quite painful to debug.
Container launch failed for container_1350803453228_0002_01_000004 : java.lang.IllegalStateException: Invalid shuffle port number -1 returned for attempt_1350803453228_0002_r_000000_0 at org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl$Container.launch(ContainerLauncherImpl.java:162) at org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl$EventProcessor.run(ContainerLauncherImpl.java:373) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603) at java.lang.Thread.run(Thread.java:636)
The fix is actually simple (the framework does not even provide a default here): yarn.nodemanager.aux-services was not set to mapreduce.shuffle. In yarn-site.xml:
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce.shuffle</value>
</property>
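As I recall the 2.0.x defaults, the aux-service name also wants a matching handler-class entry alongside it in yarn-site.xml; I believe it is the following, but check yarn-default.xml in your own build before relying on it:

```xml
<property>
  <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
  <value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
```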