Beginners tend to take a lot of detours; even steps that look perfectly clear can still throw plenty of exceptions.
Over the past two days I installed hadoop-1.2.1 on my virtual machine and typed in the introductory example from Hadoop: The Definitive Guide, hoping to get it running on the cluster I had set up. Even this tiny example threw quite a few exceptions, and I still don't understand how the book's example runs successfully as printed.
Here is the code I typed in:
package LFS.Hadoop.First.simpleTest;

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;

public class SimpleTestMapper extends MapReduceBase implements
        Mapper<LongWritable, Text, LongWritable, IntWritable> {

    public void map(LongWritable inputKey, Text inputValue,
            OutputCollector<LongWritable, IntWritable> outputCollector, Reporter reporter)
            throws IOException {
        // This is where the real work happens; just code it up the obvious way.
        String line = inputValue.toString(); // one line of input
        // Split the record apart
        String[] words = line.split(":");
        String[] sourceKey = words[1].split("p");
        long outputKey = Long.parseLong(sourceKey[0].substring(1, sourceKey[0].length() - 1));
        int outputValue = Integer.parseInt(words[2].substring(2));
        outputCollector.collect(new LongWritable(outputKey), new IntWritable(outputValue));
        // And that's the whole map function. That simple? Yes, that simple.
    }
}
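The post never shows what the input records look like, so as a reading aid here is a self-contained sketch of the same split/substring parsing, applied to a made-up line that happens to fit it. The line format, the class name ParseSketch, and the helper names are my inventions, not part of the original job:

```java
public class ParseSketch {
    // Same key extraction as the map function: split on ":", take the middle
    // field, split on "p", then strip the first and last character.
    static long parseKey(String line) {
        String[] words = line.split(":");
        String[] sourceKey = words[1].split("p");
        return Long.parseLong(sourceKey[0].substring(1, sourceKey[0].length() - 1));
    }

    // Same value extraction: third field with its first two characters dropped.
    static int parseValue(String line) {
        String[] words = line.split(":");
        return Integer.parseInt(words[2].substring(2));
    }

    public static void main(String[] args) {
        String line = "movie:(1990)pX:ab25"; // hypothetical record shaped to fit the parsing
        System.out.println(parseKey(line) + " -> " + parseValue(line)); // prints "1990 -> 25"
    }
}
```

On a line shaped like this, the mapper would emit key 1990 with value 25; any real record would have to follow the same colon/"p" layout or the parsing throws.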
package LFS.Hadoop.First.simpleTest;

import java.io.IOException;
import java.util.Iterator;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;

public class SimpleTestReducer extends MapReduceBase implements
        Reducer<LongWritable, IntWritable, LongWritable, IntWritable> {

    // The reduce function takes the data the map phase extracted from the
    // files and turns it into the result we actually want.
    public void reduce(LongWritable inputKey, Iterator<IntWritable> inputValues,
            OutputCollector<LongWritable, IntWritable> outputCollector, Reporter reporter)
            throws IOException {
        // What I really want is the global maximum, not the maximum per key,
        // but for now this will do: find the largest value for one key.
        int maxValue = Integer.MIN_VALUE;
        while (inputValues.hasNext()) {
            IntWritable curValue = inputValues.next();
            maxValue = Math.max(maxValue, curValue.get());
        }
        // Write the result to the output file
        outputCollector.collect(inputKey, new IntWritable(maxValue));
    }
}
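The reduce body can be exercised locally without Hadoop by swapping IntWritable for plain Integer. This is a minimal sketch of the same max-over-an-iterator loop (the class and method names are mine, for illustration only):

```java
import java.util.Arrays;
import java.util.Iterator;

public class MaxSketch {
    // Mirrors the reduce body: scan all values for one key, keep the maximum.
    static int maxOf(Iterator<Integer> values) {
        int maxValue = Integer.MIN_VALUE;
        while (values.hasNext()) {
            maxValue = Math.max(maxValue, values.next());
        }
        return maxValue;
    }

    public static void main(String[] args) {
        // One key's values, as the shuffle would hand them to reduce.
        System.out.println(maxOf(Arrays.asList(12, 31, 7).iterator())); // prints 31
    }
}
```

Starting from Integer.MIN_VALUE means an empty iterator would yield MIN_VALUE, but the framework only calls reduce for keys that have at least one value, so that case never arises in the job.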
package LFS.Hadoop.First.simpleTest;

import java.io.IOException;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;

public class SimpleTestStart {
    public static void main(String[] args) {
        JobConf jobConf = new JobConf(SimpleTestStart.class);
        // Uncommenting the next two lines (the jar path passed as args[2])
        // is what finally let the job find its classes:
        //String jarName = args[2];
        //jobConf.setJar(jarName);
        jobConf.setJobName("simple test");
        Path inputPath = new Path(args[0]);
        FileInputFormat.addInputPath(jobConf, inputPath);
        Path outputPath = new Path(args[1]);
        FileOutputFormat.setOutputPath(jobConf, outputPath);
        jobConf.setMapperClass(SimpleTestMapper.class);
        jobConf.setReducerClass(SimpleTestReducer.class);
        jobConf.setOutputKeyClass(LongWritable.class);
        jobConf.setOutputValueClass(IntWritable.class);
        try {
            JobClient.runJob(jobConf);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
Running the job produced this exception:
Caused by: java.lang.RuntimeException: java.lang.RuntimeException: java.lang.ClassNotFoundException: LFS.Hadoop.First.simpleTest.SimpleTestMapper
	at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:889)
	at org.apache.hadoop.mapred.JobConf.getMapperClass(JobConf.java:968)
	at org.apache.hadoop.mapred.MapRunner.configure(MapRunner.java:34)
	... 14 more
Caused by: java.lang.RuntimeException: java.lang.ClassNotFoundException: LFS.Hadoop.First.simpleTest.SimpleTestMapper
	at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:857)
	at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:881)
	... 16 more
Caused by: java.lang.ClassNotFoundException: LFS.Hadoop.First.simpleTest.SimpleTestMapper
The class is sitting right there in the jar, so why the error? I went looking online and found a pile of answers, most of them about how Hadoop locates the job's classes at runtime. I first tried export HADOOP_CLASSPATH=<path to the jar>, which didn't work; then setJarByClass, which also failed. What finally worked was setJar: those are the commented-out lines in the driver above, and the job only runs once they are uncommented.
2. The other one is input paths. If the path you pass in is not a fully qualified one, you will often get a FileNotFoundException, and when you look at the path in the error you find that Hadoop assembled something completely different from what you intended. For example, you pass /test/pp.cvs, but Hadoop turns it into hdfs://127.0.0.1:9000/user/username/test/pp.cvs — by default it prefixes the path you gave with hdfs://127.0.0.1:9000/user/username. To be explicit about the path, spell out the fully qualified form hdfs://127.0.0.1:9000/...
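The qualification behavior can be illustrated locally with java.net.URI resolution — a rough stand-in for what Hadoop's Path does, assuming the hdfs://127.0.0.1:9000/user/username home directory from above (the class name PathSketch is mine):

```java
import java.net.URI;

public class PathSketch {
    public static void main(String[] args) throws Exception {
        // Hadoop resolves non-qualified paths against the user's HDFS home
        // directory; URI resolution against that base shows the same effect.
        URI home = new URI("hdfs://127.0.0.1:9000/user/username/");

        // A relative path gets the home directory prepended:
        System.out.println(home.resolve("test/pp.cvs"));
        // -> hdfs://127.0.0.1:9000/user/username/test/pp.cvs

        // A fully qualified path is used exactly as given:
        System.out.println(home.resolve("hdfs://127.0.0.1:9000/test/pp.cvs"));
        // -> hdfs://127.0.0.1:9000/test/pp.cvs
    }
}
```

So the safe habit is to always hand the job URIs that start with the hdfs:// scheme, host, and port, and never rely on the default working directory.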
Those are the problems I ran into over these two days; I'm writing them down here in case they come in handy later.