There are two approaches. Approach one writes the code yourself with Eclipse; its advantage is that it helps you understand MapReduce, its drawback is complexity. Approach two directly calls the jar that ships with Hadoop; it is convenient, but you will not gain a deep understanding of the MapReduce process. It is worth trying both.
Approach One
You first need to configure Eclipse. For configuring Hadoop in Eclipse, see:
Big Data Primer (7): Configuring Eclipse for Hadoop on Win10
Upload files to HDFS
You can simply right-click and choose upload in Eclipse's DFS view, or see: Big Data Primer (6): Basic HDFS operations on Win10 (link)
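If you prefer the command line, the same upload can be done with `hdfs dfs`. A minimal sketch, assuming Hadoop is on your PATH and a running HDFS; the `/input` directory and the local file path are examples, substitute your own:

```shell
:: Create an input directory in HDFS (the /input path is an example)
hdfs dfs -mkdir -p /input
:: Upload a local text file into it (local path is an example)
hdfs dfs -put C:\data\word.txt /input
:: Verify the upload
hdfs dfs -ls /input
```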
Java project
Create a new Java project.
WordCount.java [adapted from "Detailed steps for configuring a Hadoop development environment with Eclipse on Windows 10 + WordCount example"]:
package word_count_pag;

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;

public class WordCount {

    // Mapper: splits each input line into tokens and emits a (word, 1) pair per token.
    public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        @Override
        public void map(Object key, Text value, Context context) throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, one);
            }
        }
    }

    // Reducer (also used as combiner): sums the counts for each word.
    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        private IntWritable result = new IntWritable();

        @Override
        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
        if (otherArgs.length != 2) {
            System.err.println("Usage: wordcount <in> <out>");
            System.exit(2);
        }
        // Job.getInstance() replaces the deprecated Job(conf, name) constructor.
        Job job = Job.getInstance(conf, "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
        FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
log4j.properties (under src) [also adapted from "Detailed steps for configuring a Hadoop development environment with Eclipse on Windows 10 + WordCount example"]:

# Configure logging for testing: optionally with a log file
#log4j.rootLogger=debug,appender
log4j.rootLogger=info,appender
#log4j.rootLogger=error,appender
# Log to the console
log4j.appender.appender=org.apache.log4j.ConsoleAppender
# Use TTCCLayout
log4j.appender.appender.layout=org.apache.log4j.TTCCLayout
Select WordCount.java, then:
1. Run -> Run As -> Java Application
2. Run -> Run Configurations
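Since main() above expects exactly two arguments (<in> and <out>), set them under Run Configurations -> Arguments -> Program arguments before launching. A sketch, assuming HDFS at the default local address; host, port, and paths must match your own setup:

```
hdfs://localhost:9000/input hdfs://localhost:9000/output
```

Note that the output directory must not already exist, or the job will fail.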
On success:
Approach Two
Start cmd as administrator (otherwise running wordcount later will fail) and enter: start-all.cmd
This starts Hadoop; run jps to check that the daemons are up.
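Approach two then comes down to a single command against the examples jar that ships with Hadoop. A sketch, assuming Hadoop 2.8.0 under %HADOOP_HOME% and an existing /input directory in HDFS; the version number in the jar name and the paths vary with your installation:

```shell
:: Run the built-in WordCount from the examples jar bundled with Hadoop.
:: The version in the jar file name must match your installation.
hadoop jar %HADOOP_HOME%\share\hadoop\mapreduce\hadoop-mapreduce-examples-2.8.0.jar wordcount /input /output
:: Inspect the result (reducer output is written to part-r-00000)
hdfs dfs -cat /output/part-r-00000
```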
Links:
[Link: check the running status]
[Link: view the results]
References
Detailed steps for configuring a Hadoop development environment with Eclipse on Windows 10 + WordCount example
Series:
Big Data Primer (1): Environment setup, VMware 15 + CentOS 8.1 configuration
https://blog.csdn.net/qq_34391511/article/details/104874044
Big Data Primer (2): CentOS 8 and JDK configuration
https://blog.csdn.net/qq_34391511/article/details/104893587
Big Data Primer (3): CentOS network configuration
https://blog.csdn.net/qq_34391511/article/details/104895498
Big Data Primer (4): Building a Hadoop cluster
https://blog.csdn.net/qq_34391511/article/details/104885278
Big Data Primer (5): Setting up standalone Hadoop 2.8 on Windows (pitfalls recorded)
https://blog.csdn.net/qq_34391511/article/details/104948319
Big Data Primer (6): Basic HDFS operations on Win10
https://blog.csdn.net/qq_34391511/article/details/105070955
Big Data Primer (7): Configuring Eclipse for Hadoop on Win10
https://blog.csdn.net/qq_34391511/article/details/105066667
Big Data Primer (8): WordCount on Win10
https://blog.csdn.net/qq_34391511/article/details/105073076
Big Data Primer (9): HDFS operations from Java code on Win10 Hadoop
https://blog.csdn.net/qq_34391511/article/details/105145380