9. Hadoop Serialization (Custom Transfer Objects)


Serialization converts an in-memory object into a byte sequence so it can be transmitted across the network or persisted to disk, protecting the data against loss on power failure.
The most commonly used basic types defined in Hadoop already implement the org.apache.hadoop.io.Writable interface, e.g. BooleanWritable, ByteWritable, IntWritable, FloatWritable, LongWritable, DoubleWritable, Text, MapWritable, and ArrayWritable. All of these can be serialized for transfer between Mapper and Reducer or persisted to disk. By implementing the Writable interface in our own class, a custom object gains the same capability.
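
As a quick illustration (a standalone sketch of ours, not part of the job built below; the class name WritableDemo is made up), any Writable can be rendered into bytes through a plain java.io.DataOutput:

import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;

import org.apache.hadoop.io.IntWritable;

public class WritableDemo {
	public static void main(String[] args) throws IOException {
		ByteArrayOutputStream buffer = new ByteArrayOutputStream();
		IntWritable value = new IntWritable(42);
		value.write(new DataOutputStream(buffer)); /* serialize to a byte sequence */
		System.out.println(buffer.size());         /* prints 4: an int is four bytes */
	}
}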

Example: a text file user.txt holds one record per line, each listing a worker id, gender, hourly labor price, and hours worked. Some workers hold more than one job and therefore have several records. The goal is to compute, for each worker id, the gender and the total pay. The contents of user.txt are as follows:

12001	male	10	5
12002	female	8	7
12003	male	15	5
12004	male	12	10
12005	female	7	12
12003	male	16	5
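
Note that worker 12003 appears twice, so the expected total for that id is 15 × 5 + 16 × 5 = 75 + 80 = 155, which matches the output in step 5.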

First create a Maven project; for the pom configuration, refer to the companion article.

1. Create the bean for the input data

Create a User bean that implements the Writable interface. Two methods must be overridden: write (the serialization method) and readFields (the deserialization method). The fields must be read back in exactly the same order in which they were written. Example:

package com.lzj.hadoop.serialize;

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

import org.apache.hadoop.io.Writable;

/* implement the Writable interface */
public class User implements Writable {

	private String sex;
	private int amount;
	
	/* no-arg constructor, invoked during deserialization */
	public User() {
		super();
	}
	
	/* serialization method */
	@Override
	public void write(DataOutput out) throws IOException {
		out.writeUTF(sex);
		out.writeInt(amount);
	}

	/* deserialization; fields must be read in the same order they were written */
	@Override
	public void readFields(DataInput in) throws IOException {
		this.sex = in.readUTF();
		this.amount = in.readInt();
	}

	@Override
	public String toString() {
		return sex + "\t\t" + amount;
	}

	public String getSex() {
		return sex;
	}

	public void setSex(String sex) {
		this.sex = sex;
	}

	public int getAmount() {
		return amount;
	}

	public void setAmount(int amount) {
		this.amount = amount;
	}
	
}
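
To sanity-check the bean, a minimal round-trip sketch (UserRoundTripTest is a hypothetical helper of ours, not part of the job) writes a User to a byte buffer and reads it back:

package com.lzj.hadoop.serialize;

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class UserRoundTripTest {
	public static void main(String[] args) throws IOException {
		User original = new User();
		original.setSex("male");
		original.setAmount(50);

		/* serialize: User -> bytes */
		ByteArrayOutputStream bytes = new ByteArrayOutputStream();
		original.write(new DataOutputStream(bytes));

		/* deserialize: bytes -> User, reading fields in the same order they were written */
		User copy = new User();
		copy.readFields(new DataInputStream(new ByteArrayInputStream(bytes.toByteArray())));

		System.out.println(copy); /* prints the same sex and amount */
	}
}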

2. Create the Mapper to split and process the data

package com.lzj.hadoop.serialize;

import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class UserMapper extends Mapper<LongWritable, Text, Text, User>{

	Text k = new Text();
	User v = new User();
	
	@Override
	protected void map(LongWritable key, Text value, Context context)
			throws IOException, InterruptedException {
		/* 1. read one line */
		String line = value.toString();
		
		/* 2. split it into fields */
		String[] fields = line.split("\t");
		
		/* 3. use the worker id as the key */
		String userId = fields[0];
		
		/* 4. take the hourly price and the hours, compute the pay */
		int price = Integer.valueOf(fields[2]);
		int hours = Integer.valueOf(fields[3]);
		int amount = price * hours;
		
		/* 5. populate the output key-value pair */
		k.set(userId); 			// set the key
		v.setSex(fields[1]);
		v.setAmount(amount);
		context.write(k, v);
	}
}
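
For example, the first line 12001	male	10	5 splits into four fields, and the mapper emits key 12001 with a User whose sex is male and whose amount is 10 × 5 = 50. Reusing the k and v objects across map() calls is safe here because context.write serializes the pair immediately.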

3. Create the Reducer to aggregate the data

package com.lzj.hadoop.serialize;

import java.io.IOException;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class UserReducer extends Reducer<Text, User, Text, User>{
	@Override
	protected void reduce(Text key, Iterable<User> values, Context context)
			throws IOException, InterruptedException {
		int amount = 0;
		
		/* iterate over the values to accumulate the total pay */
		String sex = null;
		for(User u : values) {
			amount = amount + u.getAmount();
			sex = u.getSex();
		}
		
		/* build the Reducer output object */
		User user = new User();
		user.setSex(sex);
		user.setAmount(amount);
		context.write(key, user);
	}
}
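
One caveat: Hadoop reuses the same User instance for every element of values, so the loop must copy out the fields it needs, as done here with amount and sex, rather than keeping references to u across iterations.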

4. Create the job driver class

package com.lzj.hadoop.serialize;

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class UserDriver {
	public static void main(String[] args) throws IOException, ClassNotFoundException, InterruptedException {
		/* get the job configuration */
		Configuration config = new Configuration();
		Job job = Job.getInstance(config);
		
		/* specify the jar's entry class */
		job.setJarByClass(UserDriver.class);
		
		/* associate the Mapper/Reducer classes */
		job.setMapperClass(UserMapper.class);
		job.setReducerClass(UserReducer.class);
		
		/* specify the KV types of the Mapper output */
		job.setMapOutputKeyClass(Text.class);
		job.setMapOutputValueClass(User.class);
		
		/* specify the KV types of the final output */
		job.setOutputKeyClass(Text.class);
		job.setOutputValueClass(User.class);
		
		/* set the job input and output paths */
		FileInputFormat.setInputPaths(job, new Path("D:/tmp/user.txt"));
		FileOutputFormat.setOutputPath(job, new Path("D:/tmp/userOut"));
		
		/* submit the job */
		boolean flag = job.waitForCompletion(true);
		System.out.println(flag);
	}
}
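
One practical note: the job fails with an exception if the output directory already exists. An optional guard (our addition, not in the original driver) deletes a stale output directory before FileOutputFormat.setOutputPath is called; it needs one extra import:

import org.apache.hadoop.fs.FileSystem;

		/* placed in main(), before FileOutputFormat.setOutputPath(...) */
		Path output = new Path("D:/tmp/userOut");
		FileSystem fs = FileSystem.get(config);
		if (fs.exists(output)) {
			fs.delete(output, true); /* recursively delete the previous run's output */
		}
		FileOutputFormat.setOutputPath(job, output);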

5. Test

Run the job driver class UserDriver; the output is as follows:

12001	male		50
12002	female		56
12003	male		155
12004	male		120
12005	female		84