Hadoop in Action: Running DataJoin

Hello everyone. Today I'd like to introduce DataJoin. Hadoop ships a package called datajoin that provides a framework for data joins; its jar lives under contrib/datajoin/hadoop-*-datajoin.

To distinguish it from other data-join techniques, we call this a reduce-side join (because most of the work happens in the reducer).

The reduce-side join introduces a few terms and concepts:

1.Data Source: roughly the equivalent of a table in a relational database. In our example the two sources look like this (CSV format):

      Customers                              Orders
      1,Stephanie Leung,555-555-5555         3,A,12.95,02-Jun-2008
      2,Edward Kim,123-456-7890              1,B,88.25,20-May-2008
      3,Jose Madriz,281-330-8004             2,C,32.00,30-Nov-2007
      4,David Stork,408-555-0000             3,D,25.02,22-Jan-2009

2.Tag: because the record type (Customers or Orders) is kept separate from the record itself, tagging each record guarantees that this piece of metadata always travels with it. For this purpose we tag every record with the name of its own data source.

3.Group Key: the group key plays the role of the join key in a relational database; in our example the group key is the Customer ID (the 3 in the first column). Because the datajoin package lets the user define the group key arbitrarily, it is more general than a relational join key.
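For example, the Customers record 3,Jose Madriz,281-330-8004 is tagged Customers and the Orders record 3,A,12.95,02-Jun-2008 is tagged Orders; both carry the group key 3, so they arrive at the same reducer and can be joined there.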

Implementing the join with the datajoin package:
  Hadoop's datajoin package has three classes for us to extend: DataJoinMapperBase, DataJoinReducerBase, and TaggedMapOutput. As the names suggest, our MapClass will extend DataJoinMapperBase and our Reduce class will extend DataJoinReducerBase. The datajoin package has already implemented map() and reduce(), so our subclasses only need to implement a few new methods that fill in the details.

  Before using DataJoinMapperBase and DataJoinReducerBase, we need to understand TaggedMapOutput, the new abstract data class that is used throughout the program.

  As shown earlier in the Advanced MapReduce data-flow figure, the mapper emits a package made up of a key and a value (the tagged record). The datajoin package sets the key type to Text and the value type to TaggedMapOutput, a data type that wraps our record together with a Text tag. It implements getTag() and setTag(Text tag), and it declares a getData() method that our subclass implements to hand back the wrapped record. The package does not explicitly require a setData() method, but it is good practice to provide one for symmetry (or to set the data in the constructor). Because it serves as the mapper's output value, TaggedMapOutput must be a Writable, so our subclass also has to implement readFields() and write().
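  For orientation, the part of the TaggedMapOutput contract we rely on looks roughly like the outline below. This is a simplified sketch based on the description above, not the exact contrib source:

public abstract class TaggedMapOutput implements Writable {
    protected Text tag;                      // which data source the record came from
    public Text getTag() { return tag; }
    public void setTag(Text tag) { this.tag = tag; }
    public abstract Writable getData();      // the wrapped record itself
    // write()/readFields() are left to the concrete subclass (see TaggedWritable later)
}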
DataJoinMapperBase:
  Recall the join data-flow figure: the mapper's main job is to package each record so that it travels to the same reducer as every other record sharing its group key. DataJoinMapperBase does all of this packaging work; the class defines three abstract methods for our subclass to implement:

  protected abstract Text generateInputTag(String inputFile);

  protected abstract TaggedMapOutput generateTaggedMapOutput(Object value);

  protected abstract Text generateGroupKey(TaggedMapOutput aRecord);

  generateInputTag() runs before a map task starts; it produces the tag (a Text) attached to every record that map task will process. The result is stored in DataJoinMapperBase's inputTag field, and the file name is likewise saved in the inputFile field for later use.
  Once the map task is initialized, DataJoinMapperBase's map() method is invoked for every record. It calls the two abstract methods we have not yet implemented: generateTaggedMapOutput() and generateGroupKey(aRecord) (see the code below).
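  For example, when the order record 3,A,12.95,02-Jun-2008 passes through, generateTaggedMapOutput() wraps it in a TaggedWritable (defined in the code below) carrying the tag generated for its input file, and generateGroupKey() returns the Text 3, so it will be grouped with customer 3 at the reducer.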
DataJoinReducerBase:
DataJoinReducerBase simplifies our work by performing a full outer join for us. Our Reducer subclass only needs to implement the combine() method to filter out the combinations we don't want and keep the join we do want (inner join, left outer join, and so on). combine() is also where we format each combination into the output record; a sketch of such a variant follows this paragraph.
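As an illustration of that filtering, here is a sketch of a combine() variant that keeps every customer even when no order matches, i.e. a left outer join on Customers. The class name LeftOuterReduce is mine, not part of the original program, and the check assumes the tag text generated for the customer file contains the substring "Customer" (true for the file names used later in this article); it reuses the TaggedWritable helper defined in the full program in step 5.

public static class LeftOuterReduce extends DataJoinReducerBase {
    protected TaggedMapOutput combine(Object[] tags, Object[] values) {
        // Drop single-source combinations only when that source is not Customers.
        if (tags.length < 2 && !tags[0].toString().contains("Customer"))
            return null;
        String joinedStr = "";
        for (int i = 0; i < values.length; i++) {
            if (i > 0)
                joinedStr += ",";
            TaggedWritable tw = (TaggedWritable) values[i];
            String line = ((Text) tw.getData()).toString();
            // Strip the duplicated join key from each record.
            joinedStr += line.split(",", 2)[1];
        }
        TaggedWritable retv = new TaggedWritable(new Text(joinedStr));
        retv.setTag((Text) tags[0]);
        return retv;
    }
}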

Environment: VMware 8.0 and Ubuntu 11.04

Step 1: Create a project named HadoopTest. The directory structure is shown in the figure below:

Step 2: Create a script named start.sh under /home/tanglg1987. Each time the virtual machine starts, it deletes everything under /tmp and reformats the namenode. The script is as follows:

#!/bin/bash
# Wipe the previous HDFS data (kept under /tmp by default) and the old logs.
sudo rm -rf /tmp/*
rm -rf /home/tanglg1987/hadoop-0.20.2/logs
# Reformat HDFS and bring the pseudo-distributed cluster back up.
hadoop namenode -format
start-all.sh
# Leave safe mode before creating the input directory.
hadoop dfsadmin -safemode leave
hadoop fs -mkdir input

Step 3: Make start.sh executable and start the pseudo-distributed Hadoop cluster:
chmod 777 /home/tanglg1987/start.sh
./start.sh 

The execution process is as follows:

Step 4: Upload the local files to HDFS

Create a file named Orders.txt under /home/tanglg1987 with the following content:

3,A,12.95,02-Jun-2008
1,B,88.25,20-May-2008
2,C,32.00,30-Nov-2007
3,D,25.00,22-Jan-2009

Create a file named Customer.txt under /home/tanglg1987 with the following content:

1,tom,555-555-5555
2,white,123-456-7890
3,jerry,281-330-4563
4,tanglg,408-555-0000

Upload the local files to HDFS:

hadoop fs -put /home/tanglg1987/Orders.txt input
hadoop fs -put /home/tanglg1987/Customer.txt input
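You can confirm that both files arrived with hadoop fs -ls input.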

Step 5: Create a new DataJoin.java with the following code:

package com.baison.action;

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.Writable;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.TextInputFormat;
import org.apache.hadoop.mapred.TextOutputFormat;
import org.apache.hadoop.util.ReflectionUtils;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;
import org.apache.hadoop.contrib.utils.join.DataJoinMapperBase;
import org.apache.hadoop.contrib.utils.join.DataJoinReducerBase;
import org.apache.hadoop.contrib.utils.join.TaggedMapOutput;

public class DataJoin extends Configured implements Tool {

    public static class MapClass extends DataJoinMapperBase {

        // Tag every record of a map task with (a prefix of) its input file name,
        // so the reducer can tell which data source each record came from.
        protected Text generateInputTag(String inputFile) {
            String datasource = inputFile.split("-")[0];
            return new Text(datasource);
        }

        // The group key (join key) is the first CSV column: the Customer ID.
        protected Text generateGroupKey(TaggedMapOutput aRecord) {
            String line = ((Text) aRecord.getData()).toString();
            String[] tokens = line.split(",");
            String groupKey = tokens[0];
            return new Text(groupKey);
        }

        // Wrap the raw record in a TaggedWritable carrying this task's tag.
        protected TaggedMapOutput generateTaggedMapOutput(Object value) {
            TaggedWritable retv = new TaggedWritable((Text) value);
            retv.setTag(this.inputTag);
            return retv;
        }
    }

    public static class Reduce extends DataJoinReducerBase {

        // Called once per combination of records sharing a group key. Returning
        // null drops the combination; requiring two tags makes this an inner join.
        protected TaggedMapOutput combine(Object[] tags, Object[] values) {
            if (tags.length < 2)
                return null;
            String joinedStr = "";
            for (int i = 0; i < values.length; i++) {
                if (i > 0)
                    joinedStr += ",";
                TaggedWritable tw = (TaggedWritable) values[i];
                String line = ((Text) tw.getData()).toString();
                // Strip the duplicated join key from each record.
                String[] tokens = line.split(",", 2);
                joinedStr += tokens[1];
            }
            TaggedWritable retv = new TaggedWritable(new Text(joinedStr));
            retv.setTag((Text) tags[0]);
            return retv;
        }
    }

    // The concrete TaggedMapOutput: a Text tag plus the wrapped record.
    public static class TaggedWritable extends TaggedMapOutput {

        private Writable data;

        public TaggedWritable() {
            this.tag = new Text();
        }

        public TaggedWritable(Writable data) {
            this.tag = new Text("");
            this.data = data;
        }

        public Writable getData() {
            return data;
        }

        public void setData(Writable data) {
            this.data = data;
        }

        public void write(DataOutput out) throws IOException {
            this.tag.write(out);
            // Record the payload's concrete class so readFields() can rebuild it.
            out.writeUTF(this.data.getClass().getName());
            this.data.write(out);
        }

        public void readFields(DataInput in) throws IOException {
            this.tag.readFields(in);
            String dataClz = in.readUTF();
            if (this.data == null
                    || !this.data.getClass().getName().equals(dataClz)) {
                try {
                    this.data = (Writable) ReflectionUtils.newInstance(
                            Class.forName(dataClz), null);
                } catch (ClassNotFoundException e) {
                    e.printStackTrace();
                }
            }
            this.data.readFields(in);
        }
    }

    public int run(String[] args) throws Exception {
        for (String string : args) {
            System.out.println(string);
        }
        Configuration conf = getConf();
        JobConf job = new JobConf(conf, DataJoin.class);
        Path in = new Path(args[0]);
        Path out = new Path(args[1]);
        FileInputFormat.setInputPaths(job, in);
        FileOutputFormat.setOutputPath(job, out);
        job.setJobName("DataJoin");
        job.setMapperClass(MapClass.class);
        job.setReducerClass(Reduce.class);
        job.setInputFormat(TextInputFormat.class);
        job.setOutputFormat(TextOutputFormat.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(TaggedWritable.class);
        // Separate the group key from the joined value with a comma in the output.
        job.set("mapred.textoutputformat.separator", ",");
        JobClient.runJob(job);
        return 0;
    }

    public static void main(String[] args) throws Exception {
        String[] arg = { "hdfs://localhost:9100/user/tanglg1987/input",
                "hdfs://localhost:9100/user/tanglg1987/output" };
        int res = ToolRunner.run(new Configuration(), new DataJoin(), arg);
        System.exit(res);
    }
}
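Note that compiling and running this class requires the datajoin contrib jar (contrib/datajoin/hadoop-*-datajoin) from the Hadoop distribution on the classpath, alongside the core Hadoop jar.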

Step 6: Run On Hadoop. The execution output is as follows:

hdfs://localhost:9100/user/tanglg1987/input
hdfs://localhost:9100/user/tanglg1987/output
12/10/16 22:05:36 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
12/10/16 22:05:36 WARN mapred.JobClient: No job jar file set. User classes may not be found. See JobConf(Class) or JobConf#setJar(String).
12/10/16 22:05:36 INFO mapred.FileInputFormat: Total input paths to process : 2
12/10/16 22:05:36 INFO mapred.JobClient: Running job: job_local_0001
12/10/16 22:05:36 INFO mapred.FileInputFormat: Total input paths to process : 2
12/10/16 22:05:36 INFO mapred.MapTask: numReduceTasks: 1
12/10/16 22:05:36 INFO mapred.MapTask: io.sort.mb = 100
12/10/16 22:05:37 INFO mapred.MapTask: data buffer = 79691776/99614720
12/10/16 22:05:37 INFO mapred.MapTask: record buffer = 262144/327680
12/10/16 22:05:37 INFO mapred.MapTask: Starting flush of map output
12/10/16 22:05:37 INFO mapred.MapTask: Finished spill 0
12/10/16 22:05:37 INFO mapred.TaskRunner: Task:attempt_local_0001_m_000000_0 is done. And is in the process of commiting
12/10/16 22:05:37 INFO mapred.LocalJobRunner: collectedCount 4
totalCount 4
12/10/16 22:05:37 INFO mapred.TaskRunner: Task 'attempt_local_0001_m_000000_0' done.
12/10/16 22:05:37 INFO mapred.MapTask: numReduceTasks: 1
12/10/16 22:05:37 INFO mapred.MapTask: io.sort.mb = 100
12/10/16 22:05:37 INFO mapred.MapTask: data buffer = 79691776/99614720
12/10/16 22:05:37 INFO mapred.MapTask: record buffer = 262144/327680
12/10/16 22:05:37 INFO mapred.MapTask: Starting flush of map output
12/10/16 22:05:37 INFO mapred.MapTask: Finished spill 0
12/10/16 22:05:37 INFO mapred.TaskRunner: Task:attempt_local_0001_m_000001_0 is done. And is in the process of commiting
12/10/16 22:05:37 INFO mapred.LocalJobRunner: collectedCount 4
totalCount 4
12/10/16 22:05:37 INFO mapred.TaskRunner: Task 'attempt_local_0001_m_000001_0' done.
12/10/16 22:05:37 INFO mapred.LocalJobRunner:
12/10/16 22:05:37 INFO mapred.Merger: Merging 2 sorted segments
12/10/16 22:05:37 INFO mapred.Merger: Down to the last merge-pass, with 2 segments left of total size: 875 bytes
12/10/16 22:05:37 INFO mapred.LocalJobRunner:
12/10/16 22:05:37 INFO datajoin.job: key: 1 this.largestNumOfValues: 2
12/10/16 22:05:37 INFO datajoin.job: key: 3 this.largestNumOfValues: 3
12/10/16 22:05:37 INFO mapred.JobClient: map 100% reduce 0%
12/10/16 22:05:37 INFO mapred.TaskRunner: Task:attempt_local_0001_r_000000_0 is done. And is in the process of commiting
12/10/16 22:05:37 INFO mapred.LocalJobRunner:
12/10/16 22:05:37 INFO mapred.TaskRunner: Task attempt_local_0001_r_000000_0 is allowed to commit now
12/10/16 22:05:37 INFO mapred.FileOutputCommitter: Saved output of task 'attempt_local_0001_r_000000_0' to hdfs://localhost:9100/user/tanglg1987/output
12/10/16 22:05:37 INFO mapred.LocalJobRunner: actuallyCollectedCount 4
collectedCount 5
groupCount 4
> reduce
12/10/16 22:05:37 INFO mapred.TaskRunner: Task 'attempt_local_0001_r_000000_0' done.
12/10/16 22:05:38 INFO mapred.JobClient: map 100% reduce 100%
12/10/16 22:05:38 INFO mapred.JobClient: Job complete: job_local_0001
12/10/16 22:05:38 INFO mapred.JobClient: Counters: 15
12/10/16 22:05:38 INFO mapred.JobClient: FileSystemCounters
12/10/16 22:05:38 INFO mapred.JobClient: FILE_BYTES_READ=51466
12/10/16 22:05:38 INFO mapred.JobClient: HDFS_BYTES_READ=435
12/10/16 22:05:38 INFO mapred.JobClient: FILE_BYTES_WRITTEN=105007
12/10/16 22:05:38 INFO mapred.JobClient: HDFS_BYTES_WRITTEN=162
12/10/16 22:05:38 INFO mapred.JobClient: Map-Reduce Framework
12/10/16 22:05:38 INFO mapred.JobClient: Reduce input groups=4
12/10/16 22:05:38 INFO mapred.JobClient: Combine output records=0
12/10/16 22:05:38 INFO mapred.JobClient: Map input records=8
12/10/16 22:05:38 INFO mapred.JobClient: Reduce shuffle bytes=0
12/10/16 22:05:38 INFO mapred.JobClient: Reduce output records=4
12/10/16 22:05:38 INFO mapred.JobClient: Spilled Records=16
12/10/16 22:05:38 INFO mapred.JobClient: Map output bytes=855
12/10/16 22:05:38 INFO mapred.JobClient: Map input bytes=175
12/10/16 22:05:38 INFO mapred.JobClient: Combine input records=0
12/10/16 22:05:38 INFO mapred.JobClient: Map output records=8
12/10/16 22:05:38 INFO mapred.JobClient: Reduce input records=8
Step 7: View the result set. The output is as follows:
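With the inner-join combine() above, each output line should pair a customer with one of that customer's orders, keyed by the shared Customer ID, e.g. a line of the shape 3,jerry,281-330-4563,A,12.95,02-Jun-2008. Customer 4, who placed no order, is filtered out, which matches the Reduce output records=4 counter in the log above. The exact ordering of the two halves within a line depends on how the tags sort for each key, so treat this as the expected shape rather than a byte-for-byte listing.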
