[Hadoop 2.7.1] I/O: Serialization (serializer)

First, a look at the class diagram of the org.apache.hadoop.io.serializer package (Hadoop 2.7.1):


From the class diagram:

There are three interfaces:

1. Deserializer: defines the deserialization interface;
2. Serializer: defines the serialization interface;
3. Serialization: defines the interface for a pair of interdependent, serialization-related objects.

Each of these three interfaces is implemented twice: once by WritableSerialization, which supports the Writable mechanism, and once by JavaSerialization, which supports standard Java serialization, for a total of six implementation classes.

SerializationFactory: maintains an ArrayList of Serialization instances. Its constructor takes a Configuration and registers every serialization listed (comma-delimited) in the io.serializations property.

Deserializer: converts a byte stream into an object. The interface has three methods: open the stream, deserialize, and close the stream.

Source:

package org.apache.hadoop.io.serializer;

import java.io.IOException;
import java.io.InputStream;

import org.apache.hadoop.classification.InterfaceAudience;
import org.apache.hadoop.classification.InterfaceStability;

/**
 * <p>
 * Provides a facility for deserializing objects of type <T> from an
 * {@link InputStream}.
 * </p>
 * 
 * <p>
 * Deserializers are stateful, but must not buffer the input since
 * other producers may read from the input between calls to
 * {@link #deserialize(Object)}.
 * </p>
 * @param <T>
 */
@InterfaceAudience.LimitedPrivate({"HDFS", "MapReduce"})
@InterfaceStability.Evolving
public interface Deserializer<T> {
  /**
   * <p>Prepare the deserializer for reading.</p>
   */
  void open(InputStream in) throws IOException;
  
  /**
   * <p>
   * Deserialize the next object from the underlying input stream.
   * If the object <code>t</code> is non-null then this deserializer
   * <i>may</i> set its internal state to the next object read from the input
   * stream. Otherwise, if the object <code>t</code> is null a new
   * deserialized object will be created.
   * </p>
   * @return the deserialized object
   */
  T deserialize(T t) throws IOException;
  
  /**
   * <p>Close the underlying input stream and clear up any resources.</p>
   */
  void close() throws IOException;
}


Serializer: converts an object into a byte stream. The interface has three methods: open the stream, serialize, and close the stream.

Source:

package org.apache.hadoop.io.serializer;

import java.io.IOException;
import java.io.OutputStream;

import org.apache.hadoop.classification.InterfaceAudience;
import org.apache.hadoop.classification.InterfaceStability;

/**
 * <p>
 * Provides a facility for serializing objects of type <T> to an
 * {@link OutputStream}.
 * </p>
 * 
 * <p>
 * Serializers are stateful, but must not buffer the output since
 * other producers may write to the output between calls to
 * {@link #serialize(Object)}.
 * </p>
 * @param <T>
 */
@InterfaceAudience.LimitedPrivate({"HDFS", "MapReduce"})
@InterfaceStability.Evolving
public interface Serializer<T> {
  /**
   * <p>Prepare the serializer for writing.</p>
   */
  void open(OutputStream out) throws IOException;
  
  /**
   * <p>Serialize <code>t</code> to the underlying output stream.</p>
   */
  void serialize(T t) throws IOException;
  
  /**
   * <p>Close the underlying output stream and clear up any resources.</p>
   */  
  void close() throws IOException;
}
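To see how the two interfaces above fit together, here is a self-contained round-trip sketch. The `IntSerializer`/`IntDeserializer` classes are hypothetical stand-ins written against local copies of the interfaces so the snippet compiles without a Hadoop dependency; the real WritableSerialization follows the same open/serialize/close protocol but delegates to Writable.write and Writable.readFields.

```java
import java.io.*;

public class RoundTripDemo {
  // Local copies of the Hadoop interfaces, so this sketch is self-contained.
  interface Serializer<T> {
    void open(OutputStream out) throws IOException;
    void serialize(T t) throws IOException;
    void close() throws IOException;
  }
  interface Deserializer<T> {
    void open(InputStream in) throws IOException;
    T deserialize(T t) throws IOException;
    void close() throws IOException;
  }

  // Hypothetical pair that moves an Integer as 4 raw bytes.
  static class IntSerializer implements Serializer<Integer> {
    private DataOutputStream out;
    public void open(OutputStream o) { out = new DataOutputStream(o); }
    public void serialize(Integer t) throws IOException { out.writeInt(t); }
    public void close() throws IOException { out.close(); }
  }
  static class IntDeserializer implements Deserializer<Integer> {
    private DataInputStream in;
    public void open(InputStream i) { in = new DataInputStream(i); }
    public Integer deserialize(Integer t) throws IOException { return in.readInt(); }
    public void close() throws IOException { in.close(); }
  }

  public static void main(String[] args) throws IOException {
    ByteArrayOutputStream buf = new ByteArrayOutputStream();
    IntSerializer ser = new IntSerializer();
    ser.open(buf);        // 1. open the stream
    ser.serialize(42);    // 2. serialize
    ser.close();          // 3. close

    IntDeserializer de = new IntDeserializer();
    de.open(new ByteArrayInputStream(buf.toByteArray()));
    System.out.println(de.deserialize(null)); // prints 42
    de.close();
  }
}
```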

Serialization: follows the abstract-factory design pattern and encapsulates a Serializer/Deserializer pair. It can test whether a given class is supported and hand out the matching serializer and deserializer for that class.

Source:

package org.apache.hadoop.io.serializer;

import org.apache.hadoop.classification.InterfaceAudience;
import org.apache.hadoop.classification.InterfaceStability;

/**
 * <p>
 * Encapsulates a {@link Serializer}/{@link Deserializer} pair.
 * </p>
 * @param <T>
 */
@InterfaceAudience.LimitedPrivate({"HDFS", "MapReduce"})
@InterfaceStability.Evolving
public interface Serialization<T> {
  
  /**
   * Allows clients to test whether this {@link Serialization}
   * supports the given class.
   */
  boolean accept(Class<?> c);
  
  /**
   * @return a {@link Serializer} for the given class.
   */
  Serializer<T> getSerializer(Class<T> c);

  /**
   * @return a {@link Deserializer} for the given class.
   */
  Deserializer<T> getDeserializer(Class<T> c);
}

SerializationFactory: the serialization factory. At construction time it reads the list of serializations from the io.serializations configuration property; the default list starts with org.apache.hadoop.io.serializer.WritableSerialization (followed by the two Avro serializations, as the source below shows). Serializers and deserializers are then obtained by calling getSerializer and getDeserializer.

Source:

package org.apache.hadoop.io.serializer;

import java.util.ArrayList;
import java.util.List;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.hadoop.classification.InterfaceAudience;
import org.apache.hadoop.classification.InterfaceStability;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.CommonConfigurationKeys;
import org.apache.hadoop.io.serializer.avro.AvroReflectSerialization;
import org.apache.hadoop.io.serializer.avro.AvroSpecificSerialization;
import org.apache.hadoop.util.ReflectionUtils;

/**
 * <p>
 * A factory for {@link Serialization}s.
 * </p>
 */
@InterfaceAudience.LimitedPrivate({"HDFS", "MapReduce"})
@InterfaceStability.Evolving
public class SerializationFactory extends Configured {
  
  private static final Log LOG =
    LogFactory.getLog(SerializationFactory.class.getName());

  private List<Serialization<?>> serializations = new ArrayList<Serialization<?>>();
  
  /**
   * <p>
   * Serializations are found by reading the <code>io.serializations</code>
   * property from <code>conf</code>, which is a comma-delimited list of
   * classnames.
   * </p>
   */
  public SerializationFactory(Configuration conf) {
    super(conf);
    for (String serializerName : conf.getTrimmedStrings(
      CommonConfigurationKeys.IO_SERIALIZATIONS_KEY,
      new String[]{WritableSerialization.class.getName(),
        AvroSpecificSerialization.class.getName(),
        AvroReflectSerialization.class.getName()})) {
      add(conf, serializerName);
    }
  }
  
  @SuppressWarnings("unchecked")
  private void add(Configuration conf, String serializationName) {
    try {
      Class<? extends Serialization> serializionClass =
        (Class<? extends Serialization>) conf.getClassByName(serializationName);
      serializations.add((Serialization)
      ReflectionUtils.newInstance(serializionClass, getConf()));
    } catch (ClassNotFoundException e) {
      LOG.warn("Serialization class not found: ", e);
    }
  }

  public <T> Serializer<T> getSerializer(Class<T> c) {
    Serialization<T> serializer = getSerialization(c);
    if (serializer != null) {
      return serializer.getSerializer(c);
    }
    return null;
  }

  public <T> Deserializer<T> getDeserializer(Class<T> c) {
    Serialization<T> serializer = getSerialization(c);
    if (serializer != null) {
      return serializer.getDeserializer(c);
    }
    return null;
  }

  @SuppressWarnings("unchecked")
  public <T> Serialization<T> getSerialization(Class<T> c) {
    for (Serialization serialization : serializations) {
      if (serialization.accept(c)) {
        return (Serialization<T>) serialization;
      }
    }
    return null;
  }
  
}
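The heart of the factory is the linear accept() scan in getSerialization: the first registered Serialization that accepts the class wins, so the order of io.serializations matters. Below is a self-contained sketch of that lookup, with a hypothetical IntSerialization standing in for WritableSerialization (in real Hadoop the instances are created reflectively via conf.getClassByName and ReflectionUtils.newInstance):

```java
import java.util.ArrayList;
import java.util.List;

public class FactoryDemo {
  // Minimal stand-in for org.apache.hadoop.io.serializer.Serialization.
  interface Serialization {
    boolean accept(Class<?> c);
  }

  // Hypothetical serialization that only accepts Integer and its subclasses.
  static class IntSerialization implements Serialization {
    public boolean accept(Class<?> c) { return Integer.class.isAssignableFrom(c); }
  }

  private final List<Serialization> serializations = new ArrayList<>();

  public FactoryDemo() {
    // Hadoop reads class names from io.serializations and instantiates
    // them reflectively; here we register one instance directly.
    serializations.add(new IntSerialization());
  }

  // Same first-match scan as SerializationFactory.getSerialization.
  public Serialization getSerialization(Class<?> c) {
    for (Serialization s : serializations) {
      if (s.accept(c)) {
        return s;
      }
    }
    return null; // no registered serialization accepts this class
  }

  public static void main(String[] args) {
    FactoryDemo f = new FactoryDemo();
    System.out.println(f.getSerialization(Integer.class) != null); // true
    System.out.println(f.getSerialization(String.class));          // null
  }
}
```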

Below is a brief walkthrough of how SerializationFactory produces Serializations.

First, look at the constant used in its constructor, CommonConfigurationKeys.IO_SERIALIZATIONS_KEY, whose value is defined as:

  /** See <a href="{@docRoot}/../core-default.html">core-default.xml</a> */
  public static final String  IO_SERIALIZATIONS_KEY = "io.serializations";

When the constructor public SerializationFactory(Configuration conf) is used, the configuration is loaded from core-default.xml and core-site.xml. For example:

SerializationFactory factory = new SerializationFactory(conf);

In Hadoop 2.7.1, the default core-default.xml sets the io.serializations property as follows:

<property>
  <name>io.serializations</name>
  <value>org.apache.hadoop.io.serializer.WritableSerialization,org.apache.hadoop.io.serializer.avro.AvroSpecificSerialization,org.apache.hadoop.io.serializer.avro.AvroReflectSerialization</value>
  <description>A list of serialization classes that can be used for
  obtaining serializers and deserializers.</description>
</property>

Thus, the Serializations produced by SerializationFactory are, by default, these three:
org.apache.hadoop.io.serializer.WritableSerialization,
org.apache.hadoop.io.serializer.avro.AvroSpecificSerialization,
org.apache.hadoop.io.serializer.avro.AvroReflectSerialization

The matching serializer and Serialization are then obtained through the methods public <T> Serializer<T> getSerializer(Class<T> c) and public <T> Serialization<T> getSerialization(Class<T> c):

  public <T> Serializer<T> getSerializer(Class<T> c) {
    Serialization<T> serializer = getSerialization(c);
    if (serializer != null) {
      return serializer.getSerializer(c);
    }
    return null;
  }

  @SuppressWarnings("unchecked")
  public <T> Serialization<T> getSerialization(Class<T> c) {
    for (Serialization serialization : serializations) {
      if (serialization.accept(c)) {                          // Note 1
        return (Serialization<T>) serialization;
      }
    }
    return null;
  }

Note 1: if (serialization.accept(c)) calls the accept method of the corresponding class. For example, if serialization is an org.apache.hadoop.io.serializer.WritableSerialization, this calls:

  @InterfaceAudience.Private
  @Override
  public boolean accept(Class<?> c) {
    return Writable.class.isAssignableFrom(c);
  }

If serialization is an org.apache.hadoop.io.serializer.avro.AvroSpecificSerialization, this calls:

  @InterfaceAudience.Private
  @Override
  public boolean accept(Class<?> c) {
    return SpecificRecord.class.isAssignableFrom(c);          //注2
  }

Note 2:

public boolean isAssignableFrom(Class<?> cls)

Determines whether the class or interface represented by this Class object is the same as, or is a superclass or superinterface of, the class or interface represented by the specified Class parameter. It returns true if so; otherwise it returns false. If this Class object represents a primitive type, the method returns true only if the specified Class parameter is exactly this Class object; otherwise it returns false.

Specifically, this method tests whether the type represented by the specified Class parameter can be converted to the type represented by this Class object via an identity conversion or a widening reference conversion. See sections 5.1.1 and 5.1.4 of the Java Language Specification for details.

Parameters:
cls - the Class object to be checked
Returns:
a boolean indicating whether objects of type cls can be assigned to objects of this class
Throws:
NullPointerException - if the specified Class parameter is null.
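A quick illustration of these isAssignableFrom semantics, which is exactly the check that accept() performs (using the stdlib pair Comparable/Integer in place of Writable and a user class):

```java
public class AssignableDemo {
  public static void main(String[] args) {
    // A Comparable reference can hold an Integer:
    // supertype.isAssignableFrom(subtype) is true.
    System.out.println(Comparable.class.isAssignableFrom(Integer.class)); // true
    // The reverse direction fails: Integer is not a supertype of Comparable.
    System.out.println(Integer.class.isAssignableFrom(Comparable.class)); // false
    // A class is always assignable from itself (identity conversion).
    System.out.println(Integer.class.isAssignableFrom(Integer.class));    // true
  }
}
```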
