Spark BigDL: Deep Learning with LeNet-5

I. LeNet-5 model training and testing

(I) Convert local Linux images to SequenceFiles and store them on HDFS.

1. The program to run is kingpoint.utils.ImageToSeqFile.

2. First place the data on the local Linux file system. The folder layout is: dlDataImage/<image class>/<image name>.

For example, for handwritten-digit recognition there are ten classes, so the images are stored in ten folders, one per class, each folder holding the images of that class (see the example layout below).

(1) Image class folders


(2) Images
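For the handwritten-digit case, the expected layout looks like the following (the folder names are the class labels; the file names here are only illustrative):

dlDataImage/
    0/
        0_1.jpg
        0_2.jpg
        ...
    1/
        1_1.jpg
        ...
    9/
        9_1.jpg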



3. Code:

(1) ImageToSeqFile

package kingpoint.utils

import java.nio.file.{Files, Paths}
import com.intel.analytics.bigdl.dataset.DataSet
import com.intel.analytics.bigdl.dataset.image.{BGRImgToLocalSeqFile, LocalImgReaderWithName}

/**
 * Reads JPG images stored on the local Linux file system and writes them to HDFS
 * as sequence files.
 * Note: each image must be stored as "dir/<name>.jpg", where dir is the image's class;
 * one folder per class.
 * Created by llq on 2017/6/8.
 */
object ImageToSeqFile {

  /**
   * Converts a batch of images into sequence files.
   * @param blockSize maximum number of images per sequence file
   * @param hdfsSavePath HDFS directory where the sequence files are saved
   * @param hdfsSeqFile name of the sequence files
   * @param dir local directory containing the images
   * @param imageHigh image height
   * @param imageWidth image width
   */
  def toSeqFile(blockSize:Int,hdfsSavePath:String,hdfsSeqFile:String,dir:String,imageHigh:Int,imageWidth:Int): Unit ={
    // Process image data
    val validationFolderPath = Paths.get(dir)
    require(Files.isDirectory(validationFolderPath),
      s"${validationFolderPath} is not valid")

    val validationDataSet = DataSet.ImageFolder.paths(validationFolderPath)

    validationDataSet.shuffle()
    val iter = validationDataSet.data(train = false)
    (0 until 1).map(tid => {
      val workingThread = new Thread(new Runnable {
        override def run(): Unit = {
          val imageIter =LocalImgReaderWithName(imageHigh, imageWidth, 255f)(iter)

          val fileIter = BGRImgToLocalSeqFile(blockSize, Paths.get(hdfsSavePath,
            hdfsSeqFile), true)(imageIter)

          while (fileIter.hasNext) {
            println(s"Generated file ${fileIter.next()}")
          }
        }
      })
      workingThread.setDaemon(false)
      workingThread.start()
      workingThread
    }).foreach(_.join())

  }

  def main(args: Array[String]) {

    /**
     * Argument parsing
     */
    if(args.length<6){
      System.err.println("Error:the parameter is less than 6")
      System.exit(1)
    }

    //local Linux directory containing the images ("/root/data/dlDataImage/")
    val linuxPath=args(0)
    //how many images each sequence file contains(12800)
    val blockSize: Int =args(1).toInt
    //HDFS path where the sequence files are saved ("/user/root/dlData/")
    val hdfsSavePath=args(2)
    //name of the sequence files ("imagenet-seq")
    val hdfsSeqFile=args(3)
    //image height (28)
    val imageHigh=args(4).toInt
    //image width (28)
    val imageWidth=args(5).toInt

    //convert the images to sequence files and store them on HDFS
    println("Process image data...")
    toSeqFile(blockSize,hdfsSavePath,hdfsSeqFile,linuxPath,imageHigh,imageWidth)
    println("Done")

  }

}

4. Command to run:

spark-submit \
--master local[4] \
--driver-class-path /root/data/dlLibs/lib/bigdl-0.1.0-jar-with-dependencies.jar \
--class "kingpoint.utils.ImageToSeqFile" /root/data/SparkBigDL.jar \
/root/data/dlDataImage/train/ \
12800 \
/user/root/dlData/train/ \
imagenet-seq \
28 28

(1) Local Linux directory containing the images: /root/data/dlDataImage/train/

(2) Maximum number of images per sequence file: 12800

(3) HDFS path where the sequence files are saved: /user/root/dlData/train/

(4) Name of the sequence files: imagenet-seq

(5) Image height: 28

(6) Image width: 28


5. Each sequence file saved on HDFS contains the information of many images (image label, pixel values, image name). If the number of images exceeds the value set by parameter (2), a second sequence file is generated; the first sequence file is numbered 0, the second 1, and so on. A quick way to check the result is sketched below.
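As a sanity check, the generated files can be read back and counted. This is a minimal sketch (the object name CountSeqImages and the hard-coded path are only illustrative; the path should be the HDFS directory used in the command above), and it mirrors the imagesLoadSeq function of the training program in the next section:

import org.apache.hadoop.io.Text
import org.apache.spark.{SparkConf, SparkContext}

object CountSeqImages {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("CountSeqImages"))
    //each record is one image: key = label + image name, value = pixel bytes
    val records = sc.sequenceFile("/user/root/dlData/train/", classOf[Text], classOf[Text])
    println(s"number of images stored: ${records.count()}")
    sc.stop()
  }
}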



6. In the image data saved on HDFS, each pixel is expanded into 3 bytes (RGB), so when the pixels are read back the image width is 3 times the original.
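For the 28x28 handwritten-digit images used here, the stored width therefore becomes 28 x 3 = 84, which is why the training command in the next section passes an image width of 84 and an input-layer shape of 1,28,84.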



(II) Read the SequenceFiles from HDFS and train the LeNet-5 model.

1. The program to run is kingpoint.lenet5.LenetTrain.

2. Build the dataset as described in section (I) and store it on HDFS.

3. Code:

(1) LeNet5

package kingpoint.lenet5

import com.intel.analytics.bigdl._
import com.intel.analytics.bigdl.nn._
import com.intel.analytics.bigdl.numeric.NumericFloat

/**
 * LeNet-5 model
 * Created by llq on 2017/6/13.
 */
object LeNet5 {



  /**
   * Builds the model from user-supplied layer parameters (comma-separated strings).
   * @param input input shape "channels,height,width"
   * @param c1 C1 convolution "nInput,nOutput,kernelW,kernelH"
   * @param s2 S2 pooling "kW,kH,dW,dH"
   * @param c3 C3 convolution "nInput,nOutput,kernelW,kernelH"
   * @param s4 S4 pooling "kW,kH,dW,dH"
   * @param c5 C5 flattened size
   * @param f6 F6 linear layer "in,out"
   * @param output output linear layer "in,out"
   * @return the LeNet-5 model
   */
  def apply(input: String,c1: String,s2:String,c3:String,s4:String,c5:String,f6:String,output:String): Module[Float] = {
    val inputImage=input.split(",").map(_.toInt)
    val c1Image=c1.split(",").map(_.toInt)
    val s2Image=s2.split(",").map(_.toInt)
    val c3Image=c3.split(",").map(_.toInt)
    val s4Image=s4.split(",").map(_.toInt)
    val c5Image=c5.toInt
    val f6Image=f6.split(",").map(_.toInt)
    val outputImage=output.split(",").map(_.toInt)

    val model = Sequential()
    model.add(Reshape(Array(inputImage:_*)))
      //C1 layer: 1 input image, 6 output feature maps; 5x5 kernel
      .add(SpatialConvolution(c1Image(0), c1Image(1), c1Image(2), c1Image(3)).setName("conv1_5x5"))
      //activation function
      .add(Tanh())
      //S2 layer: pooling, halves image width and height; (kW, kH, dW, dH) = (kernel width, kernel height, stride in width, stride in height)
      .add(SpatialMaxPooling(s2Image(0), s2Image(1), s2Image(2), s2Image(3)))
      .add(Tanh())
      //C3 layer (12 feature maps)
      .add(SpatialConvolution(c3Image(0), c3Image(1), c3Image(2), c3Image(3)).setName("conv2_5x5"))
      //S4 layer
      .add(SpatialMaxPooling(s4Image(0), s4Image(1), s4Image(2), s4Image(3)))
      //C5 layer: flatten
      .add(Reshape(Array(c5Image)))
      //F6 layer
      .add(Linear(f6Image(0), f6Image(1)).setName("fc1"))
      .add(Tanh())
      //OUTPUT layer
      .add(Linear(outputImage(0), outputImage(1)).setName("fc2"))
      .add(LogSoftMax())
  }

  /**
   * Layer parameters for training on the handwritten-digit MNIST data.
   * @param classNum number of output classes
   * @return the LeNet-5 model
   */
  def apply(classNum: Int): Module[Float] = {
    val model = Sequential()
    model.add(Reshape(Array(1, 28, 28*3)))
      //C1 layer: 1 input image, 6 output feature maps; 5x5 kernel
      .add(SpatialConvolution(1, 6, 5, 5).setName("conv1_5x5"))
      //activation function
      .add(Tanh())
      //S2 layer: pooling, halves image width and height; (kW, kH, dW, dH) = (kernel width, kernel height, stride in width, stride in height)
      .add(SpatialMaxPooling(2, 2, 2, 2))
      .add(Tanh())
      //C3 layer (12 feature maps)
      .add(SpatialConvolution(6, 12, 5, 5).setName("conv2_5x5"))
      //S4 layer
      .add(SpatialMaxPooling(2, 2, 2, 2))
      //C5 layer: flatten
      .add(Reshape(Array(12 * 4 * 18)))
      //F6 layer
      .add(Linear(12 * 4 * 18, 100).setName("fc1"))
      .add(Tanh())
      //OUTPUT layer
      .add(Linear(100, classNum).setName("fc2"))
      .add(LogSoftMax())
  }
}
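For the MNIST configuration used in this article, the two constructors build the same network structure. A minimal usage sketch (shown only for illustration; the value names are hypothetical):

//fully parameterised form, as used by LenetTrain below
val modelFromStrings = LeNet5("1,28,84", "1,6,5,5", "2,2,2,2", "6,12,5,5", "2,2,2,2", "864", "864,100", "100,10")
//fixed MNIST form: only the number of output classes is passed
val modelFromClassNum = LeNet5(10)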

(2) LenetTrain
package kingpoint.lenet5

import java.io.File

import com.intel.analytics.bigdl._
import com.intel.analytics.bigdl.dataset.DataSet.SeqFileFolder
import com.intel.analytics.bigdl.dataset.image._
import com.intel.analytics.bigdl.dataset.{ByteRecord, DataSet}
import com.intel.analytics.bigdl.nn.ClassNLLCriterion
import com.intel.analytics.bigdl.optim._
import com.intel.analytics.bigdl.utils.{Engine, LoggerFilter, T}
import org.apache.hadoop.io.Text
import org.apache.log4j.{Level, Logger}
import org.apache.spark.SparkContext
import org.apache.spark.rdd.RDD
import org.apache.spark.sql.{SaveMode}
import org.apache.spark.sql.hive.HiveContext

import scala.collection.mutable.ArrayBuffer


/**
 * Holds the information of one image: label + data + fileName
 * @param label image label
 * @param data raw image bytes
 * @param imageName image file name
 */
case class LabeledDataFileName(label:Float,data:Array[Byte],imageName:String)

/**
 * Holds the model path and its accuracy
 * @param modelName path of the saved model
 * @param accuary accuracy on the validation set
 */
case class modelNameAccuary(modelName:String,accuary:String)

/**
 * Reads the image sequence files from HDFS and trains the model
 * Created by llq on 2017/6/6.
 */
object LenetTrain {
  LoggerFilter.redirectSparkInfoLogs()
  Logger.getLogger("com.intel.analytics.bigdl.optim").setLevel(Level.INFO)

  val testMean = 0.13251460696903547
  val testStd = 0.31048024

  /**
   * Reads the sequence file and produces LabeledDataFileName records.
   * @param url HDFS path of the sequence files
   * @param sc SparkContext
   * @return RDD of label + raw bytes + image name
   */
  def imagesLoadSeq(url: String, sc: SparkContext): RDD[LabeledDataFileName] = {
    sc.sequenceFile(url, classOf[Text], classOf[Text]).map(image => {
      LabeledDataFileName(SeqFileFolder.readLabel(image._1).toInt,
        image._2.copyBytes(),
        SeqFileFolder.readName(image._1))
    })
  }

  /**
   * Converts the image records into ByteRecord records.
   * @param imagesByteRdd RDD of label + raw bytes + image name
   * @return RDD of ByteRecord (pixel bytes + label)
   */
  def inLoad(imagesByteRdd:RDD[LabeledDataFileName]): RDD[ByteRecord]={

    imagesByteRdd.mapPartitions(iter=>
      iter.map{labeledDataFileName=>
        var img=new ArrayBuffer[Byte]()
        img ++= labeledDataFileName.data
        //drop the 8-byte header that precedes the raw pixel bytes
        img.remove(0,8)
        ByteRecord(img.toArray,labeledDataFileName.label)
      })
  }

  /**
   * Scans the model checkpoint directory and returns the file name of the
   * last saved iteration ("model.N" with the largest N).
   * @param file checkpoint directory
   */
  def lsLinuxCheckPointPath(file:File): String ={
    val modelPattern="model".r
    val numberPattern="[0-9]+".r
    var epcho=0
    if(file.isDirectory){
      val fileArray=file.listFiles()
      for(i<- 0 to fileArray.length-1){
        //match files whose name contains "model"
        if(modelPattern.findFirstIn(fileArray(i).getName).mkString(",")!=""){
          //keep the largest iteration number
          val epchoNumber=numberPattern.findFirstIn(fileArray(i).getName).mkString(",").toInt
          if(epchoNumber>epcho){
            epcho=epchoNumber
          }
        }
      }
    }else{
      throw new Exception("the path is not right")
    }
    "model."+epcho
  }

  /**
   * Main method: reads the sequence files and trains the LeNet-5 model.
   * @param args command-line arguments, see below
   */
  def main (args: Array[String]){
    val conf = Engine.createSparkConf()
      .setAppName("kingpoint.lenet5.LenetTrain")
    val sc = new SparkContext(conf)
    val hiveContext=new HiveContext(sc)
    Engine.init

    /**
     * Argument parsing
     */
    if(args.length<18){
      System.err.println("Error:the parameter is less than 18")
      System.exit(1)
    }
    //HDFS path of the image files (hdfs://hadoop-01.com:8020/user/root/dlData/train/)
    val hdfsPath=args(0)
    //ratio used to split the dataset into training and validation sets (7,3)
    val trainValidationRatio=args(1)
    //image height (28)
    val imageHigh=args(2).toInt
    //image width (28*3)
    val imageWidth=args(3).toInt

    //LeNet model parameters
    val input=args(4)         //input layer (one image, image height, image width) (1,28,84)
    val c1=args(5)            //C1 layer (1 input image, 6 output feature maps, 5x5 kernel) (1,6,5,5)
    val s2=args(6)            //S2 pooling layer (kernel width, kernel height, stride in width, stride in height) (2,2,2,2)
    val c3=args(7)            //C3 layer (6 input feature maps, 12 output feature maps, 5x5 kernel) (6,12,5,5)
    val s4=args(8)            //S4 pooling layer (kernel width, kernel height, stride in width, stride in height) (2,2,2,2)
    val c5=args(9)            //C5 layer (12 * 4 * 18) (864)
    val f6=args(10)           //F6 layer (12 * 4 * 18, 100) (864,100)
    val output=args(11)       //OUTPUT layer (100 input neurons, 10 output neurons: the number of classes) (100,10)
    val learningRate=args(12).toDouble      //learning rate (0.01)
    val learningRateDecay=args(13).toDouble //learning rate decay (0.0)
    val maxEpoch=args(14).toInt             //maximum number of epochs before training stops (1)
    val batchSize=args(15).toInt            //batch size (4)
    val modelSave=args(16)                  //model save path (/root/data/model)
    val outputTableName=args(17)            //name of the Hive table where the training results are saved (dl.lenet_train)

    /**
     * Read and transform the data
     */
    //read label + data + filename of each image => RDD[LabeledDataFileName]
    val imagesByteRdd=imagesLoadSeq(hdfsPath,sc).coalesce(32, true)

    //split into training and validation sets
    val trainRatio=trainValidationRatio.split(",")(0).toInt
    val validataionRatio=trainValidationRatio.split(",")(1).toInt
    val imagesByteSplitRdd=imagesByteRdd.randomSplit(Array(trainRatio,validataionRatio))
    val trainSplitRdd=imagesByteSplitRdd(0)
    val validationSplitRdd=imagesByteSplitRdd(1)

    //training set: convert to grey images -> normalize -> batch (the data is split into batches; the weights are updated once per batch)
    val trainSet = DataSet.rdd(inLoad(trainSplitRdd)) ->
      BytesToGreyImg(imageHigh, imageWidth) -> GreyImgNormalizer(testMean, testStd) -> GreyImgToBatch(batchSize)
    //validation set: same pipeline
    val validationSet = DataSet.rdd(inLoad(validationSplitRdd)) ->
      BytesToGreyImg(imageHigh, imageWidth) -> GreyImgNormalizer(testMean, testStd) -> GreyImgToBatch(batchSize)

    /**
     * Model configuration and training
     */
    //build the LeNet-5 model with the given parameters
    val model = LeNet5(input,c1,s2,c3,s4,c5,f6,output)

    //learning-rate settings (used by gradient descent)
    val state =
      T(
        "learningRate" -> learningRate,
        "learningRateDecay" -> learningRateDecay
      )

    //optimizer: the model, the training set, and the criterion used to compute the error that drives the weight updates
    val optimizer = Optimizer(model = model, dataset = trainSet,criterion = new ClassNLLCriterion[Float]())

    optimizer.setCheckpoint(modelSave, Trigger.everyEpoch)

    //train the model: set the validation set, the learning-rate state and the stopping trigger, then start the optimization
    optimizer
      .setValidation(
        trigger = Trigger.everyEpoch,
        dataset = validationSet,
        vMethods = Array(new Top1Accuracy, new Top5Accuracy[Float], new Loss[Float]))
      .setState(state)
      .setEndWhen(Trigger.maxEpoch(maxEpoch))    //stop after maxEpoch epochs
      .optimize()

    //scan the checkpoint directory for the model file of the last iteration and build its full path
    val modelEpochFile=optimizer.getCheckpointPath().get+"/"+lsLinuxCheckPointPath(new File(optimizer.getCheckpointPath().get))

    //compute the accuracy on the validation set
    val validator = Validator(model, validationSet)
    val result = validator.test(Array(new Top1Accuracy[Float]))

    /**
     * Save the model path and accuracy
     */
    val modelNameAccuaryRdd=sc.parallelize(List(modelNameAccuary(modelEpochFile,result(0)._1.toString)))
    val modelNameAccuaryDf=hiveContext.createDataFrame(modelNameAccuaryRdd)

    //save to Hive
    modelNameAccuaryDf.show()
    modelNameAccuaryDf.write.mode(SaveMode.Overwrite).saveAsTable(outputTableName)
  }
}

4. Command to run:

spark-submit \
--master local[4] \
--driver-memory 2g \
--executor-memory 2g \
--driver-class-path /root/data/dlLibs/lib/bigdl-0.1.0-jar-with-dependencies.jar \
--class "kingpoint.lenet5.LenetTrain" /root/data/SparkBigDL.jar \
hdfs://hadoop-01.com:8020/user/root/dlData/train/ \
7,3 \
28 84 \
1,28,84 \
1,6,5,5 \
2,2,2,2 \
6,12,5,5 \
2,2,2,2 \
864 \
864,100 \
100,10 \
0.01 \
0.0 \
1 \
4 \
/root/data/model \
dl.lenet_train

(1) HDFS path of the image files: hdfs://hadoop-01.com:8020/user/root/dlData/train/

(2) Ratio used to split the dataset into training and validation sets, in the format: 7,3

(3) Image height: 28

(4) Image width: 84

(5) Input layer (one image, image height, image width): 1,28,84

(6) C1 layer (1 input image, 6 output feature maps, 5x5 kernel): 1,6,5,5

(7) S2 pooling layer (kernel width, kernel height, stride in width, stride in height): 2,2,2,2

(8) C3 layer (6 input feature maps, 12 output feature maps, 5x5 kernel): 6,12,5,5

(9) S4 pooling layer (kernel width, kernel height, stride in width, stride in height): 2,2,2,2

(10) C5 layer (12 * 4 * 18): 864 (see the shape derivation after this list)

(11) F6 layer (12 * 4 * 18, 100): 864,100

(12) OUTPUT layer (100 input neurons, 10 output neurons: the number of classes): 100,10

(13) Learning rate: 0.01

(14) learningRateDecay: 0.0

(15) Maximum number of epochs before training stops: 1

(16) Batch size: 4

(17) Model save path: /root/data/model

(18) Name of the Hive table where the training results are saved: dl.lenet_train
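Where the value 864 in parameters (10) and (11) comes from: starting from the 1x28x84 grey-image input, the C1 5x5 convolution gives 6 maps of (28-5+1) x (84-5+1) = 24 x 80; S2 2x2 pooling gives 6 maps of 12 x 40; the C3 5x5 convolution gives 12 maps of 8 x 36; S4 2x2 pooling gives 12 maps of 4 x 18; flattening yields 12 * 4 * 18 = 864 inputs for the F6 layer.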


5. Output

The results are saved in Hive; the output columns are the model save path (modelName) and the validation-set accuracy (accuary).

Note: when you want to test the model, look up the value of modelName and pass it as a parameter to the test program. One way to look it up is sketched below.
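A minimal sketch of such a lookup (the column names come from the modelNameAccuary case class above, and the table name is the one passed as the last training argument; it assumes an existing SparkContext sc, as in the programs above):

import org.apache.spark.sql.hive.HiveContext

val hiveContext = new HiveContext(sc)
hiveContext.sql("SELECT modelName, accuary FROM dl.lenet_train").show()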


(III) Test the trained LeNet-5 model with the test set.

1. The program to run is kingpoint.lenet5.LenetTest.

2. Train the model as described in section (II) and save it on the local Linux file system.

3. Code:

(1) ToByteRecords

package kingpoint.image

/**
 * Converts Row => ByteRecord
 * Created by llq on 2017/6/13.
 */
import com.intel.analytics.bigdl.dataset.{ByteRecord, Transformer}
import org.apache.log4j.Logger
import org.apache.spark.sql.Row

import scala.collection.Iterator

object ToByteRecords {
  val logger = Logger.getLogger(getClass)

  def apply(colName: String = "data", label:String= "label"): ToByteRecords = {
    new ToByteRecords(colName,label)
  }
}

/**
 * transform [[Row]] to [[ByteRecord]]
 * @param colName column name
 * @param label label name
 */
class ToByteRecords(colName: String,label:String)
  extends Transformer[Row, ByteRecord] {

  override def apply(prev: Iterator[Row]): Iterator[ByteRecord] = {
    prev.map(
      img => {
        //skip the 8-byte header in front of the pixel bytes and copy only the pixel data
        val pixelLength=img.getAs[Array[Byte]](colName).length-8
        val byteData=new Array[Byte](pixelLength)
        for(j<-0 to pixelLength-1){
          byteData(j)=img.getAs[Array[Byte]](colName)(j+8)
        }
        ByteRecord(byteData, img.getAs[Float](label))
      }
    )
  }
}

(2) GreyImgToImageVector

package kingpoint.image

/**
 * grey img to (label,denseVector)
 * Created by llq on 2017/6/13.
 */
import com.intel.analytics.bigdl.dataset.Transformer
import com.intel.analytics.bigdl.dataset.image.LabeledGreyImage
import org.apache.log4j.Logger
import org.apache.spark.mllib.linalg.DenseVector

import scala.collection.Iterator

object GreyImgToImageVector {
  val logger = Logger.getLogger(getClass)

  def apply(): GreyImgToImageVector = {
    new GreyImgToImageVector()
  }
}

/**
 * Convert a Grey image to (label,denseVector) of spark mllib
 */
class GreyImgToImageVector()
  extends Transformer[LabeledGreyImage, (Float,DenseVector)] {

  private var featureData: Array[Float] = null

  override def apply(prev: Iterator[LabeledGreyImage]): Iterator[(Float,DenseVector)] = {
    prev.map(
      img => {
        if (null == featureData) {
          featureData = new Array[Float](img.height() * img.width())
        }
        featureData=img.content
        (img.label(),new DenseVector(featureData.map(_.toDouble)))
      }
    )
  }
}

(3) LenetTest

package kingpoint.lenet5

import com.intel.analytics.bigdl.dataset.DataSet.SeqFileFolder
import com.intel.analytics.bigdl.dataset.Transformer
import com.intel.analytics.bigdl.dataset.image.{BytesToGreyImg, GreyImgNormalizer}
import com.intel.analytics.bigdl.nn.Module
import com.intel.analytics.bigdl.utils.{Engine, LoggerFilter}
import kingpoint.image.{GreyImgToImageVector, ToByteRecords}
import org.apache.hadoop.io.Text
import org.apache.log4j.{Level, Logger}
import org.apache.spark.SparkContext
import org.apache.spark.ml.param.ParamMap
import org.apache.spark.ml.{DLClassifier => SparkDLClassifier}
import org.apache.spark.mllib.linalg.DenseVector
import org.apache.spark.rdd.RDD
import org.apache.spark.sql.hive.HiveContext
import org.apache.spark.sql.{SaveMode, DataFrame, Row}


/**
 * Holds the image information used in the ML pipeline after preprocessing: label + features + fileName
 * @param label image label
 * @param features image pixels as a dense vector
 * @param imageName image file name
 */
case class LabeledDataFloatImageName(label:Float,features:DenseVector,imageName:String)

/**
 * Holds the model evaluation results: count + accuracy
 * @param count number of test images
 * @param accuracy accuracy on the test set
 */
case class countAccuary(count:Double,accuracy:Double)
/**
 * LeNet model testing
 * Created by llq on 2017/6/13.
 */
object LenetTest {
  LoggerFilter.redirectSparkInfoLogs()
  Logger.getLogger("com.intel.analytics.bigdl.optim").setLevel(Level.INFO)

  val testMean = 0.13251460696903547
  val testStd = 0.31048024

  /**
   * Reads the sequence file and produces LabeledDataFileName records.
   * @param url HDFS path of the sequence files
   * @param sc SparkContext
   * @return RDD of label + raw bytes + image name
   */
  def imagesLoadSeq(url: String, sc: SparkContext): RDD[LabeledDataFileName] = {
    sc.sequenceFile(url, classOf[Text], classOf[Text]).map(image => {
      LabeledDataFileName(SeqFileFolder.readLabel(image._1).toInt,
        image._2.copyBytes(),
        SeqFileFolder.readName(image._1))
    })
  }

  /**
   * DataFrame transformation for the ML pipeline.
   * Merges label + transformed data + imageName.
   * @param data input DataFrame
   * @param f transformer chain applied to each row
   * @return DataFrame of LabeledDataFloatImageName
   */
  def transformDF(data: DataFrame, f: Transformer[Row, (Float,DenseVector)]): DataFrame = {
    //transform the data with the transformer chain, producing RDD[(Float, DenseVector)]
    val vectorRdd = data.rdd.mapPartitions(f(_))
    //merge: transformed data + image name + label
    val dataRDD = data.rdd.zipPartitions(vectorRdd) { (a, b) =>
      b.zip(a.map(_.getAs[String]("imageName")))
        .map(
          v => LabeledDataFloatImageName(v._1._1, v._1._2,v._2)
        )
    }
    data.sqlContext.createDataFrame(dataRDD)
  }

  /**
   * Computes the accuracy of the predictions.
   * @param testResult DataFrame with "label" and "predict" columns
   * @return countAccuary (number of test images, accuracy)
   */
  def evaluationAccuracy(testResult:DataFrame): countAccuary ={
    //label - predict (0 means the prediction is correct)
    val labelSubPredictArray=testResult.select("label","predict").rdd.map{row=>
      val label=row.getAs[Float]("label")
      val predict=row.getAs[Int]("predict")
      label-predict
    }.collect()

    //count the correct predictions and compute the accuracy
    var correct:Double=0.0
    for(i<-0 to labelSubPredictArray.length-1){
      if(labelSubPredictArray(i)==0){
        correct += 1
      }
    }
    val accuary=correct/labelSubPredictArray.length
    countAccuary(labelSubPredictArray.length,accuary)
  }

  def main(args: Array[String]) {
    val conf = Engine.createSparkConf()
      .setAppName("kingpoint.lenet5.LenetTest")
    val sc = new SparkContext(conf)
    Engine.init
    val hiveContext = new HiveContext(sc)

    /**
     * Argument parsing
     */
    if(args.length<7){
      System.err.println("Error:the parameter is less than 7")
      System.exit(1)
    }
    //HDFS path of the test image files (hdfs://hadoop-01.com:8020/user/root/dlData/test/)
    val hdfsPath=args(0)
    //model path (/root/data/model/20170615_101109/model.121)
    val modelPath=args(1)
    //batchSize (16)
    val batchSize=args(2).toInt
    //image height (28)
    val imageHigh=args(3).toInt
    //image width (28*3)
    val imageWidth=args(4).toInt
    //name of the Hive table where the test results are saved (dl.lenet_test)
    val outputTableName=args(5)
    //name of the Hive table where the evaluation results are saved (dl.lenet_test_evaluation)
    val outputTableNameEvaluation=args(6)

    //read label + data + filename of each image => RDD[LabeledDataFileName]
    val imagesByteRdd=imagesLoadSeq(hdfsPath,sc).coalesce(32, true)

    /**
     * Model loading and testing
     */
    //load the trained model
    val model =  Module.load[Float](modelPath)

    val valTrans = new SparkDLClassifier[Float]()
      .setInputCol("features")
      .setOutputCol("predict")

    //batch shape: (batch size, 3 channels, height, width/3); the width is divided
    //by 3 because the stored width was tripled when the pixels were expanded to RGB
    val paramsTrans = ParamMap(
      valTrans.modelTrain -> model,
      valTrans.batchShape ->
        Array(batchSize, 3, imageHigh, imageWidth/3))

    //dataset preprocessing
    val transf = ToByteRecords() ->
      BytesToGreyImg(imageHigh, imageWidth) ->
      GreyImgNormalizer(testMean, testStd) ->
      GreyImgToImageVector()

    //build the prediction-result DataFrame
    val valDF = transformDF(hiveContext.createDataFrame(imagesByteRdd), transf)
    val testResult=valTrans.transform(valDF, paramsTrans).select("label","imageName","predict")
    testResult.show()

    //compute the accuracy and build a DataFrame from it
    val countAccuracyDf=hiveContext.createDataFrame(sc.parallelize(Seq(evaluationAccuracy(testResult))))
    countAccuracyDf.show()

    /**
     * Save the results
     */
    //save to Hive
    testResult.write.mode(SaveMode.Overwrite).saveAsTable(outputTableName)
    countAccuracyDf.write.mode(SaveMode.Overwrite).saveAsTable(outputTableNameEvaluation)
  }


}

4. Command to run:

spark-submit \
--master local[4] \
--driver-memory 2g \
--executor-memory 2g \
--driver-class-path /root/data/dlLibs/lib/bigdl-0.1.0-jar-with-dependencies.jar \
--class "kingpoint.lenet5.LenetTest" /root/data/SparkBigDL.jar \
hdfs://hadoop-01.com:8020/user/root/dlData/test/ \
/root/data/model/20170615_101109/model.121 \
16 \
28 84 \
dl.lenet_test \
dl.lenet_test_evaluation

(1) HDFS path of the test image files: hdfs://hadoop-01.com:8020/user/root/dlData/test/

(2) Model path: /root/data/model/20170615_101109/model.121

(3) batchSize: 16

(4) Image height: 28

(5) Image width: 84

(6) Name of the Hive table where the test results are saved: dl.lenet_test

(7) Name of the Hive table where the evaluation results are saved: dl.lenet_test_evaluation

 

5. Saved results

(1) dl.lenet_test (columns: label, imageName, predict)


(2) dl.lenet_test_evaluation (columns: count, accuracy)
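With these two tables in place, the per-image predictions can be inspected directly from Hive. A minimal sketch (table names as above, assuming an existing HiveContext hiveContext as in LenetTest) that lists the misclassified images and the overall evaluation:

//images whose predicted class differs from the true label
hiveContext.sql("SELECT imageName, label, predict FROM dl.lenet_test WHERE label <> predict").show()
//number of test images and overall accuracy
hiveContext.sql("SELECT * FROM dl.lenet_test_evaluation").show()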




