A Beginner's First Attempt at Algorithms --- PCA (Principal Component Analysis) Dimensionality Reduction [Works for Data of Any Dimensionality]

-------------------------------------------------------------------------------------
The author focuses on implementation rather than lengthy theory. For the theory behind PCA, this blog post is recommended:

http://www.cnblogs.com/pinard/p/6239403.html

Note: the earlier version of this post was not very good, so it has been rewritten using the Wine data set from UCI.

For LDA dimensionality reduction, see the author's next post: https://blog.csdn.net/Java_Man_China/article/details/89504514

For KPCA dimensionality reduction, see: https://blog.csdn.net/Java_Man_China/article/details/89677371
-------------------------------------------------------------------------------------
import breeze.linalg.{DenseMatrix, eig}
import org.apache.log4j.{Level, Logger}
import org.apache.spark.ml.feature.{LabeledPoint, StandardScaler, VectorAssembler}
import org.apache.spark.ml.linalg.Vectors
import org.apache.spark.sql.types.{DoubleType, StructField, StructType}
import org.apache.spark.sql.{DataFrame, Row, SparkSession}
import scala.collection.mutable.ArrayBuffer

/** Reduces dimensionality via PCA (Principal Component Analysis).
  * Data source: http://archive.ics.uci.edu/ml/datasets/Wine
  * @author XiaoTangBao
  * @date 2019/4/16 9:16
  * @version 1.0
  */
object PCA {
  def main(args: Array[String]): Unit = {
    // Suppress Spark logging noise
    Logger.getLogger("org.apache.spark").setLevel(Level.ERROR)
    // Initialize Spark
    val spark = SparkSession.builder().master("local[4]").appName("PCA").getOrCreate()
    // Load the data set: http://archive.ics.uci.edu/ml/datasets/Wine
    // Each line is a label followed by 13 feature values, comma-separated
    val data = spark.sparkContext.textFile("G:\\mldata\\wine.data")
      .map(line => line.split(","))
      .map(arr => arr.map(str => str.toDouble))
      .map(arr => Row.fromSeq(arr.toSeq))

    // Feature column names, used both for the schema and for the VectorAssembler
    val featuresArr = Array("Alcohol","Malic acid","Ash","Alcalinity of ash","Magnesium",
      "Total phenols","Flavanoids","Nonflavanoid phenols","Proanthocyanins","Color intensity",
      "Hue","OD280/OD315 of diluted wines","Proline")
    // Schema: the label column followed by the 13 feature columns
    val schema = StructType(
      StructField("label", DoubleType, true) ::
        featuresArr.map(name => StructField(name, DoubleType, true)).toList)
    val oridf = spark.createDataFrame(data, schema)

    // Assemble the 13 feature columns into a single vector column
    val vectorAsb = new VectorAssembler().setInputCols(featuresArr).setOutputCol("features")
    val newdf = vectorAsb.transform(oridf).select("label","features")
    // Standardize the data (zero mean, unit variance); after this step
    // no separate mean-centering is needed before PCA
    val std = new StandardScaler().setInputCol("features").setOutputCol("Scaledfeatures")
      .setWithMean(true).setWithStd(true).fit(newdf).transform(newdf)
      .select("label","Scaledfeatures")
      .withColumnRenamed("Scaledfeatures","features")

    // Reduce to 2 dimensions and print the projected coordinates,
    // one principal component at a time (one value per sample)
    val result = run(std,2)
    val arr = ArrayBuffer[(Double,Double)]()
    for(i <- 0 until result.cols) arr.append((result(0,i),result(1,i)))
    arr.foreach(tp => println(tp._1))
    println()
    arr.foreach(tp => println(tp._2))
  }

  /**
    * Reduces the dimensionality of the input data via PCA.
    * @param df the original high-dimensional data, containing "label" and "features" columns
    * @param k the target number of dimensions
    * @return a k x n matrix whose columns are the projected samples
    */
  def run(df:DataFrame,k:Int): DenseMatrix[Double] = {
    // Collect the feature vectors and labels to the driver as plain arrays
    val trainData = df.select("features").rdd
      .map(row => row.getAs[org.apache.spark.ml.linalg.Vector](0).toArray)
      .collect()

    val labels = df.select("label").rdd
      .map(row => row.getDouble(0))
      .collect()

    // Number of features
    val tz = trainData(0).length
    // Re-attach labels to the feature vectors
    val labArr = ArrayBuffer[LabeledPoint]()
    for(i <- 0 until trainData.length) labArr.append(LabeledPoint(labels(i),Vectors.dense(trainData(i))))
    // Pack all samples into one matrix; DenseMatrix fills column-major,
    // so each column is one sample (tz rows x n columns)
    val allData = labArr.map(lab => lab.features).map(vec => vec.toArray).flatMap(x => x).toArray
    val big_Matrx = new DenseMatrix[Double](tz,trainData.length,allData)
    
//    // Per-dimension means (only needed when the data is not standardized)
//    val big_mean = sum(big_Matrx,Axis._1).*= (1.0 / big_Matrx.cols)
//    // Mean-center each sample
//    for(i<-0 until big_Matrx.cols) big_Matrx(::,i) := big_Matrx(::,i) - big_mean

    // Covariance matrix of the samples (the data is already standardized)
    val covMatrix = (big_Matrx * big_Matrx.t) * (1.0 / (big_Matrx.cols-1))
    // Eigen-decompose once and reuse both parts of the result
    val eigResult = eig(covMatrix)
    val eigValues = eigResult.eigenvalues
    val eigVectors = eigResult.eigenvectors

    // Select the eigenvectors of the k largest eigenvalues.
    // argsort gives indices in ascending order of eigenvalue, so reverse
    // it and keep the first k; each selected eigenvector becomes a row of rt.
    val topK = breeze.linalg.argsort(eigValues).reverse.take(k)
    val rt = DenseMatrix.zeros[Double](k, eigVectors.rows)
    for((idx, i) <- topK.zipWithIndex) rt(i, ::) := eigVectors(::, idx).t

    // Projected data set: k x n, one column per sample
    val newData = rt * big_Matrx
    newData
  }
}
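The code above fixes k = 2 up front. A common way to pick k (not part of the original code) is the explained-variance ratio: the sum of the k largest eigenvalues divided by the sum of all eigenvalues. A minimal pure-Scala sketch with hypothetical eigenvalues; for standardized data the eigenvalues sum to the number of features, here 13:

```scala
object ExplainedVariance {
  // Fraction of total variance captured by the k largest eigenvalues
  def explained(eigenvalues: Array[Double], k: Int): Double =
    eigenvalues.sorted.reverse.take(k).sum / eigenvalues.sum

  def main(args: Array[String]): Unit = {
    // Hypothetical eigenvalues for a standardized 13-feature data set
    val eigs = Array(4.7, 2.5, 1.4, 0.9, 0.8, 0.6, 0.55, 0.35,
                     0.3, 0.25, 0.23, 0.22, 0.2)
    // With these numbers the first two components capture 7.2 / 13, about 55%
    println(explained(eigs, 2))
  }
}
```

One would then keep the smallest k whose ratio exceeds a chosen threshold, e.g. 0.95.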

The original Wine data set has 13 features; two of them were picked to draw a 2-D scatter plot.
[Figure: scatter plot of two raw features]
Without standardization (mean-centering only), reducing to two dimensions with PCA gives:
[Figure: PCA projection of the unstandardized data]
At this point the reduction shows almost no effect; the data is still mixed together (the classes cannot be separated).
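One reason standardization matters so much here: Proline is measured in the hundreds while features such as Hue sit near 1, so without scaling, Proline's variance dominates the covariance matrix and the first principal component mostly just reproduces Proline. A minimal pure-Scala sketch with hypothetical values illustrating the scale gap:

```scala
object ScaleDominance {
  // Sample variance (n-1 denominator, as Spark's StandardScaler uses)
  def variance(xs: Array[Double]): Double = {
    val mean = xs.sum / xs.length
    xs.map(x => (x - mean) * (x - mean)).sum / (xs.length - 1)
  }

  // z-score: subtract the mean, divide by the standard deviation
  def zscore(xs: Array[Double]): Array[Double] = {
    val mean = xs.sum / xs.length
    val sd = math.sqrt(variance(xs))
    xs.map(x => (x - mean) / sd)
  }

  def main(args: Array[String]): Unit = {
    // Hypothetical values: a Proline-like feature in the hundreds
    // versus a Hue-like feature near 1
    val proline = Array(1065.0, 1050.0, 1185.0, 1480.0, 735.0)
    val hue = Array(1.04, 1.05, 1.03, 0.86, 1.05)
    println(variance(proline))          // tens of thousands: dominates the covariance
    println(variance(hue))              // tiny by comparison
    println(variance(zscore(proline)))  // ~1.0 after standardization
    println(variance(zscore(hue)))      // ~1.0 after standardization
  }
}
```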

The figure below shows the result of PCA via the Python library, again without standardizing the data.

-------- Python library call -------- from sklearn.decomposition import PCA ------------------

Comparing the two figures above, the results after reduction seem to differ. This is because the eigenvectors found by the Python library are not the same as the ones computed here: for the same matrix, each eigenvalue has infinitely many associated eigenvectors (any nonzero scalar multiple of an eigenvector is also an eigenvector). To match the Python result, one of the computed eigenvectors was multiplied by -1, which gives:
[Figure: projection after flipping the sign of one eigenvector]
Now the two results agree.
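The sign ambiguity is easy to verify directly: if A·v = λ·v then A·(−v) = λ·(−v), so flipping a component's sign only mirrors the projection. A tiny pure-Scala check on a hypothetical 2x2 symmetric matrix:

```scala
object SignAmbiguity {
  // A small symmetric matrix, a stand-in for a covariance matrix
  val a = Array(Array(2.0, 1.0), Array(1.0, 2.0))

  // Matrix-vector product
  def matVec(m: Array[Array[Double]], v: Array[Double]): Array[Double] =
    m.map(row => row.zip(v).map { case (x, y) => x * y }.sum)

  def main(args: Array[String]): Unit = {
    // v = (1, 1) is an eigenvector of a with eigenvalue 3
    val v = Array(1.0, 1.0)
    val minusV = v.map(x => -x)
    // A*v = 3*v and A*(-v) = 3*(-v): both signs are valid eigenvectors,
    // so different implementations may legitimately return either one
    println(matVec(a, v).mkString(","))       // prints 3.0,3.0
    println(matVec(a, minusV).mkString(","))  // prints -3.0,-3.0
  }
}
```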

After standardizing the data, PCA reduction to two dimensions looks like this (the result was multiplied by -1 for easier comparison with the Python library):
[Figure: PCA projection of the standardized data]
The data is finally separable. (Time for a glass of wine!)
The figure below shows the Python library's result on the standardized data (identical, as expected):
[Figure: sklearn PCA projection of the standardized data]
The experiment confirms that the implementation above is correct.
