A Summary of the Normalization Methods in Spark ML

The org.apache.spark.ml.feature package contains four different normalization methods:

  • Normalizer
  • StandardScaler
  • MinMaxScaler
  • MaxAbsScaler

They can be easy to confuse, so this post summarizes them with the help of the official documentation and transformations of actual data.

Original post: http://www.neilron.xyz/spark-ml-feature-scaler/

0 Data Preparation

import org.apache.spark.ml.linalg.Vectors

val dataFrame = spark.createDataFrame(Seq(
  (0, Vectors.dense(1.0, 0.5, -1.0)),
  (1, Vectors.dense(2.0, 1.0, 1.0)),
  (2, Vectors.dense(4.0, 10.0, 2.0))
)).toDF("id", "features")

dataFrame.show

// Original data
+---+--------------+
| id|      features|
+---+--------------+
|  0|[1.0,0.5,-1.0]|
|  1| [2.0,1.0,1.0]|
|  2|[4.0,10.0,2.0]|
+---+--------------+

1 Normalizer

Normalizer operates on each row, rescaling every row vector to unit norm. The sample code below is taken from the official Spark documentation, with minor rewrites and added comments.

import org.apache.spark.ml.feature.Normalizer

// Normalize each row vector to unit L^1 norm
val normalizer = new Normalizer()
  .setInputCol("features")
  .setOutputCol("normFeatures")
  .setP(1.0)

val l1NormData = normalizer.transform(dataFrame)
println("Normalized using L^1 norm")
l1NormData.show()

// Each row is rescaled so that its L^1 norm equals 1; the L^1 norm is the sum of the absolute values of the entries.
+---+--------------+------------------+
| id|      features|      normFeatures|
+---+--------------+------------------+
|  0|[1.0,0.5,-1.0]|    [0.4,0.2,-0.4]|
|  1| [2.0,1.0,1.0]|   [0.5,0.25,0.25]|
|  2|[4.0,10.0,2.0]|[0.25,0.625,0.125]|
+---+--------------+------------------+
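As a quick hand-check (my own arithmetic, not from the original post): the L^1 norm of the first row is |1.0| + |0.5| + |-1.0| = 2.5, and dividing each entry by it reproduces the first output row.

// Hand-check of the first row against the table above.
val row0 = Array(1.0, 0.5, -1.0)
val l1 = row0.map(math.abs).sum                    // 2.5
println(row0.map(_ / l1).mkString("[", ",", "]"))  // [0.4,0.2,-0.4]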

// Normalize each row vector to unit L^inf norm
val lInfNormData = normalizer.transform(dataFrame, normalizer.p -> Double.PositiveInfinity)
println("Normalized using L^inf norm")
lInfNormData.show()

// The L^inf norm of a vector is the maximum absolute value among its entries
+---+--------------+--------------+
| id|      features|  normFeatures|
+---+--------------+--------------+
|  0|[1.0,0.5,-1.0]|[1.0,0.5,-1.0]|
|  1| [2.0,1.0,1.0]| [1.0,0.5,0.5]|
|  2|[4.0,10.0,2.0]| [0.4,1.0,0.2]|
+---+--------------+--------------+
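The same check for the L^inf case (again my own arithmetic): the last row is divided by max(|4.0|, |10.0|, |2.0|) = 10.0.

// Hand-check of the last row against the table above.
val row2 = Array(4.0, 10.0, 2.0)
val lInf = row2.map(math.abs).max                    // 10.0
println(row2.map(_ / lInf).mkString("[", ",", "]"))  // [0.4,1.0,0.2]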

2 StandardScaler

StandardScaler operates on each column, i.e. each feature dimension, standardizing features to unit standard deviation, to zero mean, or to both.
It has two main parameters:
- withStd: true by default. Scales the data to unit standard deviation.
- withMean: false by default. Whether to center the data to zero mean before scaling.

StandardScaler must be fit on the data first, to obtain the mean and standard deviation of each dimension, which are then used to scale each feature.

import org.apache.spark.ml.feature.StandardScaler

val scaler = new StandardScaler()
  .setInputCol("features")
  .setOutputCol("scaledFeatures")
  .setWithStd(true)
  .setWithMean(false)

// Compute summary statistics by fitting the StandardScaler.
val scalerModel = scaler.fit(dataFrame)

// Normalize each feature to have unit standard deviation.
val scaledData = scalerModel.transform(dataFrame)
scaledData.show

// Each column is scaled to unit standard deviation.
+---+--------------+------------------------------------------------------------+
|id |features      |scaledFeatures                                              |
+---+--------------+------------------------------------------------------------+
|0  |[1.0,0.5,-1.0]|[0.6546536707079772,0.09352195295828244,-0.6546536707079771]|
|1  |[2.0,1.0,1.0] |[1.3093073414159544,0.1870439059165649,0.6546536707079771]  |
|2  |[4.0,10.0,2.0]|[2.618614682831909,1.870439059165649,1.3093073414159542]    |
+---+--------------+------------------------------------------------------------+
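The scaling factor for each column is its corrected sample standard deviation (n - 1 in the denominator), and with withMean = false the values are divided by it without centering. A hand-check of the first column (my own arithmetic, not from the post):

// First column is [1.0, 2.0, 4.0]; its sample standard deviation is sqrt(7/3).
val xs = Array(1.0, 2.0, 4.0)
val mean = xs.sum / xs.length                               // 7/3
val std = math.sqrt(xs.map(x => (x - mean) * (x - mean)).sum / (xs.length - 1))
println(xs.map(_ / std).mkString("[", ",", "]"))            // [0.65465...,1.30930...,2.61861...]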

3 MinMaxScaler

MinMaxScaler likewise operates on each column, i.e. each feature dimension, mapping each feature linearly to a specified range, usually [0, 1].
It also has two parameters (the rescaling formula is sketched after this list):
- min: 0.0 by default. The lower bound of the target range.
- max: 1.0 by default. The upper bound of the target range.
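Per the Spark documentation, each value e_i in a column E is rescaled as (e_i - E_min) / (E_max - E_min) * (max - min) + min. A minimal scalar sketch of that formula (minMaxScale is my own illustrative helper, not part of the Spark API):

// columnMin / columnMax come from fitting; lo / hi are the min / max parameters.
// Spark maps a constant column (columnMax == columnMin) to 0.5 * (hi + lo).
def minMaxScale(x: Double, columnMin: Double, columnMax: Double,
                lo: Double = 0.0, hi: Double = 1.0): Double =
  if (columnMax == columnMin) 0.5 * (hi + lo)
  else (x - columnMin) / (columnMax - columnMin) * (hi - lo) + lo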

import org.apache.spark.ml.feature.MinMaxScaler

val scaler = new MinMaxScaler()
  .setInputCol("features")
  .setOutputCol("scaledFeatures")

// Compute summary statistics and generate MinMaxScalerModel
val scalerModel = scaler.fit(dataFrame)

// rescale each feature to range [min, max].
val scaledData = scalerModel.transform(dataFrame)
println(s"Features scaled to range: [${scaler.getMin}, ${scaler.getMax}]")
scaledData.select("features", "scaledFeatures").show

// Each feature is mapped linearly: the column minimum maps to 0 and the column maximum maps to 1.
+--------------+-----------------------------------------------------------+
|features      |scaledFeatures                                             |
+--------------+-----------------------------------------------------------+
|[1.0,0.5,-1.0]|[0.0,0.0,0.0]                                              |
|[2.0,1.0,1.0] |[0.3333333333333333,0.05263157894736842,0.6666666666666666]|
|[4.0,10.0,2.0]|[1.0,1.0,1.0]                                              |
+--------------+-----------------------------------------------------------+
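Hand-check of one cell (my own arithmetic): in the second column min = 0.5 and max = 10.0, so the value 1.0 maps to the following, matching the table.

// (value - columnMin) / (columnMax - columnMin) for the middle cell.
println((1.0 - 0.5) / (10.0 - 0.5))  // 0.05263157894736842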

4 MaxAbsScaler

MaxAbsScaler transforms each feature into the closed interval [-1, 1] by dividing by the maximum absolute value of that feature. It does not shift or center the distribution, so it preserves the sparsity of the original feature vectors.

import org.apache.spark.ml.feature.MaxAbsScaler

val scaler = new MaxAbsScaler()
  .setInputCol("features")
  .setOutputCol("scaledFeatures")

// Compute summary statistics and generate MaxAbsScalerModel
val scalerModel = scaler.fit(dataFrame)

// rescale each feature to range [-1, 1]
val scaledData = scalerModel.transform(dataFrame)
scaledData.select("features", "scaledFeatures").show()

// The per-column maximum absolute values are [4, 10, 2]
+--------------+----------------+                                               
|      features|  scaledFeatures|
+--------------+----------------+
|[1.0,0.5,-1.0]|[0.25,0.05,-0.5]|
| [2.0,1.0,1.0]|   [0.5,0.1,0.5]|
|[4.0,10.0,2.0]|   [1.0,1.0,1.0]|
+--------------+----------------+
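Hand-check of the first row (my own arithmetic): dividing element-wise by the per-column maximum absolute values [4.0, 10.0, 2.0]:

// Element-wise division by the per-column max absolute values.
val maxAbs = Array(4.0, 10.0, 2.0)
val row0 = Array(1.0, 0.5, -1.0)
println(row0.zip(maxAbs).map { case (x, m) => x / m }.mkString("[", ",", "]"))  // [0.25,0.05,-0.5]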

Summary

All four normalization methods are linear transformations; when a feature has a nonlinear distribution, they need to be combined with other feature preprocessing methods.
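Since all four are standard spark.ml pipeline stages (Normalizer is a Transformer, the three scalers are Estimators), they can also be chained with downstream stages in a Pipeline. A minimal sketch reusing the MinMaxScaler from above (the stage choice is illustrative):

import org.apache.spark.ml.Pipeline
import org.apache.spark.ml.feature.MinMaxScaler

val scalerStage = new MinMaxScaler()
  .setInputCol("features")
  .setOutputCol("scaledFeatures")

// Fitting the Pipeline fits each Estimator stage in order; more stages
// (e.g. a classifier) could be appended to the array.
val pipeline = new Pipeline().setStages(Array(scalerStage))
val pipelineModel = pipeline.fit(dataFrame)
pipelineModel.transform(dataFrame).show()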
