Spark Operations: Transformations (3)

  • Basic transformations

  • Key-value transformations

 

Key-value transformations

  • partitionBy(partitioner: Partitioner):RDD[(K,V)]

Repartitions the original RDD according to the given Partitioner.

scala> val rdd = sc.makeRDD(Array((1,"A"),(2,"B"),(3,"C"),(4,"D")), 2)
rdd: org.apache.spark.rdd.RDD[(Int, String)] = ParallelCollectionRDD[0] at makeRDD at <console>:24

# Inspect the elements of each partition
scala> rdd.mapPartitionsWithIndex{
     |     (partIdx, iter) => {
     |         var part_map = scala.collection.mutable.Map[String, List[(Int,String)]]()
     |         while(iter.hasNext){
     |             var part_name = "part_" + partIdx;
     |             var elem = iter.next()
     |             if(part_map.contains(part_name)) {
     |                 var elems = part_map(part_name)
     |                 elems ::= elem
     |                 part_map(part_name) = elems
     |             }
     |             else{
     |                 part_map(part_name) = List[(Int,String)]{elem}
     |             }
     |         }
     |         part_map.iterator
     |     }
     | }.collect
res5: Array[(String, List[(Int, String)])] = Array((part_0,List((2,B), (1,A))), (part_1,List((4,D), (3,C))))

# Repartition with partitionBy
scala> var rddNew = rdd.partitionBy(new org.apache.spark.HashPartitioner(2))
rddNew: org.apache.spark.rdd.RDD[(Int, String)] = ShuffledRDD[2] at partitionBy at <console>:25

# Inspect the elements of each new partition
scala> rddNew.mapPartitionsWithIndex{
     |     (partIdx, iter) => {
     |         var part_map = scala.collection.mutable.Map[String, List[(Int,String)]]()
     |         while(iter.hasNext){
     |             var part_name = "part_" + partIdx;
     |             var elem = iter.next()
     |             if(part_map.contains(part_name)) {
     |                 var elems = part_map(part_name)
     |                 elems ::= elem
     |                 part_map(part_name) = elems
     |             }
     |             else{
     |                 part_map(part_name) = List[(Int,String)]{elem}
     |             }
     |         }
     |         part_map.iterator
     |     }
     | }.collect
res6: Array[(String, List[(Int, String)])] = Array((part_0,List((4,D), (2,B))), (part_1,List((3,C), (1,A))))
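
Besides the built-in HashPartitioner, partitionBy accepts any user-defined Partitioner. Below is a minimal sketch (the EvenOddPartitioner name is made up for illustration) that sends even integer keys to partition 0 and odd keys to partition 1:

import org.apache.spark.Partitioner

// Hypothetical custom partitioner: even integer keys go to partition 0, odd keys to partition 1.
class EvenOddPartitioner extends Partitioner {
  override def numPartitions: Int = 2
  override def getPartition(key: Any): Int = key match {
    case k: Int => if (k % 2 == 0) 0 else 1
    case _      => 0
  }
}

val rdd = sc.makeRDD(Array((1, "A"), (2, "B"), (3, "C"), (4, "D")), 2)
rdd.partitionBy(new EvenOddPartitioner).glom().collect()
// e.g. Array(Array((2,B), (4,D)), Array((1,A), (3,C)))
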
  • mapValues[U](f: (V) => U): RDD[(K, U)]

Similar to map, except that mapValues operates only on the V value of each [K, V] pair.

scala> val rdd = sc.makeRDD(Array((1,"A"),(2,"B"),(3,"C"),(4,"D")), 2)
rdd: org.apache.spark.rdd.RDD[(Int, String)] = ParallelCollectionRDD[0] at makeRDD at <console>:24

scala> rdd.mapValues(x => x + "_").collect
res7: Array[(Int, String)] = Array((1,A_), (2,B_), (3,C_), (4,D_))
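
One practical difference from map worth noting: because mapValues cannot change the key, it preserves the parent RDD's partitioner, whereas map discards it. A small sketch:

// mapValues keeps the parent partitioner, so later key-based operations may avoid a shuffle.
val partitioned = rdd.partitionBy(new org.apache.spark.HashPartitioner(2))
partitioned.mapValues(_ + "_").partitioner                     // Some(HashPartitioner)
partitioned.map { case (k, v) => (k, v + "_") }.partitioner    // None
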
  • flatMapValues[U](f: (V) => TraversableOnce[U]): RDD[(K, U)]

Similar to flatMap, except that flatMapValues operates only on the V value of each [K, V] pair.

scala> var rdd =  sc.makeRDD(Array(("A", 1), ("B", 2), ("C", 3), ("D", 4)))
rdd: org.apache.spark.rdd.RDD[(String, Int)] = ParallelCollectionRDD[7] at makeRDD at <console>:24

scala> rdd.flatMapValues(x => x to 5).collect
res9: Array[(String, Int)] = Array((A,1), (A,2), (A,3), (A,4), (A,5), (B,2), (B,3), (B,4), (B,5), (C,3), (C,4), (C,5), (D,4), (D,5))
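
A common use, sketched here with made-up data: splitting a delimited string value into several records that all keep the original key.

val pairs = sc.makeRDD(Array(("A", "1,2,3"), ("B", "4,5")))
pairs.flatMapValues(_.split(",")).collect
// e.g. Array((A,1), (A,2), (A,3), (B,4), (B,5)) -- the values are Strings here
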
  • combineByKey[C](createCombiner: (V) => C, mergeValue: (C, V) => C, mergeCombiners: (C, C) => C): RDD[(K, C)]

  • combineByKey[C](createCombiner: (V) => C, mergeValue: (C, V) => C, mergeCombiners: (C, C) => C, numPartitions: Int): RDD[(K, C)]

  • combineByKey[C](createCombiner: (V) => C, mergeValue: (C, V) => C, mergeCombiners: (C, C) => C, partitioner: Partitioner, mapSideCombine: Boolean = true, serializer: Serializer = null): RDD[(K, C)]

combineByKey transforms an RDD[(K, V)] into an RDD[(K, C)]; the type C may be the same as V or different.

    createCombiner: the combiner-creation function, which turns a V into a C; its input is a V from the RDD[(K, V)] and its output is a C;

    mergeValue: the value-merging function, which folds a V into an existing C; its input is (C, V) and its output is a C;

    mergeCombiners: the combiner-merging function, which merges two C values into one; its input is (C, C) and its output is a C;

    numPartitions: the number of partitions of the output RDD; by default it matches the parent RDD;

    partitioner: the partitioning function; HashPartitioner by default;

    mapSideCombine: whether to combine on the map side (similar to a Combiner in MapReduce); true by default.

scala> var rdd =  sc.makeRDD(Array(("A", 1), ("A", 2), ("A", 3), ("B", 4), ("B", 5), ("C", 6), ("C", 7), ("C", 8), ("C", 9), ("D", 10)))
rdd: org.apache.spark.rdd.RDD[(String, Int)] = ParallelCollectionRDD[13] at makeRDD at <console>:24

scala> rdd.combineByKey(
     |     (v: Int) => v + "_",
     |     (c: String, v: Int) => c + "@" + v,
     |     (c1: String, c2: String) => c1 + "$" + c2
     | ).collect
res12: Array[(String, String)] = Array((D,10_), (A,1_@2$3_), (B,4_@5), (C,6_@7$8_@9))
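
A more typical use than the string-concatenation demo above is computing a per-key average, where the combiner type C (a sum/count pair) differs from the value type V. A minimal sketch:

val scores = sc.makeRDD(Array(("A", 1), ("A", 2), ("A", 3), ("B", 4), ("B", 5)))
val avg = scores.combineByKey(
  (v: Int) => (v, 1),                                           // createCombiner: a key's first value becomes (sum, count)
  (acc: (Int, Int), v: Int) => (acc._1 + v, acc._2 + 1),        // mergeValue: fold another value into the pair
  (a: (Int, Int), b: (Int, Int)) => (a._1 + b._1, a._2 + b._2)  // mergeCombiners: merge pairs from different partitions
).mapValues { case (sum, count) => sum.toDouble / count }
avg.collect
// e.g. Array((A,2.0), (B,4.5))
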
  • foldByKey(zeroValue: V)(func: (V, V) => V): RDD[(K, V)]

  • foldByKey(zeroValue: V, numPartitions: Int)(func: (V, V) => V): RDD[(K, V)]

  • foldByKey(zeroValue: V, partitioner: Partitioner)(func: (V, V) => V): RDD[(K, V)]

foldByKey folds (merges) the V values of an RDD[(K, V)] by key.

    zeroValue: the initial value; within each partition, the folding function is first applied to zeroValue and a key's first V (initializing that key's accumulator), then applied to the remaining V values. Note that zeroValue is folded in once per key per partition, which is why a non-identity zero value changes the result (see the foldByKey(2)(_+_) example below);

    numPartitions: the number of partitions of the output RDD; by default it matches the parent RDD;

    partitioner: the partitioning function; HashPartitioner by default.

scala> var rdd =  sc.makeRDD(Array(("A", 1), ("A", 2), ("A", 3), ("B", 4), ("B", 5), ("C", 6), ("C", 7), ("C", 8), ("C", 9), ("D", 10)))
rdd: org.apache.spark.rdd.RDD[(String, Int)] = ParallelCollectionRDD[13] at makeRDD at <console>:24

scala> rdd.foldByKey(0)(_+_).collect
res13: Array[(String, Int)] = Array((D,10), (A,6), (B,9), (C,30))

scala> rdd.foldByKey(2)(_+_).collect
res14: Array[(String, Int)] = Array((D,12), (A,10), (B,11), (C,34))

scala> rdd.foldByKey(0)(_*_).collect
res15: Array[(String, Int)] = Array((D,0), (A,0), (B,0), (C,0))

scala> rdd.foldByKey(1)(_*_).collect
res16: Array[(String, Int)] = Array((D,10), (A,6), (B,20), (C,3024))
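
As the two multiplication examples show, zeroValue must be an identity element of the folding function, otherwise it distorts the result. The same pattern works for other associative functions, for example a per-key maximum with Int.MinValue as the zero value:

rdd.foldByKey(Int.MinValue)(math.max).collect
// e.g. Array((D,10), (A,3), (B,5), (C,9))
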
  • reduceByKey(func: (V, V) => V): RDD[(K, V)]

  • reduceByKey(func: (V, V) => V, numPartitions: Int): RDD[(K, V)]

  • reduceByKey(partitioner: Partitioner, func: (V, V) => V): RDD[(K, V)]

reduceByKey combines, with the given function, all V values of an RDD[(K, V)] that share the same K.

scala> var rdd = sc.makeRDD(Array(("A", 1), ("A", 2), ("A", 3), ("B", 4), ("B", 5), ("C", 6), ("C", 7), ("C", 8), ("C", 9), ("D", 10)))
rdd: org.apache.spark.rdd.RDD[(String, Int)] = ParallelCollectionRDD[9] at makeRDD at <console>:24

scala> rdd.reduceByKey((x, y) => x+y).collect
res17: Array[(String, Int)] = Array((D,10), (A,6), (B,9), (C,30))

scala> rdd.reduceByKey((x, y) => x*y).collect
res18: Array[(String, Int)] = Array((D,10), (A,6), (B,20), (C,3024))
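
reduceByKey is essentially combineByKey with createCombiner being the identity function, so values are also combined on the map side before the shuffle. The classic word-count pattern is a sketch of its most common use:

val words = sc.makeRDD(Array("spark", "rdd", "spark", "scala", "rdd", "spark"))
words.map(w => (w, 1)).reduceByKey(_ + _).collect
// e.g. Array((scala,1), (rdd,2), (spark,3))
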
  • reduceByKeyLocally(func: (V, V) => V): Map[K, V]

reduceByKeyLocally is similar to reduceByKey, except that it returns a Map[K, V] to the driver instead of an RDD[(K, V)].

scala> var rdd = sc.makeRDD(Array(("A", 1), ("A", 2), ("A", 3), ("B", 4), ("B", 5), ("C", 6), ("C", 7), ("C", 8), ("C", 9), ("D", 10)))
rdd: org.apache.spark.rdd.RDD[(String, Int)] = ParallelCollectionRDD[9] at makeRDD at <console>:24

scala> rdd.reduceByKeyLocally((x, y) => x+y)
res21: scala.collection.Map[String,Int] = Map(A -> 6, B -> 9, C -> 30, D -> 10)

scala> rdd.reduceByKeyLocally((x, y) => x*y)
res22: scala.collection.Map[String,Int] = Map(A -> 6, B -> 20, C -> 3024, D -> 10)
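
Since reduceByKeyLocally is an action whose result is an ordinary Scala Map on the driver, it can be consumed directly without another Spark action, but it is only appropriate when the number of distinct keys fits comfortably in driver memory. A short sketch:

val totals = rdd.reduceByKeyLocally(_ + _)   // scala.collection.Map[String, Int]
totals.getOrElse("A", 0)                     // Int = 6
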
  • groupByKey(): RDD[(K, Iterable[V])]

  • groupByKey(numPartitions: Int): RDD[(K, Iterable[V])]

  • groupByKey(partitioner: Partitioner): RDD[(K, Iterable[V])]

groupByKey collects all V values of an RDD[(K, V)] that share the same K into one Iterable[V].

scala> var rdd = sc.makeRDD(Array(("A", 1), ("A", 2), ("A", 3), ("B", 4), ("B", 5), ("C", 6), ("C", 7), ("C", 8), ("C", 9), ("D", 10)))
rdd: org.apache.spark.rdd.RDD[(String, Int)] = ParallelCollectionRDD[24] at makeRDD at <console>:24

scala> rdd.groupByKey.collect
res23: Array[(String, Iterable[Int])] = Array((D,CompactBuffer(10)), (A,CompactBuffer(1, 2, 3)), (B,CompactBuffer(4, 5)), (C,CompactBuffer(6, 7, 8, 9)))
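
groupByKey ships every value across the network without map-side combining. It can express an aggregation when followed by mapValues, but for aggregations reduceByKey or foldByKey is usually preferred, as sketched below:

// Aggregation via groupByKey: every value is shuffled first, then summed per key.
rdd.groupByKey.mapValues(_.sum).collect
// e.g. Array((D,10), (A,6), (B,9), (C,30))

// Equivalent result with reduceByKey, which pre-aggregates on the map side before the shuffle.
rdd.reduceByKey(_ + _).collect
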

 

 
