Spark Operations: Transformations (Part 3)

  • Basic transformations

  • Key-value transformations

 

Key-Value Transformations

  • partitionBy(partitioner: Partitioner): RDD[(K, V)]

Repartitions the original RDD according to the given Partitioner.

scala> val rdd = sc.makeRDD(Array((1,"A"),(2,"B"),(3,"C"),(4,"D")), 2)
rdd: org.apache.spark.rdd.RDD[(Int, String)] = ParallelCollectionRDD[0] at makeRDD at <console>:24

# Inspect the elements in each partition
scala> rdd.mapPartitionsWithIndex{
     |     (partIdx, iter) => {
     |         var part_map = scala.collection.mutable.Map[String, List[(Int,String)]]()
     |         while(iter.hasNext){
     |             var part_name = "part_" + partIdx;
     |             var elem = iter.next()
     |             if(part_map.contains(part_name)) {
     |                 var elems = part_map(part_name)
     |                 elems ::= elem
     |                 part_map(part_name) = elems
     |             }
     |             else{
     |                 part_map(part_name) = List[(Int,String)]{elem}
     |             }
     |         }
     |         part_map.iterator
     |     }
     | }.collect
res5: Array[(String, List[(Int, String)])] = Array((part_0,List((2,B), (1,A))), (part_1,List((4,D), (3,C))))

# Repartition with partitionBy
scala> var rddNew = rdd.partitionBy(new org.apache.spark.HashPartitioner(2))
rddNew: org.apache.spark.rdd.RDD[(Int, String)] = ShuffledRDD[2] at partitionBy at <console>:25

# Inspect the elements of each new partition
scala> rddNew.mapPartitionsWithIndex{
     |     (partIdx, iter) => {
     |         var part_map = scala.collection.mutable.Map[String, List[(Int,String)]]()
     |         while(iter.hasNext){
     |             var part_name = "part_" + partIdx;
     |             var elem = iter.next()
     |             if(part_map.contains(part_name)) {
     |                 var elems = part_map(part_name)
     |                 elems ::= elem
     |                 part_map(part_name) = elems
     |             }
     |             else{
     |                 part_map(part_name) = List[(Int,String)]{elem}
     |             }
     |         }
     |         part_map.iterator
     |     }
     | }.collect
res6: Array[(String, List[(Int, String)])] = Array((part_0,List((4,D), (2,B))), (part_1,List((3,C), (1,A))))
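
partitionBy also accepts a user-defined Partitioner. As a minimal sketch (not part of the original example), a hypothetical EvenOddPartitioner could send even keys to partition 0 and odd keys to partition 1:

// Sketch only: a custom Partitioner for the Int-keyed rdd above
import org.apache.spark.Partitioner

class EvenOddPartitioner extends Partitioner {
  override def numPartitions: Int = 2
  override def getPartition(key: Any): Int = key match {
    case k: Int => if (k % 2 == 0) 0 else 1   // even keys -> partition 0, odd keys -> partition 1
    case _      => 0
  }
}

// rdd.partitionBy(new EvenOddPartitioner).glom().collect
// would place (2,B) and (4,D) in one partition and (1,A) and (3,C) in the other.
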
  • mapValues[U](f: (V) => U): RDD[(K, U)]

Similar to map, except that mapValues operates only on the V of each [K, V] pair.

scala> val rdd = sc.makeRDD(Array((1,"A"),(2,"B"),(3,"C"),(4,"D")), 2)
rdd: org.apache.spark.rdd.RDD[(Int, String)] = ParallelCollectionRDD[0] at makeRDD at <console>:24

scala> rdd.mapValues(x => x + "_").collect
res7: Array[(Int, String)] = Array((1,A_), (2,B_), (3,C_), (4,D_))
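
A practical difference from map worth noting (not covered in the original example): because mapValues leaves the keys untouched, it preserves the parent RDD's partitioner, while map does not. A minimal sketch:

// Sketch: mapValues keeps the partitioner set by partitionBy; map drops it
val partitioned = rdd.partitionBy(new org.apache.spark.HashPartitioner(2))
partitioned.mapValues(_ + "_").partitioner                     // Some(HashPartitioner) -- partitioning preserved
partitioned.map { case (k, v) => (k, v + "_") }.partitioner    // None -- Spark cannot assume the keys are unchanged
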
  • flatMapValues[U](f: (V) => TraversableOnce[U]): RDD[(K, U)]

Similar to flatMap, except that flatMapValues operates only on the V of each [K, V] pair.

scala> var rdd =  sc.makeRDD(Array(("A", 1), ("B", 2), ("C", 3), ("D", 4)))
rdd: org.apache.spark.rdd.RDD[(String, Int)] = ParallelCollectionRDD[7] at makeRDD at <console>:24

scala> rdd.flatMapValues(x => x to 5).collect
res9: Array[(String, Int)] = Array((A,1), (A,2), (A,3), (A,4), (A,5), (B,2), (B,3), (B,4), (B,5), (C,3), (C,4), (C,5), (D,4), (D,5))
  • combineByKey[C](createCombiner: (V) => C, mergeValue: (C, V) => C, mergeCombiners: (C, C) => C): RDD[(K, C)]

  • combineByKey[C](createCombiner: (V) => C, mergeValue: (C, V) => C, mergeCombiners: (C, C) => C, numPartitions: Int): RDD[(K, C)]

  • combineByKey[C](createCombiner: (V) => C, mergeValue: (C, V) => C, mergeCombiners: (C, C) => C, partitioner: Partitioner, mapSideCombine: Boolean = true, serializer: Serializer = null): RDD[(K, C)]

The combineByKey operation transforms an RDD[(K, V)] into an RDD[(K, C)]; the types V and C may be the same or different.

    createCombiner: the combiner-creation function, which turns a V into a C; its input is a V from the RDD[(K, V)] and its output is a C;

    mergeValue: the value-merging function, which folds a V into an existing C; its input is (C, V) and its output is a C;

    mergeCombiners: the combiner-merging function, which merges two C values into one C; its input is (C, C) and its output is a C;

    numPartitions: the number of partitions of the resulting RDD; by default it matches the parent RDD;

    partitioner: the partitioner to use, HashPartitioner by default;

    mapSideCombine: whether to combine on the map side (similar to the Combiner in MapReduce), true by default.

scala> var rdd =  sc.makeRDD(Array(("A", 1), ("A", 2), ("A", 3), ("B", 4), ("B", 5), ("C", 6), ("C", 7), ("C", 8), ("C", 9), ("D", 10)))
rdd: org.apache.spark.rdd.RDD[(String, Int)] = ParallelCollectionRDD[13] at makeRDD at <console>:24

scala> rdd.combineByKey(
     |     (v: Int) => v + "_",
     |     (c: String, v: Int) => c + "@" + v,
     |     (c1: String, c2: String) => c1 + "$" + c2
     | ).collect
res12: Array[(String, String)] = Array((D,10_), (A,1_@2$3_), (B,4_@5), (C,6_@7$8_@9))
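
A common use of combineByKey, shown here only as a sketch with made-up data, is computing a per-key average by carrying a (sum, count) pair as the combined type C:

// Sketch: per-key average via combineByKey, with C = (sum, count)
val scores = sc.makeRDD(Array(("A", 1), ("A", 2), ("A", 3), ("B", 4), ("B", 5)))
val avg = scores.combineByKey(
  (v: Int) => (v, 1),                                           // createCombiner: first value of a key in a partition
  (acc: (Int, Int), v: Int) => (acc._1 + v, acc._2 + 1),        // mergeValue: fold another value into the accumulator
  (a: (Int, Int), b: (Int, Int)) => (a._1 + b._1, a._2 + b._2)  // mergeCombiners: merge accumulators across partitions
).mapValues { case (sum, cnt) => sum.toDouble / cnt }
// avg.collect would yield Array((A,2.0), (B,4.5)) (ordering may vary)
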
  • foldByKey(zeroValue: V)(func: (V, V) => V): RDD[(K, V)]

  • foldByKey(zeroValue: V, numPartitions: Int)(func: (V, V) => V): RDD[(K, V)]

  • foldByKey(zeroValue: V, partitioner: Partitioner)(func: (V, V) => V): RDD[(K, V)]

The foldByKey operation folds (merges) the V values of an RDD[(K, V)] by key.

    zeroValue: the initial value; the fold function is first applied to zeroValue and a key's first V to initialize the accumulator, then to the remaining V values. Note that the initial value is applied once in every partition that contains the key (compare res13 and res14 below);

    numPartitions: the number of partitions of the resulting RDD; by default it matches the parent RDD;

    partitioner: the partitioner to use, HashPartitioner by default.

scala> var rdd =  sc.makeRDD(Array(("A", 1), ("A", 2), ("A", 3), ("B", 4), ("B", 5), ("C", 6), ("C", 7), ("C", 8), ("C", 9), ("D", 10)))
rdd: org.apache.spark.rdd.RDD[(String, Int)] = ParallelCollectionRDD[13] at makeRDD at <console>:24

scala> rdd.foldByKey(0)(_+_).collect
res13: Array[(String, Int)] = Array((D,10), (A,6), (B,9), (C,30))

scala> rdd.foldByKey(2)(_+_).collect
res14: Array[(String, Int)] = Array((D,12), (A,10), (B,11), (C,34))

scala> rdd.foldByKey(0)(_*_).collect
res15: Array[(String, Int)] = Array((D,0), (A,0), (B,0), (C,0))

scala> rdd.foldByKey(1)(_*_).collect
res16: Array[(String, Int)] = Array((D,10), (A,6), (B,20), (C,3024))
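
The jump from 6 (res13) to 10 (res14) for key A, rather than to 8, shows that the zero value is applied once per partition holding the key, not once per key overall. A small sketch (with made-up data) makes this visible by fixing the partition count:

// Sketch: with a single partition the zero value is applied exactly once per key
val one = sc.makeRDD(Array(("A", 1), ("A", 2), ("A", 3)), 1)
one.foldByKey(2)(_ + _).collect   // Array((A,8))  -- zero value 2 folded in once
val two = sc.makeRDD(Array(("A", 1), ("A", 2), ("A", 3)), 2)
two.foldByKey(2)(_ + _).collect   // Array((A,10)) -- 2 folded in once per partition containing A
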
  • reduceByKey(func: (V, V) => V): RDD[(K, V)]

  • reduceByKey(func: (V, V) => V, numPartitions: Int): RDD[(K, V)]

  • reduceByKey(partitioner: Partitioner, func: (V, V) => V): RDD[(K, V)]

The reduceByKey operation combines, for each key in an RDD[(K, V)], all of that key's V values with the given function.

scala> var rdd =  sc.makeRDD(Array(("A", 1), ("A", 2), ("A", 3), ("B", 4), ("B", 5), ("C", 6), ("C", 7), ("C", 8), ("C", 9), ("D", 10)))
rdd: org.apache.spark.rdd.RDD[(String, Int)] = ParallelCollectionRDD[9] at makeRDD at <console>:24

scala> rdd.reduceByKey((x, y) => x+y).collect
res17: Array[(String, Int)] = Array((D,10), (A,6), (B,9), (C,30))

scala> rdd.reduceByKey((x, y) => x*y).collect
res18: Array[(String, Int)] = Array((D,10), (A,6), (B,20), (C,3024))
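
Because the function must be associative, reduceByKey can combine values on the map side before shuffling, which is what makes it the natural choice for the classic word count. A sketch (not part of the original post):

// Sketch: word count with reduceByKey
val lines = sc.makeRDD(Seq("spark spark hadoop", "spark hadoop"))
val counts = lines
  .flatMap(_.split(" "))    // split each line into words
  .map(word => (word, 1))   // form (word, 1) pairs
  .reduceByKey(_ + _)       // sum the counts per word, combining map-side first
// counts.collect would yield Array((spark,3), (hadoop,2)) (ordering may vary)
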
  • reduceByKeyLocally(func: (V, V) => V): Map[K, V]

The reduceByKeyLocally operation is similar to reduceByKey, except that it returns a Map[K, V] collected to the driver rather than an RDD[(K, V)].

scala> var rdd =  sc.makeRDD(Array(("A", 1), ("A", 2), ("A", 3), ("B", 4), ("B", 5), ("C", 6), ("C", 7), ("C", 8), ("C", 9), ("D", 10)))
rdd: org.apache.spark.rdd.RDD[(String, Int)] = ParallelCollectionRDD[9] at makeRDD at <console>:24

scala> rdd.reduceByKeyLocally((x, y) => x+y)
res21: scala.collection.Map[String,Int] = Map(A -> 6, B -> 9, C -> 30, D -> 10)

scala> rdd.reduceByKeyLocally((x, y) => x*y)
res22: scala.collection.Map[String,Int] = Map(A -> 6, B -> 20, C -> 3024, D -> 10)
  • groupByKey(): RDD[(K, Iterable[V])]

  • groupByKey(numPartitions: Int): RDD[(K, Iterable[V])]

  • groupByKey(partitioner: Partitioner): RDD[(K, Iterable[V])]

The groupByKey operation collects, for each key in an RDD[(K, V)], all of that key's V values into a single Iterable[V].

scala> var rdd = sc.makeRDD(Array(("A", 1), ("A", 2), ("A", 3), ("B", 4), ("B", 5), ("C", 6), ("C", 7), ("C", 8), ("C", 9), ("D", 10)))
rdd: org.apache.spark.rdd.RDD[(String, Int)] = ParallelCollectionRDD[24] at makeRDD at <console>:24

scala> rdd.groupByKey.collect
res23: Array[(String, Iterable[Int])] = Array((D,CompactBuffer(10)), (A,CompactBuffer(1, 2, 3)), (B,CompactBuffer(4, 5)), (C,CompactBuffer(6, 7, 8, 9)))
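
For pure aggregation, reduceByKey is usually preferable to groupByKey followed by a reduction, because groupByKey shuffles every value across the network. A sketch of the comparison on the rdd above:

// Sketch: both lines compute per-key sums, but the second shuffles far less data
rdd.groupByKey().mapValues(_.sum).collect   // groups all values, then sums on the reduce side
rdd.reduceByKey(_ + _).collect              // combines partial sums map-side before the shuffle
// Both would yield Array((D,10), (A,6), (B,9), (C,30)) (ordering may vary)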

 

 
