Spark Operations: Transformations (Part 1)

  1. Basic transformations

  2. Key-value transformations

 

Basic Transformations

  • map[U](f:(T)=>U):RDD[U]

Applies the specified function to every element of the RDD, producing a new RDD.

scala> var rdd = sc.textFile("/Users/lyf/Desktop/test/data1.txt")
rdd: org.apache.spark.rdd.RDD[String] = /Users/lyf/Desktop/test/data1.txt MapPartitionsRDD[13] at textFile at <console>:24

scala> rdd.map(line => line.split(" ")).collect
res16: Array[Array[String]] = Array(Array(Hello, World), Array(Hello, Tom), Array(Hello, Jerry))
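A small follow-up sketch (assuming the same spark-shell session, where sc and the rdd above are still in scope): map can also change the element type, for example mapping each line to its length turns an RDD[String] into an RDD[Int].

// Sketch: map changes the element type (RDD[String] => RDD[Int])
val lineLengths = rdd.map(line => line.length)
lineLengths.collect   // for the three lines above: Array(11, 9, 11)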
  • distinct(): RDD[T]

Removes duplicate elements from the RDD, returning a new RDD in which every element is unique.

scala> var rdd = sc.parallelize(List(1,2,2,3,3,3,4,5))
rdd: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[15] at parallelize at <console>:24

scala> rdd.distinct.collect
res18: Array[Int] = Array(4, 1, 5, 2, 3)
  • distinct(numPartitions: Int): RDD[T]

Same as distinct(), but the resulting RDD has numPartitions partitions.

scala> var rdd = sc.parallelize(List(1,2,2,3,3,3,4,5))
rdd: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[15] at parallelize at <console>:24

scala> var rddDistinct = rdd.distinct
rddDistinct: org.apache.spark.rdd.RDD[Int] = MapPartitionsRDD[27] at distinct at <console>:25

scala> rddDistinct.partitions.size
res21: Int = 4

scala> var rddDistinct = rdd.distinct(3)
rddDistinct: org.apache.spark.rdd.RDD[Int] = MapPartitionsRDD[30] at distinct at <console>:25

scala> rddDistinct.partitions.size
res22: Int = 3
  • flatMap[U](f:(T)=>TraversableOnce[U]):RDD[U]

Similar to map, but each input element can be mapped to zero or more output elements, and the results are flattened into a single new RDD.

scala> var rdd = sc.textFile("/Users/lyf/Desktop/test/data1.txt")
rdd: org.apache.spark.rdd.RDD[String] = /Users/lyf/Desktop/test/data1.txt MapPartitionsRDD[32] at textFile at <console>:24

scala> rdd.flatMap(line => line.split(" ")).collect
res23: Array[String] = Array(Hello, World, Hello, Tom, Hello, Jerry)
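To make the contrast with map more concrete, here is a small sketch (assuming sc from a spark-shell session): each number x is expanded into the sequence 1 to x, and the resulting sequences are flattened into one RDD.

// Sketch: flatMap maps each element to zero or more elements and flattens the results
val nums = sc.parallelize(1 to 3)
nums.flatMap(x => 1 to x).collect   // Array(1, 1, 2, 1, 2, 3)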

  • coalesce(numPartitions:Int, shuffle:Boolean=false):RDD[T]

  • repartition(numPartitions:Int):RDD[T]

Both repartition the RDD. For coalesce, the first parameter is the target number of partitions and the second is whether to shuffle (default false); when shuffle is true the data is redistributed using a HashPartitioner. repartition is simply coalesce with shuffle set to true.

scala> var rdd = sc.textFile("/Users/lyf/Desktop/test/data1.txt")
rdd: org.apache.spark.rdd.RDD[String] = /Users/lyf/Desktop/test/data1.txt MapPartitionsRDD[35] at textFile at <console>:24

scala> rdd.partitions.size
res24: Int = 2

scala> var rdd_1 = rdd.coalesce(1)
rdd_1: org.apache.spark.rdd.RDD[String] = CoalescedRDD[36] at coalesce at <console>:25

// If the target partition count is greater than the current one, the second parameter must be true; otherwise the partition count stays unchanged
scala> var rdd_2 = rdd.coalesce(3)
rdd_2: org.apache.spark.rdd.RDD[String] = CoalescedRDD[37] at coalesce at <console>:25

scala> rdd_2.partitions.size
res26: Int = 2

scala> var rdd_2 = rdd.coalesce(5, true)
rdd_2: org.apache.spark.rdd.RDD[String] = MapPartitionsRDD[42] at coalesce at <console>:25

scala> rdd_2.partitions.size
res28: Int = 5

scala> var rdd_3 = rdd.repartition(5)
rdd_3: org.apache.spark.rdd.RDD[String] = MapPartitionsRDD[46] at repartition at <console>:25

scala> rdd_3.partitions.size
res29: Int = 5
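As a usage note (a sketch, not part of the original example): coalesce without shuffle is the cheap way to shrink the partition count, for instance after a filter has left many near-empty partitions, while increasing the partition count requires a shuffle.

// Sketch: shrink partitions cheaply after filtering (narrow dependency, no shuffle)
val filtered = rdd.filter(line => line.nonEmpty)
val compacted = filtered.coalesce(1)
// Growing the partition count needs a shuffle
val widened = filtered.repartition(4)   // equivalent to coalesce(4, shuffle = true)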
  • randomSplit(weights: Array[Double], seed: Long=Utils.random.nextLong):Array[RDD[T]]

Splits one RDD into multiple RDDs according to the weights array, returning an array of RDDs. The higher an entry's weight, the greater the probability that elements are assigned to the corresponding split.

scala> var rdd = sc.parallelize(1 to 10, 10)
rdd: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[47] at parallelize at <console>:24

// Split the original RDD into an array of new RDDs according to the weights
scala> var rddSplit = rdd.randomSplit(Array(1.0, 2.0, 3.0, 4.0))
rddSplit: Array[org.apache.spark.rdd.RDD[Int]] = Array(MapPartitionsRDD[48] at randomSplit at <console>:25, MapPartitionsRDD[49] at randomSplit at <console>:25, MapPartitionsRDD[50] at randomSplit at <console>:25, MapPartitionsRDD[51] at randomSplit at <console>:25)

scala> rddSplit.size
res30: Int = 4

scala> rddSplit(0).collect
res31: Array[Int] = Array()

scala> rddSplit(1).collect
res32: Array[Int] = Array(3, 8)

scala> rddSplit(2).collect
res33: Array[Int] = Array(1, 2, 9)

scala> rddSplit(3).collect
res34: Array[Int] = Array(4, 5, 6, 7, 10)
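A common use of randomSplit is building training and test sets. The sketch below (same spark-shell assumptions) splits the data 80/20 with a fixed seed so the split is reproducible; because the split is random per element, the actual sizes only approximate the weights.

// Sketch: reproducible 80/20 split
val Array(train, test) = rdd.randomSplit(Array(0.8, 0.2), seed = 42L)
train.count + test.count   // equals rdd.count; each element lands in exactly one split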
  • glom():RDD[Array[T]]

Turns the data of type T in each partition of the RDD into an array Array[T], producing an RDD[Array[T]] with one array per partition.

scala> var rdd = sc.parallelize(1 to 10, 4)
rdd: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[52] at parallelize at <console>:24

scala> rdd.collect
res36: Array[Int] = Array(1, 2, 3, 4, 5, 6, 7, 8, 9, 10)

scala> rdd.glom().collect
res37: Array[Array[Int]] = Array(Array(1, 2), Array(3, 4, 5), Array(6, 7), Array(8, 9, 10))
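glom is mainly useful for inspecting or aggregating data per partition. For example (a sketch based on the output above), the maximum of each partition can be read off the per-partition arrays:

// Sketch: per-partition maximum via glom
rdd.glom().map(arr => arr.max).collect   // for the partitions above: Array(2, 5, 7, 10)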
  • union(other: RDD[T]): RDD[T]

Returns the union of the two RDDs; duplicate elements are not removed.

scala> var rdd1 = sc.makeRDD(1 to 3, 1)
rdd1: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[54] at makeRDD at <console>:24

scala> var rdd2 = sc.makeRDD(2 to 5, 1)
rdd2: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[55] at makeRDD at <console>:24

scala> rdd1.union(rdd2).collect
res38: Array[Int] = Array(1, 2, 3, 2, 3, 4, 5)
  • intersection(other: RDD[T]): RDD[T]

  • intersection(other: RDD[T], numPartitions:Int):RDD[T]

  • intersection(other: RDD[T], partitioner: Partitioner): RDD[T]

Returns the intersection of the two RDDs; duplicate elements are removed from the result. The numPartitions parameter specifies the number of partitions, and partitioner specifies the partitioner to use.

scala> rdd1.intersection(rdd2).collect
res39: Array[Int] = Array(3, 2)
  • subtract(other: RDD[T]): RDD[T]

  • subtract(other: RDD[T], numPartitions:Int): RDD[T]

  • subtract(other: RDD[T], p: Partitioner): RDD[T]

Returns the difference of the two RDDs (elements of this RDD that are not in the other); duplicate elements are not removed.

scala> rdd1.subtract(rdd2).collect
res40: Array[Int] = Array(1)
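Unlike intersection, subtract keeps duplicates from the left-hand RDD. The following sketch (hypothetical data, same spark-shell assumptions) illustrates this:

// Sketch: subtract does not deduplicate the left-hand side
val left = sc.parallelize(List(1, 1, 2, 3))
val right = sc.parallelize(List(2, 3))
left.subtract(right).collect   // expected: Array(1, 1)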

 
