All RDD transformation operators covered here:
map, flatMap, mapPartitions, mapPartitionsWithIndex, filter, sample, union, intersection, distinct, cartesian, pipe, coalesce, repartition, repartitionAndSortWithinPartitions, glom, randomSplit
Detailed explanations and examples follow.
1. map
Processes each element; one input element yields exactly one output element. Applies a function to every element of the RDD, and the return values form the new RDD.
val sparkConf = new SparkConf().setAppName("transformations examples").setMaster("local[*]")
val sc = new SparkContext(sparkConf)
val inputRDD = sc.textFile("examples/src/main/resources/data.txt")
val mapRDD = inputRDD.map(line => line.split(" "))
mapRDD.foreach(array => println(array(0)))
sc.stop()
2. flatMap
Processes each element; one input element yields an iterator of results. Applies a function to every element of the RDD and flattens the contents of all returned iterators into the new RDD, i.e. the nested results are merged into a single flat collection. Commonly used to split lines into words.
val sparkConf = new SparkConf().setAppName("transformations examples").setMaster("local[*]")
val sc = new SparkContext(sparkConf)
val inputRDD = sc.textFile("examples/src/main/resources/data.txt")
val mapRDD = inputRDD.flatMap(line => line.split(" "))
mapRDD.foreach(println)
sc.stop()
3. mapPartitions
mapPartitions (and its action counterpart foreachPartition) operates on the iterator of each partition of the RDD rather than on individual elements. When the mapped function must repeatedly create expensive objects (for example, when writing RDD data to a database over JDBC, map would open one connection per element while mapPartitions opens one connection per partition), mapPartitions is much more efficient than map.
Spark SQL / DataFrame programs get this partition-level optimization by default.
val sparkConf = new SparkConf().setAppName("transformations examples").setMaster("local[*]")
val sc = new SparkContext(sparkConf)
val a = sc.parallelize(1 to 9, 3)
val result = a.mapPartitions(iter => {
  // Transform the whole partition at once: one (x, 2x) pair per element,
  // preserving the original element order within the partition
  iter.map(cur => (cur, cur * 2))
})
println(result.collect().mkString(" "))
println(result.first())
sc.stop()
4. mapPartitionsWithIndex
Similar to mapPartitions, but the function also receives the partition index.
val sparkConf = new SparkConf().setAppName("transformations examples").setMaster("local[*]")
val sc = new SparkContext(sparkConf)
val a = sc.parallelize(1 to 9, 3)
val result = a.mapPartitionsWithIndex((index: Int, iter: Iterator[Int]) => {
  // Tag each element with the index of the partition it came from
  iter.map(cur => (index, cur, cur * 2))
})
println(result.collect().mkString(" "))
println(result.first())
sc.stop()
5. filter
Filtering. Takes a predicate function and returns a new RDD containing only the elements that satisfy it.
val sparkConf = new SparkConf().setAppName("transformations examples").setMaster("local[*]")
val sc = new SparkContext(sparkConf)
val inputRDD = sc.textFile("examples/src/main/resources/data.txt")
val mapRDD = inputRDD.filter(line => line.contains("li"))
mapRDD.take(10).foreach(println)
sc.stop()
6. sample
sample(withReplacement, fraction, seed): randomly samples the data using the given random seed. fraction is the expected fraction of elements to keep (not an exact count); withReplacement controls whether sampling is done with replacement (true) or without (false).
val sparkConf = new SparkConf().setAppName("transformations examples").setMaster("local[*]")
val sc = new SparkContext(sparkConf)
val rdd = sc.parallelize(1 to 10)
// Randomly sample about 50% of the data from the RDD, without replacement
val value = rdd.sample(false, 0.5, 0)
value.collect.foreach(x => print(x + " "))
sc.stop()
7. union
Merges the datasets of two RDDs and returns their union; duplicate elements are kept (no deduplication is performed).
val sparkConf = new SparkConf().setAppName("transformations examples").setMaster("local[*]")
val sc = new SparkContext(sparkConf)
val rdd1 = sc.parallelize(1 to 3)
val rdd2 = sc.parallelize(3 to 5)
val unionRDD = rdd1.union(rdd2)
unionRDD.collect.foreach(x => print(x + " "))
sc.stop()
8. intersection
Returns the intersection of two RDDs.
val sparkConf = new SparkConf().setAppName("transformations examples").setMaster("local[*]")
val sc = new SparkContext(sparkConf)
val rdd1 = sc.parallelize(1 to 3)
val rdd2 = sc.parallelize(2 to 5)
val rdd = rdd1.intersection(rdd2)
rdd.foreach(println)
sc.stop()
9. distinct
Removes duplicate elements from the RDD.
val sparkConf = new SparkConf().setAppName("transformations examples").setMaster("local[*]")
val sc = new SparkContext(sparkConf)
val list = List(1, 2, 3, 1, 3, 4, 5)
val rdd = sc.parallelize(list)
val rdd2 = rdd.distinct()
rdd2.foreach(println)
sc.stop()
10. cartesian
Computes the Cartesian product of all elements of two RDDs.
val sparkConf = new SparkConf().setAppName("transformations examples").setMaster("local[*]")
val sc = new SparkContext(sparkConf)
val rdd1 = sc.parallelize(1 to 3)
val rdd2 = sc.parallelize(4 to 5)
val cartesianRDD = rdd1.cartesian(rdd2)
cartesianRDD.foreach(println)
sc.stop()
11. pipe
Pipes each partition of the RDD through a shell command (e.g., a Perl or bash script). The RDD's elements are written to the process's stdin, and the lines of its stdout are returned as an RDD of strings.
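The section above has no example, so here is a minimal sketch. It assumes a Unix-like system where the `cat` command is on the PATH of every executor; since `cat` echoes its stdin unchanged, the piped RDD should contain the same lines as the input.

```scala
import org.apache.spark.{SparkConf, SparkContext}

object PipeExample {
  def main(args: Array[String]): Unit = {
    val sparkConf = new SparkConf().setAppName("transformations examples").setMaster("local[*]")
    val sc = new SparkContext(sparkConf)
    val rdd = sc.parallelize(Seq("hello", "world"), 2)
    // Each element is written to the command's stdin, one per line;
    // each line of the command's stdout becomes one element of the result
    val piped = rdd.pipe("cat") // assumption: "cat" exists on the workers
    piped.collect().foreach(println)
    sc.stop()
  }
}
```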
12. coalesce
Repartitions the RDD. The shuffle flag defaults to false; with shuffle=false the number of partitions cannot be increased. No error is raised in that case; the partition count simply stays as it was.
val sparkConf = new SparkConf().setAppName("transformations examples").setMaster("local[*]")
val sc = new SparkContext(sparkConf)
val rdd = sc.parallelize(1 to 16, 4)
val coalesceRDD = rdd.coalesce(3)
val coalesceRDD2 = rdd.coalesce(5, false) // with shuffle=false the partition count cannot grow (4 -> 5 has no effect)
val coalesceRDD3 = rdd.coalesce(5, true)
println("Partition count after coalesce: " + coalesceRDD.partitions.size)
println("Partition count after coalesce: " + coalesceRDD2.partitions.size)
println("Partition count after coalesce: " + coalesceRDD3.partitions.size)
sc.stop()
13. repartition
repartition(numPartitions) simply calls coalesce(numPartitions, shuffle = true).
val sparkConf = new SparkConf().setAppName("transformations examples").setMaster("local[*]")
val sc = new SparkContext(sparkConf)
val rdd = sc.parallelize(1 to 16, 4)
val reRDD = rdd.repartition(5)
println(reRDD.partitions.size)
sc.stop()
14. repartitionAndSortWithinPartitions
Repartitions the RDD according to the given partitioner and sorts the records by key within each resulting partition. This is more efficient than calling repartition and then sorting within each partition, because the sort can be pushed down into the shuffle machinery.
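This operator also lacks an example above; a minimal sketch follows. It is only defined on key-value RDDs (via OrderedRDDFunctions), and here uses a HashPartitioner with 2 partitions as an illustrative choice: keys hash into a partition, and within each partition records come out sorted by key.

```scala
import org.apache.spark.{HashPartitioner, SparkConf, SparkContext}

object RepartitionAndSortExample {
  def main(args: Array[String]): Unit = {
    val sparkConf = new SparkConf().setAppName("transformations examples").setMaster("local[*]")
    val sc = new SparkContext(sparkConf)
    // Key-value pairs with keys deliberately out of order
    val pairs = sc.parallelize(Seq((3, "c"), (1, "a"), (4, "d"), (2, "b")), 2)
    // Repartition by key hash and sort by key during the shuffle itself
    val sorted = pairs.repartitionAndSortWithinPartitions(new HashPartitioner(2))
    // Print each partition on its own line to show the per-partition key order
    sorted.glom().collect().foreach(part => println(part.mkString(" ")))
    sc.stop()
  }
}
```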
15. glom
Turns the elements of type T in each partition of the RDD into an array Array[T].
val sparkConf = new SparkConf().setAppName("transformations examples").setMaster("local[*]")
val sc = new SparkContext(sparkConf)
val rdd = sc.parallelize(1 to 16,4)
val glomRDD = rdd.glom() //RDD[Array[T]]
glomRDD.foreach(rdd => {
for (r <- rdd) {
print(r + " ")
}
println()
}
)
sc.stop()
16. randomSplit
Splits one RDD into several RDDs according to the given weights; the higher a split's weight, the more likely it is to receive more of the elements.
val sparkConf = new SparkConf().setAppName("transformations examples").setMaster("local[*]")
val sc = new SparkContext(sparkConf)
val rdd = sc.parallelize(1 to 10)
val randomSplitRDD = rdd.randomSplit(Array(1.0,2.0,7.0))
randomSplitRDD(0).foreach(x => print(x +" "))
println()
randomSplitRDD(1).foreach(x => print(x +" "))
println()
randomSplitRDD(2).foreach(x => print(x + " "))
sc.stop()