4. Common RDD Operations

RDD

Core concept: Resilient Distributed Dataset

Unlike Map/Reduce, which always works with key-value pairs, a Spark RDD can hold data of any type, much like a table in a database. An RDD is immutable: a transformation returns a brand-new RDD and leaves the original untouched.
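A minimal sketch of this immutability in the spark-shell (sc is the SparkContext the shell provides):

val base = sc.parallelize(List(1, 2, 3))
val doubled = base.map(_ * 2)   // transformation: yields a brand-new RDD
base.collect                    // original unchanged: Array(1, 2, 3)
doubled.collect                 // Array(2, 4, 6)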

RDDs support two kinds of operations:

  1. Transformations

map, filter, flatMap, groupByKey, reduceByKey, aggregateByKey, pipe, coalesce

  2. Actions

reduce, collect, count, first, take, countByKey, foreach
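The practical difference: a transformation returns another RDD (and runs nothing yet), while an action returns a plain value to the driver. A minimal sketch:

val r = sc.parallelize(List(1, 2, 3))
val mapped = r.map(_ + 1)   // transformation: result is an RDD, no job runs
mapped.reduce(_ + _)        // action: a job runs and an Int (9) comes back to the driver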

Categories

Two basic kinds of RDDs

  • Parallelized Collections

Built from an existing Scala collection:

sc.parallelize(List(1,2,3,4,5,6)).sum()
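parallelize also accepts an optional second argument giving the number of partitions to slice the collection into (a small sketch; by default the count comes from the cluster configuration):

sc.parallelize(1 to 10, 4).partitions.size
// Int = 4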
  • Hadoop Datasets

Any storage system Hadoop supports will do, e.g. local files, HDFS, Cassandra, HBase, or Amazon S3, in plain text, Sequence File, or any other Hadoop InputFormat.

Common methods

  • textFile
  • sequenceFile: turns a Hadoop sequence file into an RDD
  • hadoopRDD: turns arbitrary Hadoop input into an RDD; each HDFS block maps to one RDD partition
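For example, reading a text file (the word.data path is reused from the cache demo below; pairs.seq is a hypothetical sequence file of (String, Int) pairs):

val lines = sc.textFile("hdfs://localhost:9000/data/word.data")                   // RDD[String], one element per line
val pairs = sc.sequenceFile[String, Int]("hdfs://localhost:9000/data/pairs.seq")  // hypothetical path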

RDD operation demos

  • Transformations: map/filter/flatMap
val num = sc.parallelize(List(1,2,3,4,5,6,7,8))
num.map(_*2).collect
//res10: Array[Int] = Array(2, 4, 6, 8, 10, 12, 14, 16)

num.filter(_%2==0).collect
//res11: Array[Int] = Array(2, 4, 6, 8)

val num2 = sc.parallelize(List(List(1,2), List(3,4), List(5,6), List(7,8)))
num2.flatMap(x=>x.map(_+1)).collect
//res12: Array[Int] = Array(2, 3, 4, 5, 6, 7, 8, 9)
  • Transformations: union/intersection/distinct
val a = sc.parallelize(List(1,2,2,3,3,4,5))
val b = sc.parallelize(List(3,4,5,6))

a.union(b).collect
//res19: Array[Int] = Array(1, 2, 2, 3, 3, 4, 5, 3, 4, 5, 6)

a.intersection(b).collect
//res21: Array[Int] = Array(3, 4, 5)

a.distinct.collect
//res17: Array[Int] = Array(1, 2, 3, 4, 5)
  • Transformations: key-value pairs
val g = sc.parallelize(List(("D",1), ("D",2), ("B",2), ("B",3), ("C",1), ("C",2)))

g.reduceByKey(_+_).collect
//res49: Array[(String, Int)] = Array((B,5), (C,3), (D,3))

g.sortByKey().collect
//res32: Array[(String, Int)] = Array((B,2), (B,3), (C,1), (C,2), (D,1), (D,2))

g.groupByKey().collect
//res50: Array[(String, Iterable[Int])] = Array((B,CompactBuffer(2, 3)), (C,CompactBuffer(1, 2)), (D,CompactBuffer(1, 2)))

val h = sc.parallelize(List(("C",3), ("C",4), ("C",5), ("D",1), ("D",2), ("E",1)))

g.join(h).collect
//res33: Array[(String, (Int, Int))] = Array((C,(1,3)), (C,(1,4)), (C,(1,5)), (C,(2,3)), (C,(2,4)), (C,(2,5)), (D,(1,1)), (D,(1,2)), (D,(2,1)), (D,(2,2)))  per matching key, the values are combined as a Cartesian product

g.cogroup(h).collect
//res34: Array[(String, (Iterable[Int], Iterable[Int]))] = Array((B,(CompactBuffer(2, 3),CompactBuffer())), (C,(CompactBuffer(1, 2),CompactBuffer(3, 4, 5))), (D,(CompactBuffer(1, 2),CompactBuffer(1, 2))), (E,(CompactBuffer(),CompactBuffer(1))))  each RDD groups its own values first, then the groups are paired up by key
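aggregateByKey from the transformation list above is not demoed; a minimal sketch on the same g, building a per-key (sum, count) pair from which an average could be derived:

g.aggregateByKey((0, 0))(
  (acc, v) => (acc._1 + v, acc._2 + 1),   // seqOp: fold one value into a partition-local accumulator
  (a, b) => (a._1 + b._1, a._2 + b._2)    // combOp: merge accumulators across partitions
).collect
// e.g. Array((B,(5,2)), (C,(3,2)), (D,(3,2)))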

  • Actions

A job is only actually generated and executed when an action runs; transformations by themselves are lazy.
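A minimal sketch of this laziness:

val squared = sc.parallelize(1 to 1000000).map(x => x.toLong * x)  // returns immediately, no job yet
squared.count                                                      // the action triggers the actual computation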

val c = sc.parallelize(List(1,2,3,4,5,6,7))

c.count
//res51: Long = 7

c.sum
//res52: Double = 28.0

c.reduce(_+_)
//res53: Int = 28

c.foreach(println)
/*
2
4
6
3
5
7
1
*/

val a = sc.parallelize(List("A","B","C","D"))
a.repartition(1).saveAsTextFile("hdfs://localhost:9000/output/a")
//likewise saveAsObjectFile, saveAsSequenceFile, and so on
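first, take, and countByKey from the action list above are not shown; minimal examples, reusing c and the pair RDD g from the key-value demo:

c.first
// Int = 1
c.take(3)
// Array[Int] = Array(1, 2, 3)
g.countByKey
// e.g. Map(B -> 2, C -> 2, D -> 2)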
  • Caching: cache

One more very important facility (strictly speaking cache() is not an action: it is itself lazy, and the data is materialized the first time an action runs over the RDD). Spark prefers to store and compute in memory, so hot data can be pinned in memory explicitly; the cached data is itself distributed across the cluster, so in practice it can be very large.

val c = sc.textFile("hdfs://localhost:9000/data/word.data").cache()

c.count
//res0: Long = 5    first access populates the cache, so it is slower

c.count
//res1: Long = 5    second access reads straight from the cache, much faster

At this point the Storage tab of the Spark monitoring UI shows that the data has been cached:

(Figure: cached data shown in the Storage tab)
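As a side note, cache() is shorthand for persist(StorageLevel.MEMORY_ONLY); a different storage level can be chosen via persist, e.g. spilling to disk when memory is tight (a sketch; an already-cached RDD must be unpersisted before its level can change):

import org.apache.spark.storage.StorageLevel
c.unpersist()                             // drop the existing in-memory cache
c.persist(StorageLevel.MEMORY_AND_DISK)   // cache in memory, spill partitions to disk if needed
c.count                                   // the next action re-materializes the cache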

