Spark: sort and add row numbers with an initial offset

Sorting with Spark's sortBy and then attaching row numbers with zipWithIndex is a common pattern: it can be used to compute counts, attach an index to each record, or work out the rank of a value (e.g., how large the current value is).
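For the rank use case, sort in descending order before numbering. Below is a minimal sketch that could be pasted into spark-shell (where sc is already defined); the data array and the queried value 87 are just illustrative:

```scala
// Sketch: rank query via descending sortBy + zipWithIndex (assumes spark-shell).
val data = Array(1, 10, 12, 39, 23456, 8, 2, 3, 50, 87, 4, 1, 7, 3, 10000002, 5)

val ranked = sc.parallelize(data, 5)
  .sortBy(x => x, ascending = false)  // largest value first
  .zipWithIndex()
  .map { case (v, i) => (v, i + 1) }  // 1-based rank

// Rank of 87 (first match if the value appears more than once).
ranked.filter(_._1 == 87).first()._2  // -> 3: 87 is the 3rd largest value
```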

Case 1: globally sort and output the original values with their corresponding row-number index

Start spark-shell, submitting to YARN:
[user@host ~]$ spark-shell --master yarn --executor-memory 4G
scala> val data = Array(1, 10, 12, 39, 23456, 8, 2, 3, 50, 87, 4, 1, 7, 3, 10000002, 5);
data: Array[Int] = Array(1, 10, 12, 39, 23456, 8, 2, 3, 50, 87, 4, 1, 7, 3, 10000002, 5)

scala> var rdd = sc.parallelize(data, 5); // turn the data array into an RDD with 5 partitions
rdd: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[57] at parallelize at <console>:26

scala> var sortrdd = rdd.sortBy(x => x); // sort by value
sortrdd: org.apache.spark.rdd.RDD[Int] = MapPartitionsRDD[62] at sortBy at <console>:28

Pressing Tab after typing zipWith shows the two candidate methods:
scala> var seqrdd = sortrdd.zipWith
zipWithIndex   zipWithUniqueId

scala> var seqrdd = sortrdd.zipWithIndex(); // attach row numbers (0-based)
seqrdd: org.apache.spark.rdd.RDD[(Int, Long)] = ZippedWithIndexRDD[63] at zipWithIndex at <console>:30

scala> seqrdd.collect();
res25: Array[(Int, Long)] = Array((1,0), (1,1), (2,2), (3,3), (3,4), (4,5), (5,6), (7,7), (8,8), (10,9), (12,10), (39,11), (50,12), (87,13), (23456,14), (10000002,15))
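The tab completion above also offered zipWithUniqueId. That method is cheaper (zipWithIndex has to run an extra job to count the elements per partition when there is more than one partition), but its ids are merely unique, not consecutive, so it cannot replace zipWithIndex when real row numbers are needed. A quick sketch of the difference:

```scala
// Sketch: zipWithUniqueId needs no extra job, but its ids follow the pattern
// partitionIndex + k * numPartitions, so they are unique, not consecutive.
sortrdd.zipWithUniqueId().collect()
// with 5 partitions the ids per partition are drawn from {0,5,10,...}, {1,6,11,...}, ...
// fine for unique keys, wrong for row numbers
```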

Case 2: output the sorted values with row numbers, where the row number can start from a specified initial value (offset)

scala> val data = Array(1, 10, 12, 39, 23456, 8, 2, 3, 50, 87, 4, 1, 7, 3, 10000002, 5);
data: Array[Int] = Array(1, 10, 12, 39, 23456, 8, 2, 3, 50, 87, 4, 1, 7, 3, 10000002, 5)

scala> var rdd = sc.parallelize(data, 5); // turn the data array into an RDD with 5 partitions
rdd: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[57] at parallelize at <console>:26

scala> var sortrdd = rdd.sortBy(x => x); // sort by value
sortrdd: org.apache.spark.rdd.RDD[Int] = MapPartitionsRDD[62] at sortBy at <console>:28


# Building on Case 1, we only need to add a map operation:
~~scala> var seqrdd = sortrdd.zipWithIndex(); // attach row numbers~~
scala> seqrdd = sortrdd.zipWithIndex().map{ case (x, y) => (x, y + 10) }; // x is the original value, y is the row number; 10 is our starting row number (the default start is 0)
seqrdd: org.apache.spark.rdd.RDD[(Int, Long)] = MapPartitionsRDD[65] at map at <console>:32

scala> seqrdd.collect();
res26: Array[(Int, Long)] = Array((1,10), (1,11), (2,12), (3,13), (3,14), (4,15), (5,16), (7,17), (8,18), (10,19), (12,20), (39,21), (50,22), (87,23), (23456,24), (10000002,25))
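If this pattern comes up often, it can be wrapped in a small helper. This is only a sketch of a hypothetical function (sortedWithRowNumber is not a Spark API), generalizing Case 2 to any ordered element type and starting offset:

```scala
import org.apache.spark.rdd.RDD
import scala.reflect.ClassTag

// Hypothetical helper: globally sort an RDD and number the rows from `start`.
def sortedWithRowNumber[T: Ordering: ClassTag](rdd: RDD[T], start: Long = 0L): RDD[(T, Long)] =
  rdd.sortBy(identity).zipWithIndex().map { case (v, i) => (v, i + start) }

sortedWithRowNumber(rdd, 10L).collect() // same result as the map above
```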

Case 3: output the sorted values and row numbers, plus an extra row-number column with an initial offset

scala> val data = Array(1, 10, 12, 39, 23456, 8, 2, 3, 50, 87, 4, 1, 7, 3, 10000002, 5);
data: Array[Int] = Array(1, 10, 12, 39, 23456, 8, 2, 3, 50, 87, 4, 1, 7, 3, 10000002, 5)

scala> var rdd = sc.parallelize(data, 5); // turn the data array into an RDD with 5 partitions
rdd: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[57] at parallelize at <console>:26

scala> var sortrdd = rdd.sortBy(x => x); // sort by value
sortrdd: org.apache.spark.rdd.RDD[Int] = MapPartitionsRDD[62] at sortBy at <console>:28


# Building on Case 2, we just have the map emit one more column:
~~scala> var seqrdd = sortrdd.zipWithIndex(); // attach row numbers~~
~~scala> seqrdd = sortrdd.zipWithIndex().map{ case (x, y) => (x, y + 10) };~~
scala> var seqrdd2 = sortrdd.zipWithIndex().map{ case (x, y) => (x, y, y + 10) }; // x is the original value, y is the row number, y + 10 is the row number starting from our initial offset 10 (the default start is 0)
seqrdd2: org.apache.spark.rdd.RDD[(Int, Long, Long)] = MapPartitionsRDD[67] at map at <console>:30

scala> seqrdd2.collect();
res27: Array[(Int, Long, Long)] = Array((1,0,10), (1,1,11), (2,2,12), (3,3,13), (3,4,14), (4,5,15), (5,6,16), (7,7,17), (8,8,18), (10,9,19), (12,10,20), (39,11,21), (50,12,22), (87,13,23), (23456,14,24), (10000002,15,25))
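The flattening shown next for the two-column RDD works just as well for this three-column result; a quick sketch:

```scala
// Sketch: flatten the (value, rowNumber, offsetRowNumber) tuples into CSV-style strings.
seqrdd2.map { case (v, idx, off) => s"$idx,$off,$v" }.take(3).foreach(println)
// 0,10,1
// 1,11,1
// 2,12,2
```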
Remove the parentheses from the output, then save the result to HDFS. Note that the mapped RDD must be assigned (or chained) before saving; saving seqrdd directly would write the tuples with their parentheses:

scala> val strrdd = seqrdd.map(line => line._2 + "," + line._1); // format each row as "rowNumber,value"
strrdd: org.apache.spark.rdd.RDD[String] = MapPartitionsRDD[9] at map at <console>:31

scala> strrdd.saveAsTextFile("/user/prod_kylin/tmp/sparktest/kylin_intermediate_dm_pub_passenger_objec_o_spark_nglobbal_fac4f04e_60a0_3e3a_bf3a_488766f91446__group_by/");
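To spot-check what was written, the output directory can be read back as text (same path as above):

```scala
// Sketch: read the saved part files back and print a few lines;
// each line should look like "rowNumber,value", e.g. "10,1".
sc.textFile("/user/prod_kylin/tmp/sparktest/kylin_intermediate_dm_pub_passenger_objec_o_spark_nglobbal_fac4f04e_60a0_3e3a_bf3a_488766f91446__group_by/")
  .take(5)
  .foreach(println)
```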