Spark

The idea is to use Spark's built-in combineByKeyWithClassTag function and rely on a sorted TreeSet per key; in this example we keep the largest N elements within each group.

createCombiner simply puts the first element into a TreeSet and returns it;

mergeValue inserts the new element and then, if the set holds more than N elements, removes the smallest one;

mergeCombiners merges the two sets and then, if the total exceeds N, removes the smallest elements one by one until only N remain.

The same trimming idea is sketched below, followed by the full code.
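Before the full Spark job, here is a minimal local sketch (plain Scala collections, no Spark) of that trimming idea; the sample values are made up for illustration:

import scala.collection.mutable

val N = 3
val topN = mutable.TreeSet[Long]()
for (v <- Seq(3L, 4L, 1L, 9L, 7L)) {
  // insert the value, then drop the smallest until at most N remain,
  // so the set always holds the N largest values seen so far
  topN += v
  while (topN.size > N) topN.remove(topN.min)
}
println(topN) // TreeSet(4, 7, 9)
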
import org.apache.spark.rdd.RDD
import org.apache.spark.sql.SparkSession

import scala.collection.mutable

object Main {
  val N = 3

  def main(args: Array[String]): Unit = {
    val spark = SparkSession
      .builder()
      .master("local[*]")
      .appName("Spark")
      .getOrCreate()
    val sc = spark.sparkContext
    val sampleDataset = List(
      ("apple.com", 3L),
      ("apple.com", 4L),
      ("apple.com", 1L),
      ("apple.com", 9L),
      ("google.com", 4L),
      ("google.com", 1L),
      ("google.com", 2L),
      ("google.com", 3L),
      ("google.com", 11L),
      ("google.com", 32L),
      ("slashdot.org", 11L),
      ("slashdot.org", 12L),
      ("slashdot.org", 13L),
      ("slashdot.org", 14L),
      ("slashdot.org", 15L),
      ("slashdot.org", 16L),
      ("slashdot.org", 17L),
      ("slashdot.org", 18L),
      ("microsoft.com", 5L),
      ("microsoft.com", 2L),
      ("microsoft.com", 6L),
      ("microsoft.com", 9L),
      ("google.com", 4L))
    val urdd: RDD[(String, Long)] = sc.parallelize(sampleDataset)
    val topNs = urdd.combineByKeyWithClassTag(
      // createCombiner: put the first value for a key into a new TreeSet
      (firstValue: Long) => {
        val uset = new mutable.TreeSet[Long]()
        uset += firstValue
      },
      // mergeValue: add the value, then drop the smallest until only N remain
      (uset: mutable.TreeSet[Long], value: Long) => {
        uset += value
        while (uset.size > N) {
          uset.remove(uset.min)
        }
        uset
      },
      // mergeCombiners: merge the two sets, then drop the smallest until only N remain
      (uset1: mutable.TreeSet[Long], uset2: mutable.TreeSet[Long]) => {
        val resultSet = uset1 ++ uset2
        while (resultSet.size > N) {
          resultSet.remove(resultSet.min)
        }
        resultSet
      }
    )
    import spark.implicits._
    // Flatten each (key, TreeSet) pair into "key/value" strings, then split
    // them back into two columns for display
    topNs.flatMap { case (key, values) =>
      val uset = new mutable.HashSet[String]()
      for (value <- values.toList) {
        uset += key + "/" + value.toString
      }
      uset
    }.map { line =>
      (line.split("/")(0), line.split("/")(1))
    }.toDF("key", "TopN_values").show()
  }
}

The output of running this:

+-------------+-----------+
|          key|TopN_values|
+-------------+-----------+
|   google.com|          4|
|   google.com|         11|
|   google.com|         32|
|microsoft.com|          9|
|microsoft.com|          6|
|microsoft.com|          5|
|    apple.com|          4|
|    apple.com|          9|
|    apple.com|          3|
| slashdot.org|         16|
| slashdot.org|         17|
| slashdot.org|         18|
+-------------+-----------+
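
For comparison, the same per-key top N can also be expressed with a DataFrame window function. The following is only an illustrative sketch, not part of the original job; it reuses urdd, N, and spark from the listing above and assumes import spark.implicits._ is still in scope:

import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.{col, row_number}

// column names "key" and "value" are illustrative, not from the original post
val df = urdd.toDF("key", "value")
// rank values within each key, largest first, and keep the first N rows per key
val byKeyDesc = Window.partitionBy("key").orderBy(col("value").desc)
df.withColumn("rank", row_number().over(byKeyDesc))
  .filter(col("rank") <= N)
  .drop("rank")
  .show()

Unlike the combineByKey version, which trims each key down to N values inside the combiners before the shuffle, the window approach shuffles every row and ranks afterwards, so the combiner-based job tends to move less data on large inputs.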

The Maven dependencies used:

<dependencies>
    <!-- https://mvnrepository.com/artifact/org.apache.spark/spark-core_2.11 -->
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-core_2.11</artifactId>
        <version>2.2.0</version>
    </dependency>
    <!-- https://mvnrepository.com/artifact/org.apache.spark/spark-sql_2.11 -->
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-sql_2.11</artifactId>
        <version>2.2.0</version>
    </dependency>
</dependencies>
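
If you build with sbt rather than Maven, an equivalent (assumed) build.sbt would be roughly:

// assumes Scala 2.11; same artifacts and version as the Maven dependencies above
scalaVersion := "2.11.12"

libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core" % "2.2.0",
  "org.apache.spark" %% "spark-sql"  % "2.2.0"
)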