A window function is called with the syntax: function_name(column) over(partition by column order by column)
If you have never seen a window function, this syntax may look puzzling, but if you know a few aggregate functions it is easy to understand. An aggregate function computes a single value from a group of values — sum(), count(), max(), min(), avg(), and so on — and is usually used together with a GROUP BY clause. However, returning just one value per group is not always enough. For example, we often want to know: who is the top student in each region? Who are the top few students in each class? Questions like these require returning multiple rows per group, and window functions solve them very conveniently. For instance, to find the top-ranked student in each class:
select * from
    (select name, class, score, rank() over(partition by class order by score desc) as rank from score) as t
where t.rank = 1
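To see the difference concretely, compare an aggregate query with its windowed counterpart against the same score table used below. This is a sketch; the windowed max(score) query is for illustration only and does not appear in the original code:

```sql
-- group by collapses each class to a single summary row:
select class, max(score) from score group by class;

-- the same aggregate as a window function keeps every row,
-- attaching the per-class maximum to each student:
select name, class, score,
       max(score) over(partition by class) as class_max
from score;
```

The second query returns one row per student, so the name and score columns survive alongside the aggregate, which is exactly what the group-by form cannot do without a self-join.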
The Spark code is as follows:
import org.apache.spark.SparkConf
import org.apache.spark.sql.SparkSession

// sample-data case class; the column names are overridden by toDF below
case class Score(name: String, clazz: Int, score: Int)

object OverFunction extends App {
  val sparkConf = new SparkConf().setAppName("over").setMaster("local[*]")
  val spark = SparkSession.builder().config(sparkConf).getOrCreate()
  import spark.implicits._

  println("//*************** original class score table ****************//")
  val scoreDF = spark.sparkContext.makeRDD(Array(
    Score("a", 1, 80),
    Score("b", 1, 78),
    Score("c", 1, 95),
    Score("d", 2, 74),
    Score("e", 2, 92),
    Score("f", 3, 99),
    Score("g", 3, 99),
    Score("h", 3, 45),
    Score("i", 3, 55),
    Score("j", 3, 78))).toDF("name", "class", "score")
  scoreDF.createOrReplaceTempView("score")
  scoreDF.show()

  println("//*************** top-scoring student in each class ***************//")
  println("/******* table produced by the window function ********/")
  spark.sql("select name, class, score, rank() over(partition by class order by score desc) rank from score").show()

  println("/******* result table *******/")
  spark.sql("select * from " +
    "(select name, class, score, rank() over(partition by class order by score desc) rank from score) " +
    "as t " +
    "where t.rank = 1").show()
  // row_number() numbers tied scores consecutively instead of giving them the same rank:
  // spark.sql("select name, class, score, row_number() over(partition by class order by score desc) rank from score").show()

  println("/*************** top-scoring student in each class (group by) ***************/")
  spark.sql("select class, max(score) max from score group by class").show()
  // note: the join must also match on class, otherwise a student from another
  // class who happens to have the same score would be returned as well
  spark.sql("select a.name, b.class, b.max from score a, " +
    "(select class, max(score) max from score group by class) as b " +
    "where a.class = b.class and a.score = b.max").show()

  spark.stop()
}
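The commented-out row_number() line above matters because class 3 contains two students tied at 99. The three ranking functions treat ties differently; here is a sketch of the same query with all three side by side (the aliases rnk, drnk, and rn are my own, not from the original code):

```sql
select name, score,
       rank()       over(partition by class order by score desc) as rnk,
       dense_rank() over(partition by class order by score desc) as drnk,
       row_number() over(partition by class order by score desc) as rn
from score
where class = 3
```

For class 3 (scores 99, 99, 78, 55, 45), rank() yields 1, 1, 3, 4, 5 (leaving a gap after the tie), dense_rank() yields 1, 1, 2, 3, 4, and row_number() yields 1, 2, 3, 4, 5. Note that which of the two tied 99-point students gets row_number 1 is not deterministic, so filtering with where rn = 1 would return only one of them, while the rank-based query in the main code correctly returns both.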