Higher-Order Functions (14)

Introduction

A higher-order function (Higher-Order Function) is a function that operates on other functions. Scala supports higher-order functions: such a function can take other functions as parameters, or return a function as its result.

Simple Examples

Functions as Parameters

object Test {
   def main(args: Array[String]): Unit = {

      println( apply( layout, 10) )   // prints [10]

   }
   // The function f and the value v are both parameters; f is then applied to v
   def apply(f: Int => String, v: Int) = f(v)

   def layout[A](x: A) = "[" + x.toString() + "]"

}

Functions as Return Values

object Test {
    def main(args: Array[String]): Unit = {
        val x = multiplyBy(10)
        println( x(50) )   // prints 500.0
    }

    def multiplyBy(factor: Double) = (x: Double) => factor * x
}

Common Higher-Order Functions

The map function

The map function is available on every collection type.

Array

// An anonymous function is used here; multiplying a String by n yields the string repeated n times, a feature of Scala's String operations
scala> Array("spark","hive","hadoop").map((x:String)=>x*2)
res3: Array[String] = Array(sparkspark, hivehive, hadoophadoop)

// As mentioned in the section on functions and closures, the code above can be simplified further
// Omit the parameter type of the anonymous function
scala> Array("spark","hive","hadoop").map((x)=>x*2)
res4: Array[String] = Array(sparkspark, hivehive, hadoophadoop)

// With a single parameter, the parentheses can also be omitted
scala> Array("spark","hive","hadoop").map(x=>x*2)
res5: Array[String] = Array(sparkspark, hivehive, hadoophadoop)

// If the parameter appears only once on the right-hand side, the placeholder notation can be used
scala> Array("spark","hive","hadoop").map(_*2)
res6: Array[String] = Array(sparkspark, hivehive, hadoophadoop)

List

scala> val list=List("Spark"->1,"hive"->2,"hadoop"->2)
list: List[(String, Int)] = List((Spark,1), (hive,2), (hadoop,2))

// Style 1
scala> list.map(x=>x._1)
res20: List[String] = List(Spark, hive, hadoop)
// Style 2
scala> list.map(_._1)
res21: List[String] = List(Spark, hive, hadoop)

scala> list.map(_._2)
res22: List[Int] = List(1, 2, 2)

Map

// Style 1
scala> Map("spark"->1,"hive"->2,"hadoop"->3).map(_._1)
res23: scala.collection.immutable.Iterable[String] = List(spark, hive, hadoop)

scala> Map("spark"->1,"hive"->2,"hadoop"->3).map(_._2)
res24: scala.collection.immutable.Iterable[Int] = List(1, 2, 3)

// Style 2
scala> Map("spark"->1,"hive"->2,"hadoop"->3).map(x=>x._2)
res25: scala.collection.immutable.Iterable[Int] = List(1, 2, 3)

scala> Map("spark"->1,"hive"->2,"hadoop"->3).map(x=>x._1)
res26: scala.collection.immutable.Iterable[String] = List(spark, hive, hadoop)
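
When the function passed to map returns a key-value pair, the result is again a Map rather than a generic Iterable. A minimal sketch (multiplying each value by 10, purely for illustration):

scala> Map("spark"->1,"hive"->2,"hadoop"->3).map(x=>(x._1,x._2*10))
res27: scala.collection.immutable.Map[String,Int] = Map(spark -> 10, hive -> 20, hadoop -> 30)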

The flatMap function

// Style 1
scala> List(List(1,2,3),List(2,3,4)).flatMap(x=>x)
res40: List[Int] = List(1, 2, 3, 2, 3, 4)

// Style 2
scala> List(List(1,2,3),List(2,3,4)).flatMap(x=>x.map(y=>y))
res41: List[Int] = List(1, 2, 3, 2, 3, 4)
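
flatMap shows its value when the applied function itself returns a collection: each inner collection is flattened into the final result. A small sketch (splitting strings on spaces, chosen only for illustration):

scala> List("spark hive","hadoop hdfs").flatMap(_.split(" "))
res42: List[String] = List(spark, hive, hadoop, hdfs)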

The filter function

scala> Array(1,2,4,3,5).filter(_>3)
res48: Array[Int] = Array(4, 5)

scala> List("List","Set","Array").filter(_.length>3)
res49: List[String] = List(List, Array)

scala> Map("List"->3,"Set"->5,"Array"->7).filter(_._2>3)
res50: scala.collection.immutable.Map[String,Int] = Map(Set -> 5, Array -> 7)

The reduce function

// Style 1
scala> Array(1,2,4,3,5).reduce(_+_)
res51: Int = 15

scala> List("Spark","Hive","Hadoop").reduce(_+_)
res52: String = SparkHiveHadoop

// Style 2
scala> Array(1,2,4,3,5).reduce((x:Int,y:Int)=>{println(x,y);x+y})
(1,2)
(3,4)
(7,3)
(10,5)
res60: Int = 15

scala> Array(1,2,4,3,5).reduceLeft((x:Int,y:Int)=>{println(x,y);x+y})
(1,2)
(3,4)
(7,3)
(10,5)
res61: Int = 15

scala> Array(1,2,4,3,5).reduceRight((x:Int,y:Int)=>{println(x,y);x+y})
(3,5)
(4,8)
(2,12)
(1,14)
res62: Int = 15

The fold function

scala> Array(1,2,4,3,5).foldLeft(0)((x:Int,y:Int)=>{println(x,y);x+y})
(0,1)
(1,2)
(3,4)
(7,3)
(10,5)
res66: Int = 15

scala> Array(1,2,4,3,5).foldRight(0)((x:Int,y:Int)=>{println(x,y);x+y})
(5,0)
(3,5)
(4,8)
(2,12)
(1,14)
res67: Int = 15

scala> Array(1,2,4,3,5).foldLeft(0)(_+_)
res68: Int = 15

scala> Array(1,2,4,3,5).foldRight(10)(_+_)
res69: Int = 25

// /: is equivalent to foldLeft
scala> (0 /: Array(1,2,4,3,5)) (_+_)
res70: Int = 15


scala> (0 /: Array(1,2,4,3,5)) ((x:Int,y:Int)=>{println(x,y);x+y})
(0,1)
(1,2)
(3,4)
(7,3)
(10,5)
res72: Int = 15
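
Symmetrically, :\ is the symbolic shorthand for foldRight (like /:, it is deprecated in more recent Scala versions); a quick sketch mirroring the foldRight example above:

scala> (Array(1,2,4,3,5) :\ 10) (_+_)
res73: Int = 25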

Currying

scala> def multiplyBy(factor:Double)=(x:Double)=>factor*x
multiplyBy: (factor: Double)Double => Double

// This is another way of invoking a higher-order function
scala> multiplyBy(10)(50)
res77: Double = 500.0
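
Scala also supports currying directly through multiple parameter lists. A minimal sketch with a hypothetical multiplyByCurried; unlike multiplyBy above, this is a single method with two parameter lists rather than a method that returns a function, but it is called the same way:

scala> def multiplyByCurried(factor:Double)(x:Double)=factor*x
multiplyByCurried: (factor: Double)(x: Double)Double

scala> multiplyByCurried(10)(50)
res78: Double = 500.0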

Partially Applied Functions

A partially applied function arises when a function takes several parameters but you do not supply all of them at the call site. For example, for a function with three parameters, you might supply only zero, one, or two arguments.

// Define a sum function
scala> def sum(x:Int,y:Int,z:Int)=x+y+z
sum: (x: Int, y: Int, z: Int)Int

// A partially applied function with no arguments supplied
scala> val s1=sum _
s1: (Int, Int, Int) => Int = <function3>

scala> s1(1,2,3)
res91: Int = 6

// A partially applied function with two arguments supplied
scala> val s2=sum(1,_:Int,3)
s2: Int => Int = <function1>

scala> s2(2)
res92: Int = 6

// A partially applied function with one argument supplied
scala> val s3=sum(1,_:Int,_:Int)
s3: (Int, Int) => Int = <function2>

scala> s3(2,3)
res93: Int = 6
// A partially applied form of the multiplyBy function; it returns a function
scala> val m=multiplyBy(10)_
m: Double => Double = <function1>

scala> m(50)
res94: Double = 500.0

Note: here the underscore _ does not act as a placeholder; it marks the definition of a partially applied function.
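
The two roles of the underscore can be contrasted directly; a small sketch (s4 is a hypothetical name):

scala> List(1,2,3).map(_*2)   // placeholder: _ stands for each element in turn
res95: List[Int] = List(2, 4, 6)

scala> val s4=sum _           // partial application: _ turns the method sum into a function value
s4: (Int, Int, Int) => Int = <function3>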


Devoted to technology, passionate about sharing. Follow the WeChat public account java大數據編程 for more technical content.
