2016 Lecture 4: Mastering Scala Pattern Matching and the Type System, with Spark Source Code Reading

Lecture notes, 2016-01-04, 19:00-21:00

Scala pattern matching can match on values, types, and collections.

(1) Value matching

// a. Match data when it is "Spark", "Hadoop", or anything else
def bigData(data: String): Unit = {
  data match {
    case "Spark" => println("Wow!!!")
    case "Hadoop" => println("Ok")
    case _ => println("Something else")
  }
}
bigData: (data: String)Unit

>bigData("Spark")
Wow!!!
>bigData("Hadoop")
Ok
>bigData("else")
Something else


// b. Note the form used to match "Flink": bind the value to a pattern
// variable (data_) and test it in a guard
def bigData(data: String): Unit = {
  data match {
    case "Spark" => println("Wow!!!")
    case "Hadoop" => println("Ok")
    case data_ if data_ == "Flink" => println("Cool")
    case _ => println("Something else")
  }
}

>bigData("Flink")
Cool


// c. The guard can also test the parameter directly, with a wildcard pattern
def bigData(data: String): Unit = {
  data match {
    case "Spark" => println("Wow!!!")
    case "Hadoop" => println("Ok")
    case _ if data == "Flink" => println("Cool")
    case _ => println("Something else")
  }
}

>bigData("Flink")
Cool

// d. The pattern variable may even reuse the name data, shadowing the
// parameter inside the case
def bigData(data: String): Unit = {
  data match {
    case "Spark" => println("Wow!!!")
    case "Hadoop" => println("Ok")
    case data if data == "Flink" => println("Cool")
    case _ => println("Something else")
  }
}

>bigData("Flink")
Cool

(2) Type matching

import java.io._

def exception(e: Exception): Unit = {
  e match {
    case fileException: FileNotFoundException => println("File not found: " + fileException)
    case _: Exception => println("Exception: " + e)
  }
}

>exception(new FileNotFoundException("Oops!!!"))
File not found: java.io.FileNotFoundException: Oops!!!


(3) Collection matching

def data(array: Array[String]): Unit = {
  array match {
    case Array("Scala") => println("Scala")
    case Array(spark, hadoop, flink) => println(spark + " : " + hadoop + " : " + flink)
    case Array("Spark", _*) => println("Spark ...")
    case _ => println("Unknown")
  }
}

>data(Array("Scala"))
Scala

>data(Array("hello", "world", "google"))
hello : world : google

>data(Array("Spark"))
Spark ...

>data(Array("Spark", "test"))
Spark ...

Note that cases are tried in order: a three-element array starting with "Spark" would match the second case, not Array("Spark", _*).

Type system: case classes

// A case class is roughly the equivalent of a Java Bean: for
// case class Person(name: String) the compiler generates field accessors,
// a companion apply method (so it is instantiated as Person("Spark"),
// without new), and an unapply method that enables pattern matching.

>class Person

>case class Worker(name: String, salary: Double) extends Person

>case class Student(name: String, score: Double) extends Person

// Person is a plain class here because a case class cannot extend another case class.

def sayHi(person: Person): Unit = {
  person match {
    case Student(name, score) => println(name + score)
    case Worker(name, salary) => println(name + salary)
    case _ => println("Unknown")
  }
}

>sayHi(Worker("Spark ", 6.4))
Spark 6.4
>sayHi(Student("Spark ", 6.5))
Spark 6.5
>sayHi(Student("Hadoop ", 6.6))
Hadoop 6.6
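One refinement worth knowing (not shown in the lecture): declaring the base type sealed lets the compiler warn when a match over it is not exhaustive. A small sketch with our own Shape hierarchy:

```scala
// sealed restricts subclasses to this file, enabling exhaustiveness checking
sealed trait Shape
case class Circle(radius: Double) extends Shape
case class Rect(w: Double, h: Double) extends Shape

def area(s: Shape): Double = s match {
  case Circle(r)  => math.Pi * r * r
  case Rect(w, h) => w * h // dropping a case here would trigger a compiler warning
}

println(area(Rect(2.0, 3.0))) // 6.0
```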


Type hierarchy

Generic classes, generic functions, type bounds (upper and lower bounds), view bounds, covariance and contravariance, and so on.

Type parameters are mainly used to constrain types, which strengthens the robustness of the code.

1. Generic classes and generic functions
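A minimal sketch of both forms (Box and middle are our own illustrative names):

```scala
// Generic class: the type parameter T is fixed when an instance is created
class Box[T](val value: T) {
  def get: T = value
}

// Generic function: T is inferred from the argument type
def middle[T](elems: Array[T]): T = elems(elems.length / 2)

println(new Box[String]("Spark").get) // Spark
println(middle(Array(1, 2, 3)))       // 2
```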

2. Type bounds

A type parameter can also be given bounds: an upper bound, written T <: A, restricts T to A or a subclass of A; a lower bound, written T >: A, restricts T to A or a superclass of A.
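A small sketch of both kinds of bound (smaller and prepend are our own names):

```scala
// Upper bound: T must be a subtype of Comparable[T], so compareTo is available
def smaller[T <: Comparable[T]](a: T, b: T): T =
  if (a.compareTo(b) <= 0) a else b

// Lower bound: the prepended element may be a supertype of the list's
// element type; the result element type widens accordingly
def prepend[T, U >: T](x: U, xs: List[T]): List[U] = x :: xs

println(smaller("Hadoop", "Spark")) // Hadoop
println(prepend(1.5, List(1, 2)))   // List(1.5, 1, 2) -- U is inferred as AnyVal
```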

3. View bounds

A view bound, written T <% V, requires an implicit conversion from T to V to be in scope. (This syntax is deprecated in current Scala in favour of passing the conversion as an implicit parameter.)
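A minimal sketch of the view-bound syntax (biggest is our own name; in Scala 2 this compiles with a deprecation warning, and the syntax is removed in Scala 3):

```scala
// View bound: T <% Ordered[T] requires an implicit conversion T => Ordered[T];
// Int and String both get one from the standard library (RichInt, StringOps)
def biggest[T <% Ordered[T]](a: T, b: T): T = if (a > b) a else b

println(biggest(3, 5))             // 5
println(biggest("Flink", "Spark")) // Spark
```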

4. Context bounds: Manifest -> ClassTag

Manifest has been superseded by ClassTag. In the Spark source, RDD declares its element type with a ClassTag context bound:

abstract class RDD[T: ClassTag](
    @transient private var _sc: SparkContext,
    @transient private var deps: Seq[Dependency[_]]
  )

RDD[T: ClassTag] is a context bound on T: it desugars to an extra implicit parameter of type ClassTag[T], which preserves enough runtime type information (despite erasure) to, for example, create an Array[T].

//
{{{
* scala> def mkArray[T : ClassTag](elems: T*) = Array[T](elems: _*)
* mkArray: [T](elems: T*)(implicit evidence$1: scala.reflect.ClassTag[T])Array[T]
*
* scala> mkArray(42, 13)
* res0: Array[Int] = Array(42, 13)
*
* scala> mkArray("Japan","Brazil","Germany")
* res1: Array[String] = Array(Japan, Brazil, Germany)
* }}}
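Covariance and contravariance were listed above but not worked through; a minimal sketch with our own Crate and Handler classes:

```scala
class Fruit
class Apple extends Fruit

// Covariant (+T): Crate[Apple] is a subtype of Crate[Fruit]
class Crate[+T](val contents: T)
val fruits: Crate[Fruit] = new Crate[Apple](new Apple) // compiles because of +T

// Contravariant (-T): Handler[Fruit] is a subtype of Handler[Apple]
class Handler[-T] {
  def handle(t: T): String = "handled " + t.getClass.getSimpleName
}
val appleHandler: Handler[Apple] = new Handler[Fruit] // compiles because of -T

println(appleHandler.handle(new Apple)) // handled Apple
```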

Homework:
Read the Spark source code for RDD, HadoopRDD, SparkContext, Master, and Worker,
and analyse every use of pattern matching and type parameters in them.
Get the code from http://weibo.com/ilovepains and complete the assignment.


