The role and usage of checkpoint in Spark
2017-07-27 23:19:11
A checkpoint is, as the name suggests, a recovery point, similar to a snapshot. In a Spark job the computation DAG can be very long, and the whole DAG must be executed to produce the final result. If intermediate data is lost partway through, Spark recomputes it from scratch by following the RDD lineage, which is expensive. We can keep intermediate results in memory or on disk with cache or persist, but that still does not guarantee the data will never be lost: if the memory or disk holding it fails, Spark again has to recompute from the start of the lineage. That is why checkpoint exists: it takes the important intermediate data in the DAG and saves it to a highly available store, which is usually HDFS.
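The remedy described above can be sketched end to end (a minimal sketch in local mode; the app name and checkpoint directory are placeholders, and on a cluster the directory would be an HDFS path):

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Local mode for illustration only; on a cluster use a real master URL.
val sc = new SparkContext(
  new SparkConf().setAppName("checkpoint-sketch").setMaster("local[2]"))

// Must be set before checkpoint() is called; normally an HDFS path.
sc.setCheckpointDir("/tmp/checkpoint-sketch")

val rdd = sc.parallelize(1 to 100).map(_ * 2)
rdd.cache()           // keep the data so the checkpoint write does not recompute it
rdd.checkpoint()      // mark the RDD; nothing is written until an action runs
val total = rdd.sum() // the action triggers both the job and the checkpoint write
sc.stop()
```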
· To talk about checkpoint, we first have to talk about RDD lineage
For example, when we compute a word count:
sc.textFile("hdfspath").flatMap(_.split(" ")).map((_,1)).reduceByKey(_+_).saveAsTextFile("hdfspath")
1. When textFile reads from HDFS, a HadoopRDD is created first. This RDD reads the HDFS data with the byte offset as the key and a line of text as the value. Since the offset is usually of no use, the HadoopRDD is then converted into a MapPartitionsRDD that keeps only the line data.
2. flatMap produces a MapPartitionsRDD
3. map produces a MapPartitionsRDD
4. reduceByKey produces a ShuffledRDD
5. saveAsTextFile produces a MapPartitionsRDD
We can inspect this lineage with toDebugString:
scala> val rdd = sc.textFile("hdfs://lijie:9000/checkpoint0727/c1a51ee9-1daf-4169-991e-b290f88bac20/rdd-0/part-00000").flatMap(_.split(" ")).map((_,1)).reduceByKey(_+_)
rdd: org.apache.spark.rdd.RDD[(String, Int)] = ShuffledRDD[29] at reduceByKey at <console>:27
scala> rdd.toDebugString
res3: String =
(2) ShuffledRDD[29] at reduceByKey at <console>:27 []
+-(2) MapPartitionsRDD[28] at map at <console>:27 []
| MapPartitionsRDD[27] at flatMap at <console>:27 []
| hdfs://lijie:9000/checkpoint0727/c1a51ee9-1daf-4169-991e-b290f88bac20/rdd-0/part-00000 MapPartitionsRDD[26] at textFile at <console>:27 []
| hdfs://lijie:9000/checkpoint0727/c1a51ee9-1daf-4169-991e-b290f88bac20/rdd-0/part-00000 HadoopRDD[25] at textFile at <console>:27 []
· How to create a checkpoint
First, set the HDFS checkpoint directory through the SparkContext. If it is not set, using checkpoint throws an exception: throw new SparkException("Checkpoint directory has not been set in the SparkContext"):
scala> sc.setCheckpointDir("hdfs://lijie:9000/checkpoint0727")
After running the code above, a directory is created in HDFS:
/checkpoint0727/c1a51ee9-1daf-4169-991e-b290f88bac20
Then perform the checkpoint:
scala> val rdd1 = sc.parallelize(1 to 10000)
rdd1: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[0] at parallelize at <console>:27
scala> rdd1.checkpoint
At this point HDFS still contains no data; the data appears only after running an action (such as the sum below). This shows that checkpoint is lazy, like a transformation: marking the RDD writes nothing until a job is executed on it.
scala> rdd1.sum
res2: Double = 5.0005E7
# Meanwhile, on HDFS:
[root@lijie hadoop]# hadoop dfs -ls /checkpoint0727/c1a51ee9-1daf-4169-991e-b290f88bac20/rdd-0
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
Found 2 items
-rw-r--r-- 3 root supergroup 53404 2017-07-24 14:26 /checkpoint0727/c1a51ee9-1daf-4169-991e-b290f88bac20/rdd-0/part-00000
-rw-r--r-- 3 root supergroup 53404 2017-07-24 14:26 /checkpoint0727/c1a51ee9-1daf-4169-991e-b290f88bac20/rdd-0/part-00001
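That checkpoint is lazy can also be verified programmatically with isCheckpointed (a sketch assuming a fresh local SparkContext; the directory is a placeholder):

```scala
import org.apache.spark.{SparkConf, SparkContext}

val sc = new SparkContext(
  new SparkConf().setAppName("ckpt-lazy").setMaster("local[2]"))
sc.setCheckpointDir("/tmp/ckpt-lazy") // placeholder; use HDFS on a cluster

val rdd1 = sc.parallelize(1 to 10000)
rdd1.checkpoint()
val before = rdd1.isCheckpointed // false: only marked, nothing written yet
val total  = rdd1.sum()          // the action triggers the checkpoint write
val after  = rdd1.isCheckpointed // true: the data is now in the checkpoint dir
sc.stop()
```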
However, this runs the pipeline twice: once when sum computes the result, and once more when the checkpoint is written. So we usually cache first and then checkpoint; the pipeline then runs only once, and the checkpoint write takes the data just cached in memory and puts it into HDFS, like this:
rdd.cache()
rdd.checkpoint()
rdd.collect
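The double computation can be made visible with an accumulator that counts how many times the map function actually runs (a sketch assuming Spark 2.x's longAccumulator; without cache() the count job and the checkpoint write each execute the map once):

```scala
import org.apache.spark.{SparkConf, SparkContext}

val sc = new SparkContext(
  new SparkConf().setAppName("ckpt-twice").setMaster("local[2]"))
sc.setCheckpointDir("/tmp/ckpt-twice") // placeholder; use HDFS on a cluster

val mapCalls = sc.longAccumulator("map-calls")
val rdd = sc.parallelize(1 to 100, 2).map { x => mapCalls.add(1); x }

// No cache(): the action computes the map once, and writing the checkpoint
// recomputes the lineage, so the function runs 200 times in total.
rdd.checkpoint()
rdd.count()
val calls = mapCalls.value
sc.stop()
```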
In short, it is strongly recommended to cache before checkpointing. Also, once the checkpoint has succeeded, all references to the RDD's parent dependencies are removed, as the Scaladoc states:
/**
* Mark this RDD for checkpointing. It will be saved to a file inside the checkpoint
* directory set with `SparkContext#setCheckpointDir` and all references to its parent
* RDDs will be removed. This function must be called before any job has been
* executed on this RDD. It is strongly recommended that this RDD is persisted in
* memory, otherwise saving it on a file will require recomputation.
*/
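The truncation of the lineage can be observed by comparing toDebugString before and after the checkpoint is materialized (a sketch in local mode; after the action the parent RDDs are replaced by a checkpoint RDD):

```scala
import org.apache.spark.{SparkConf, SparkContext}

val sc = new SparkContext(
  new SparkConf().setAppName("ckpt-lineage").setMaster("local[2]"))
sc.setCheckpointDir("/tmp/ckpt-lineage") // placeholder; use HDFS on a cluster

val rdd = sc.parallelize(Seq("a b", "b c"))
  .flatMap(_.split(" "))
  .map((_, 1))
  .reduceByKey(_ + _)

rdd.cache()
rdd.checkpoint()
val beforeLineage = rdd.toDebugString // still shows the flatMap/map parents
rdd.collect()
val afterLineage = rdd.toDebugString  // parents replaced by a CheckpointRDD
sc.stop()
```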