The default minimum number of partitions is <= 2; the exact value depends on the default parallelism
Understanding Spark's partitioning is important for performance tuning. When the user does not specify a partition count, the default number of partitions is at most 2; exactly how many depends on the default parallelism:
/**
* Default min number of partitions for Hadoop RDDs when not given by user
* Notice that we use math.min so the "defaultMinPartitions" cannot be higher than 2.
* When the user does not specify a partition count, the default is <= 2
*/
def defaultMinPartitions: Int = math.min(defaultParallelism, 2)
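A minimal sketch of how the `math.min` cap behaves, in plain Scala with no Spark dependency; `defaultParallelism` is passed in explicitly here (the parameter values are illustrative):

```scala
// Mirror of SparkContext.defaultMinPartitions, with defaultParallelism
// supplied as an argument so it can run without a Spark cluster.
def defaultMinPartitions(defaultParallelism: Int): Int =
  math.min(defaultParallelism, 2)

// A single-core machine yields 1; any machine with 2+ cores is capped at 2.
println(defaultMinPartitions(1))   // 1
println(defaultMinPartitions(24))  // 2
```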
Default parallelism
//Check the default parallelism (equals the number of logical CPU cores)
scala> sc.defaultParallelism
res0: Int = 24
//Number of physical CPUs (sockets)
grep 'physical id' /proc/cpuinfo | sort -u | wc -l
2
//Number of cores per physical CPU
grep 'core id' /proc/cpuinfo | sort -u | wc -l
6
//Number of logical CPU cores
grep 'processor' /proc/cpuinfo | sort -u | wc -l
24
On this machine, 2 sockets × 6 cores per socket × 2 hyperthreads per core = 24 logical cores.
Setting the parallelism
1. Set it in spark-defaults.conf; takes effect after restarting spark-shell:
spark.default.parallelism=48
scala> sc.defaultParallelism
res0: Int = 48
2. Specify the parallelism in code:
import org.apache.spark.sql.SparkSession

object Test {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("xxx").master("local")
      .config("spark.default.parallelism", 10)
      .getOrCreate()
    println(spark.sparkContext.defaultParallelism)
    spark.stop()
  }
}
//prints 10
3. Parallelism in local mode
In spark-shell, when master is not set to local[n], the default equals the number of logical CPU cores.
In spark-shell, when master is set to local[n], the default is n:
[hadoop@slave106 bin]$ spark-shell --master local[6]
...
scala> sc.defaultParallelism
res0: Int = 6
Number of partitions for sc.textFile("hdfs file")
Number of RDD partitions = max(number of HDFS blocks in the file, sc.defaultMinPartitions)
//Block size is 128 MB: test_1 has 1 block, test_2 has 5 blocks
[hadoop@slave106 yk]$ hdfs dfs -ls -h /test_*
-rw-r--r-- 3 hadoop supergroup 98.9 M 2019-08-12 22:34 /test_1
-rw-r--r-- 3 hadoop supergroup 592.8 M 2019-08-12 22:33 /test_2
scala> val part = sc.textFile("/test_1")
scala> part.partitions.size
res1: Int = 2
scala> val part = sc.textFile("/test_2")
scala> part.partitions.size
res2: Int = 5
//Specify the partition count when loading the file
scala> val part = sc.textFile("/test_2",6)
scala> part.partitions.size
res3: Int = 6
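The rule above can be sketched in plain Scala without Spark. The block size and file sizes mirror the listing above; `blocksFor` and `estimatePartitions` are hypothetical helper names, and this follows the document's approximation (Hadoop's real split logic recomputes split sizes, but the counts agree here):

```scala
// Illustrates: partitions = max(number of HDFS blocks, minPartitions)
// Sizes in bytes; blockSize mirrors the 128 MB HDFS default above.
val blockSize = 128L * 1024 * 1024

def blocksFor(fileSize: Long): Long =
  (fileSize + blockSize - 1) / blockSize  // ceiling division

def estimatePartitions(fileSize: Long, minPartitions: Int): Long =
  math.max(blocksFor(fileSize), minPartitions)

val mb = 1024L * 1024
// test_1: 98.9 MB -> 1 block, but max(1, 2) = 2 partitions
println(estimatePartitions((98.9 * mb).toLong, 2))   // 2
// test_2: 592.8 MB -> 5 blocks, max(5, 2) = 5 partitions
println(estimatePartitions((592.8 * mb).toLong, 2))  // 5
// test_2 loaded with an explicit minPartitions of 6 -> 6 partitions
println(estimatePartitions((592.8 * mb).toLong, 6))  // 6
```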
The number of tasks in each Spark stage is determined by the number of partitions.