The default minimum partition count is <= 2; the exact value depends on the default parallelism.
Understanding Spark's partitioning is important for performance tuning. When the user does not specify a partition count, the default partition count is <= 2; exactly what it is depends on the default parallelism.
/**
* Default min number of partitions for Hadoop RDDs when not given by user
* Notice that we use math.min so the "defaultMinPartitions" cannot be higher than 2.
* When the user does not specify a partition count, the default is <= 2
*/
def defaultMinPartitions: Int = math.min(defaultParallelism, 2)
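The cap can be sanity-checked in plain Scala, without a SparkContext (a standalone sketch of the formula above; the free-standing function is a stand-in for the SparkContext method):

```scala
// Standalone sketch of the formula above: the result never exceeds 2,
// no matter how large the default parallelism grows.
def defaultMinPartitions(defaultParallelism: Int): Int =
  math.min(defaultParallelism, 2)

println(defaultMinPartitions(1))   // 1
println(defaultMinPartitions(24))  // 2
println(defaultMinPartitions(48))  // 2
```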
Default parallelism
// Check the default parallelism (equals the number of logical CPU cores)
scala> sc.defaultParallelism
res0: Int = 24
// Count physical CPUs (sockets)
grep 'physical id' /proc/cpuinfo | sort -u | wc -l
2
// Count cores per CPU
grep 'core id' /proc/cpuinfo | sort -u | wc -l
6
// Count logical cores
grep 'processor' /proc/cpuinfo | sort -u | wc -l
24
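The three readings above are mutually consistent: logical cores = physical CPUs × cores per CPU × threads per core (a sketch; the factor of 2 for hyper-threading is inferred from the numbers, not shown in /proc/cpuinfo directly):

```scala
// Sketch: how the three /proc/cpuinfo readings above fit together.
val sockets        = 2  // physical CPUs ("physical id")
val coresPerSocket = 6  // cores per CPU ("core id")
val threadsPerCore = 2  // hyper-threading (assumed from the totals)

println(sockets * coresPerSocket * threadsPerCore) // 24 logical cores
```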
Setting the parallelism
1. Set it in spark-defaults.conf; takes effect after restarting spark-shell
spark.default.parallelism=48
scala> sc.defaultParallelism
res0: Int = 48
2. Specify the parallelism in code
import org.apache.spark.sql.SparkSession

object Test {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("xxx").master("local")
      .config("spark.default.parallelism", 10)
      .getOrCreate()
    println(spark.sparkContext.defaultParallelism)
    spark.stop()
  }
}
// prints 10
3. Local-mode parallelism
In spark-shell, when master is not set to local[n], the default equals the number of logical CPU cores.
In spark-shell, when master is set to local[n], the default is n.
[hadoop@slave106 bin]$ spark-shell --master local[6]
...
scala> sc.defaultParallelism
res0: Int = 6
Partition count of sc.textFile("hdfs file")
The RDD's partition count = max(number of HDFS blocks in the file, sc.defaultMinPartitions)
// Block size is 128 MB: test_1 has one block, test_2 has five blocks
[hadoop@slave106 yk]$ hdfs dfs -ls -h /test_*
-rw-r--r-- 3 hadoop supergroup 98.9 M 2019-08-12 22:34 /test_1
-rw-r--r-- 3 hadoop supergroup 592.8 M 2019-08-12 22:33 /test_2
scala> val part = sc.textFile("/test_1")
scala> part.partitions.size
res1: Int = 2
scala> val part = sc.textFile("/test_2")
scala> part.partitions.size
res2: Int = 5
// Specify the partition count when loading the file
scala> val part = sc.textFile("/test_2",6)
scala> part.partitions.size
res3: Int = 6
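The formula above can be checked against both files in plain Scala (a sketch; counting blocks by ceiling division over the 128 MB block size is an assumption based on the listing above):

```scala
// Number of HDFS blocks a file of the given size occupies,
// assuming a 128 MB block size as in the listing above.
def numBlocks(fileSizeMB: Double, blockSizeMB: Double = 128.0): Int =
  math.ceil(fileSizeMB / blockSizeMB).toInt

// Partition count for sc.textFile(path) with no explicit minPartitions,
// per the formula: max(blocks, defaultMinPartitions).
def textFilePartitions(blocks: Int, defaultMinPartitions: Int = 2): Int =
  math.max(blocks, defaultMinPartitions)

println(textFilePartitions(numBlocks(98.9)))   // test_1: max(1, 2) = 2
println(textFilePartitions(numBlocks(592.8)))  // test_2: max(5, 2) = 5
```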
The number of tasks in each Spark stage is determined by the partition count.