It does not depend on the Hive table partitions; it depends on which version of Spark you are using:
**For Spark < 2.0:**
***Using an RDD and then creating a DataFrame***
If you are creating an RDD, you can explicitly specify the number of partitions:
val rdd = sc.textFile("filepath", 4)
In the example above it is 4.
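A minimal sketch for Spark 1.x, assuming sc and sqlContext are the SparkContext and SQLContext already available (for example in spark-shell); the input path and column name are placeholders:

import sqlContext.implicits._

// Ask for (at least) 4 partitions when reading the file into an RDD.
val rdd = sc.textFile("hdfs:///data/input.txt", 4)

// The DataFrame built from this RDD keeps the parent RDD's partitioning.
val df = rdd.toDF("line")
println(df.rdd.partitions.length)  // 4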
***Directly creating a DataFrame***
Here the number of partitions depends on the Hadoop configuration (min/max split size).
You can use the Hadoop configuration options
mapred.min.split.size
mapred.max.split.size
as well as the HDFS block size to control the partition size for filesystem-based formats.
val minSplit: Int = ???  // desired minimum split size in bytes (placeholder)
val maxSplit: Int = ???  // desired maximum split size in bytes (placeholder)
sc.hadoopConfiguration.setInt("mapred.min.split.size", minSplit)
sc.hadoopConfiguration.setInt("mapred.max.split.size", maxSplit)
**For Spark >= 2.0:**
***Using an RDD and then creating a DataFrame***
Same as described for Spark < 2.0.
***Directly creating a DataFrame***
You can use the spark.sql.files.maxPartitionBytes configuration option:
spark.conf.set("spark.sql.files.maxPartitionBytes", maxSplit)  // maximum bytes packed into a single partition when reading files
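For instance, a sketch assuming Spark 2.x with an existing SparkSession named spark and a hypothetical Parquet dataset (the option's default is 128 MB):

// 64 MB per partition instead of the 128 MB default => more, smaller partitions.
spark.conf.set("spark.sql.files.maxPartitionBytes", 64L * 1024 * 1024)

val df = spark.read.parquet("hdfs:///data/events")
println(df.rdd.getNumPartitions)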
Also keep in mind:
Datasets created from an RDD inherit the number of partitions from their parent RDD.
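A quick sketch of that last point, again assuming a Spark 2.x SparkSession named spark:

import spark.implicits._

val rdd = spark.sparkContext.parallelize(1 to 1000, numSlices = 10)
val ds  = rdd.toDS()

println(rdd.getNumPartitions)     // 10
println(ds.rdd.getNumPartitions)  // 10 -- inherited from the parent RDD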