My Spark cluster runs in standalone mode.
I have already removed several jars from spark/jars that are incompatible with my Spring Boot jar, such as gson and servlet-api.
When I deploy the Spring Boot application to the cluster with spark-submit, I hit this error:
Servlet.service() for servlet [dispatcherServlet] in context with path [] threw exception [Request processing failed; nested exception is org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage 0.0 (TID 3, 10.10.10.53, executor 0): java.lang.ClassCastException: cannot assign instance of scala.collection.immutable.List$SerializationProxy to field org.apache.spark.rdd.RDD.org$apache$spark$rdd$RDD$$dependencies_ of type scala.collection.Seq in instance of org.apache.spark.rdd.MapPartitionsRDD
at java.io.ObjectStreamClass$FieldReflector.setObjFieldValues(ObjectStreamClass.java:2287)
at java.io.ObjectStreamClass.setObjFieldValues(ObjectStreamClass.java:1417)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2293)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2211)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2069)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2287)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2211)
...
My command:
bin/spark-submit \
--master spark://localhost:7077 \
path_to_jar/xxx.jar
My build.gradle:
dependencies {
    compile fileTree(dir: 'libs', include: ['*.jar'])
    compile('org.springframework.boot:spring-boot-starter-web:2.1.3.RELEASE') {
        exclude module: 'logback-classic'
        exclude module: 'slf4j-log4j12'
    }
    compile('org.springframework.boot:spring-boot-starter-thymeleaf:2.1.3.RELEASE') {
        exclude module: 'logback-classic'
        exclude module: 'slf4j-log4j12'
    }
    compile('org.springframework.boot:spring-boot-configuration-processor:2.1.3.RELEASE')
    compile('com.google.code.gson:gson:2.8.5')
    compileOnly(group: 'org.apache.hadoop', name: 'hadoop-common', version: '2.7.7') {
        exclude module: 'servlet-api'
    }
    compileOnly(group: 'org.apache.spark', name: 'spark-core_2.12', version: '2.4.0')
    compileOnly(group: 'org.apache.spark', name: 'spark-mllib_2.12', version: '2.4.0')
}
The SparkContext is autowired in the Spring Boot application.
SparkContextBean.java
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.autoconfigure.condition.ConditionalOnMissingBean;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class SparkContextBean {

    @Autowired
    private SparkProperties sparkProperties;

    @Bean
    @ConditionalOnMissingBean(SparkConf.class)
    public SparkConf sparkConf() {
        SparkConf conf = new SparkConf().setAppName(sparkProperties.getAppname());
        return conf;
    }

    @Bean
    @ConditionalOnMissingBean(JavaSparkContext.class)
    public JavaSparkContext javaSparkContext() throws Exception {
        return new JavaSparkContext(sparkConf());
    }
}
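SparkProperties is just a plain configuration holder; it is not shown here, so this is only a minimal sketch of what it looks like (the "spark" prefix and exact field names are assumptions):

import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.stereotype.Component;

// Hypothetical sketch: the real SparkProperties class is not shown in the question.
// Assumes a "spark.appname" property bound from application.properties.
@Component
@ConfigurationProperties(prefix = "spark")
public class SparkProperties {

    private String appname;

    public String getAppname() {
        return appname;
    }

    public void setAppname(String appname) {
        this.appname = appname;
    }
}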
The Spark code:
// hsidata is a JavaPairRDD<Integer, short[][]>
Tuple2<double[], double[]> mk = hsidata.mapToPair(pair -> {
    short[][] data = pair._2;
    return JTool.CalcMK(data);
}).reduce((right, left) -> {
    double[] mean = right._1;              // per-band accumulator, length = bands
    int bands = mean.length;
    double[] K = right._2;                 // packed upper-triangular accumulator
    int n = bands * (bands + 1) / 2;
    for (int i = 0; i < bands; i++)
        mean[i] = mean[i] + left._1[i];
    for (int i = 0; i < n; i++)
        K[i] = K[i] + left._2[i];
    return new Tuple2<>(mean, K);
});
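JTool.CalcMK is not shown either; from the reduce step it has to return a Tuple2 whose first element is a length-bands vector and whose second is a packed upper-triangular vector of length bands * (bands + 1) / 2. A hypothetical sketch of such a method, assuming a data[band][pixel] layout and raw (not yet normalized) sums:

import scala.Tuple2;

// Hypothetical reconstruction: the real JTool.CalcMK is not shown in the question.
// Assumes data[b][p] is the p-th sample of band b and returns raw per-band sums
// plus packed upper-triangular cross-product sums, which the reduce step above
// combines element-wise.
public class JTool {

    public static Tuple2<double[], double[]> CalcMK(short[][] data) {
        int bands = data.length;
        int pixels = data[0].length;
        double[] mean = new double[bands];                 // per-band sums
        double[] K = new double[bands * (bands + 1) / 2];  // packed upper triangle
        for (int p = 0; p < pixels; p++) {
            int idx = 0;
            for (int i = 0; i < bands; i++) {
                mean[i] += data[i][p];
                for (int j = i; j < bands; j++) {
                    K[idx++] += (double) data[i][p] * data[j][p];
                }
            }
        }
        return new Tuple2<>(mean, K);
    }
}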