My Scala code for Spark looks like this:
val input = sc.newAPIHadoopRDD(jconf, classOf[CqlInputFormat], classOf[LongWritable], classOf[Row])
The CqlInputFormat class is implemented in the Cassandra source code. I tried converting this call to Java, and it worked there, but the Scala version fails to compile:
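For context, the surrounding setup looks roughly like this; the keyspace, table, host, and port below are placeholder values, not my real configuration:

import com.datastax.driver.core.Row
import org.apache.cassandra.hadoop.ConfigHelper
import org.apache.cassandra.hadoop.cql3.{CqlConfigHelper, CqlInputFormat}
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.io.LongWritable
import org.apache.spark.{SparkConf, SparkContext}

val sc = new SparkContext(new SparkConf().setAppName("SparkReader"))

// Hadoop configuration handed to newAPIHadoopRDD; "ks", "table",
// the address, and the port are placeholders.
val jconf = new Configuration()
ConfigHelper.setInputInitialAddress(jconf, "127.0.0.1")
ConfigHelper.setInputRpcPort(jconf, "9160")
ConfigHelper.setInputColumnFamily(jconf, "ks", "table")
ConfigHelper.setInputPartitioner(jconf, "Murmur3Partitioner")
CqlConfigHelper.setInputCQLPageRowSize(jconf, "1000")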
[error] /home/project/past/experiments/query/SparkApp/src/main/scala/SparkReader.scala:46: inferred type arguments [org.apache.hadoop.io.LongWritable,com.datastax.driver.core.Row,org.apache.cassandra.hadoop.cql3.CqlInputFormat] do not conform to method newAPIHadoopRDD's type parameter bounds [K,V,F <: org.apache.hadoop.mapreduce.InputFormat[K,V]]
[error] val input = sc.newAPIHadoopRDD(jconf, classOf[CqlInputFormat], classOf[LongWritable], classOf[Row]);
[error] ^
[error] /home/project/past/experiments/query/SparkApp/src/main/scala/SparkReader.scala:46: type mismatch;
[error]  found   : Class[org.apache.cassandra.hadoop.cql3.CqlInputFormat](classOf[org.apache.cassandra.hadoop.cql3.CqlInputFormat])
[error]  required: Class[F]
[error] val input = sc.newAPIHadoopRDD(jconf, classOf[CqlInputFormat], classOf[LongWritable], classOf[Row]);
[error] ^
[error] /home/project/past/experiments/query/SparkApp/src/main/scala/SparkReader.scala:46: type mismatch;
[error]  found   : Class[org.apache.hadoop.io.LongWritable](classOf[org.apache.hadoop.io.LongWritable])
[error]  required: Class[K]
[error] val input = sc.newAPIHadoopRDD(jconf, classOf[CqlInputFormat], classOf[LongWritable], classOf[Row]);
[error] ^
[error] /home/project/past/experiments/query/SparkApp/src/main/scala/SparkReader.scala:46: type mismatch;
[error]  found   : Class[com.datastax.driver.core.Row](classOf[com.datastax.driver.core.Row])
[error]  required: Class[V]
[error] val input = sc.newAPIHadoopRDD(jconf, classOf[CqlInputFormat], classOf[LongWritable], classOf[Row]);
[error] ^
[error] four errors found
[error] (compile:compileIncremental) Compilation failed
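For reference, this is the signature of SparkContext.newAPIHadoopRDD that the compiler is checking against:

def newAPIHadoopRDD[K, V, F <: org.apache.hadoop.mapreduce.InputFormat[K, V]](
    conf: Configuration,
    fClass: Class[F],
    kClass: Class[K],
    vClass: Class[V]): RDD[(K, V)]

So to infer [K, V, F] the compiler has to prove CqlInputFormat <: org.apache.hadoop.mapreduce.InputFormat[LongWritable, Row], which is exactly what the first error says it cannot do.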
Any suggestions? Thanks.