Unable to run the StructuredNetworkCharacterCount example
0 votes
15 April 2020

Reference: https://github.com/dotnet/spark/blob/master/examples/Microsoft.Spark.CSharp.Examples/Sql/Streaming/StructuredNetworkCharacterCount.cs

https://docs.microsoft.com/en-us/dotnet/spark/tutorials/get-started

I am using Spark 2.4.1, Java 1.8, and .NET Core 3.1.

Here is my code:

using System;
using Microsoft.Spark.Sql;
using static Microsoft.Spark.Sql.Functions;

namespace mySparkApp
{
    class Program
    {
        static void Main(string[] args)
        {
            SparkSession spark = SparkSession
                .Builder()
                .AppName("Streaming example with a UDF")
                .GetOrCreate();

            string hostname = "localhost";
            int port = 9999;

            // Read a stream of lines from a socket source.
            DataFrame lines = spark
                .ReadStream()
                .Format("socket")
                .Option("host", hostname)
                .Option("port", port)
                .Load();

            // UDF that maps each input line to an array of two strings:
            // the line itself, and the line followed by its length.
            Func<Column, Column> udfArray =
                Udf<string, string[]>((str) => new string[] { str, $"{str} {str.Length}" });

            // Explode the array so that each element becomes its own row.
            DataFrame arrayDF = lines.Select(Explode(udfArray(lines["value"])));

            // Write every micro-batch to the console.
            Microsoft.Spark.Sql.Streaming.StreamingQuery query = arrayDF
                .WriteStream()
                .Format("console")
                .Start();

            query.AwaitTermination();
        }
    }
}
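
For reference, when this example runs as in the tutorial, the socket source expects something to be listening on localhost:9999 (for example a netcat session started with "nc -lk 9999", or an equivalent tool on Windows), and every line typed into that session should appear in a console batch roughly like the one below (shown for a hypothetical input line "Hello world", which is 11 characters long; the "col" column name matches the Explode output in the logical plan further down):

-------------------------------------------
Batch: 1
-------------------------------------------
+--------------+
|           col|
+--------------+
|   Hello world|
|Hello world 11|
+--------------+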

But the following error appears when I run the application:

D:\work\Tutorial\mySparkApp>dotnet build
Microsoft (R) Build Engine version 16.5.0+d4cbfca49 for .NET Core
Copyright (C) Microsoft Corporation. All rights reserved.

  Restore completed in 186.8 ms for D:\work\Tutorial\mySparkApp\mySparkApp.csproj.
  mySparkApp -> D:\work\Tutorial\mySparkApp\bin\Debug\netcoreapp3.1\mySparkApp.dll

Build succeeded.
    0 Warning(s)
    0 Error(s)

Time Elapsed 00:00:04.27

D:\work\Tutorial\mySparkApp>spark-submit --class org.apache.spark.deploy.dotnet.DotnetRunner --master local "D:\work\Tutorial\mySparkApp\bin\Debug\netcoreapp3.1\microsoft-spark-2.4.x-0.10.0.jar" dotnet "D:\work\Tutorial\mySparkApp\bin\Debug\netcoreapp3.1\mySparkApp.dll"
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.apache.spark.unsafe.Platform (file:/D:/work/spark-2.4.3-bin-hadoop2.7/jars/spark-unsafe_2.11-2.4.3.jar) to method java.nio.Bits.unaligned()
WARNING: Please consider reporting this to the maintainers of org.apache.spark.unsafe.Platform
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
20/04/15 15:03:54 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
20/04/15 15:03:54 INFO DotnetRunner: Starting DotnetBackend with dotnet.
20/04/15 15:03:55 INFO DotnetRunner: Port number used by DotnetBackend is 27290
20/04/15 15:03:55 INFO DotnetRunner: Adding key=spark.jars and value=file:/D:/work/Tutorial/mySparkApp/bin/Debug/netcoreapp3.1/microsoft-spark-2.4.x-0.10.0.jar to environment
20/04/15 15:03:55 INFO DotnetRunner: Adding key=spark.app.name and value=org.apache.spark.deploy.dotnet.DotnetRunner to environment
20/04/15 15:03:55 INFO DotnetRunner: Adding key=spark.submit.deployMode and value=client to environment
20/04/15 15:03:55 INFO DotnetRunner: Adding key=spark.master and value=local to environment
[2020-04-15T09:33:55.5949120Z] [BADSHAH] [Info] [ConfigurationService] Using port 27290 for connection.
[2020-04-15T09:33:55.6008631Z] [BADSHAH] [Info] [JvmBridge] JvMBridge port is 27290
20/04/15 15:03:55 INFO SparkContext: Running Spark version 2.4.3
20/04/15 15:03:55 INFO SparkContext: Submitted application: Streaming example with a UDF
20/04/15 15:03:55 INFO SecurityManager: Changing view acls to: ABC
20/04/15 15:03:55 INFO SecurityManager: Changing modify acls to: ABC
20/04/15 15:03:55 INFO SecurityManager: Changing view acls groups to:
20/04/15 15:03:55 INFO SecurityManager: Changing modify acls groups to:
20/04/15 15:03:55 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users  with view permissions: Set(ABC); groups with view permissions: Set(); users  with modify permissions: Set(ABC); groups with modify permissions: Set()
20/04/15 15:03:56 INFO Utils: Successfully started service 'sparkDriver' on port 27296.
20/04/15 15:03:56 INFO SparkEnv: Registering MapOutputTracker
20/04/15 15:03:56 INFO SparkEnv: Registering BlockManagerMaster
20/04/15 15:03:56 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
20/04/15 15:03:56 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
20/04/15 15:03:56 INFO DiskBlockManager: Created local directory at C:\Users\ABC\AppData\Local\Temp\blockmgr-53abec7f-0478-44bc-820e-902a65d545a4
20/04/15 15:03:56 INFO MemoryStore: MemoryStore started with capacity 434.4 MB
20/04/15 15:03:56 INFO SparkEnv: Registering OutputCommitCoordinator
20/04/15 15:03:56 INFO Utils: Successfully started service 'SparkUI' on port 4040.
20/04/15 15:03:56 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://BADSHAH:4040
20/04/15 15:03:56 INFO SparkContext: Added JAR file:/D:/work/Tutorial/mySparkApp/bin/Debug/netcoreapp3.1/microsoft-spark-2.4.x-0.10.0.jar at spark://BADSHAH:27296/jars/microsoft-spark-2.4.x-0.10.0.jar with timestamp 1586943236666
20/04/15 15:03:56 INFO Executor: Starting executor ID driver on host localhost
20/04/15 15:03:56 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 27305.
20/04/15 15:03:56 INFO NettyBlockTransferService: Server created on BADSHAH:27305
20/04/15 15:03:56 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
20/04/15 15:03:56 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, BADSHAH, 27305, None)
20/04/15 15:03:56 INFO BlockManagerMasterEndpoint: Registering block manager BADSHAH:27305 with 434.4 MB RAM, BlockManagerId(driver, BADSHAH, 27305, None)
20/04/15 15:03:56 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, BADSHAH, 27305, None)
20/04/15 15:03:56 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, BADSHAH, 27305, None)
20/04/15 15:03:57 INFO SharedState: Setting hive.metastore.warehouse.dir ('null') to the value of spark.sql.warehouse.dir ('file:/D:/work/Tutorial/mySparkApp/spark-warehouse').
20/04/15 15:03:57 INFO SharedState: Warehouse path is 'file:/D:/work/Tutorial/mySparkApp/spark-warehouse'.
20/04/15 15:03:57 INFO StateStoreCoordinatorRef: Registered StateStoreCoordinator endpoint
20/04/15 15:03:57 WARN TextSocketSourceProvider: The socket source should not be used for production applications! It does not support recovery.
20/04/15 15:04:00 INFO CheckpointFileManager: Writing atomically to file:/C:/Users/ABC/AppData/Local/Temp/temporary-21b149c2-dfc1-4208-818f-75dd66487ea4/metadata using temp file file:/C:/Users/ABC/AppData/Local/Temp/temporary-21b149c2-dfc1-4208-818f-75dd66487ea4/.metadata.e51cfe7a-073d-4636-be0b-94beb444a5c1.tmp
20/04/15 15:04:00 INFO CheckpointFileManager: Renamed temp file file:/C:/Users/ABC/AppData/Local/Temp/temporary-21b149c2-dfc1-4208-818f-75dd66487ea4/.metadata.e51cfe7a-073d-4636-be0b-94beb444a5c1.tmp to file:/C:/Users/ABC/AppData/Local/Temp/temporary-21b149c2-dfc1-4208-818f-75dd66487ea4/metadata
20/04/15 15:04:00 INFO MicroBatchExecution: Starting [id = 6ff17b52-19a0-471e-ac81-06e1e35816d4, runId = 6cf60012-ed0a-4702-b943-6164fd0cfcde]. Use file:///C:/Users/ABC/AppData/Local/Temp/temporary-21b149c2-dfc1-4208-818f-75dd66487ea4 to store the query checkpoint.
20/04/15 15:04:00 WARN TextSocketSourceProvider: The socket source should not be used for production applications! It does not support recovery.
20/04/15 15:04:00 INFO MicroBatchExecution: Using MicroBatchReader [TextSocketV2[host: localhost, port: 9999]] from DataSourceV2 named 'socket' [org.apache.spark.sql.execution.streaming.sources.TextSocketSourceProvider@2cf567bb]
20/04/15 15:04:00 INFO MicroBatchExecution: Starting new streaming query.
20/04/15 15:04:00 INFO MicroBatchExecution: Stream started from {}
20/04/15 15:04:00 INFO CheckpointFileManager: Writing atomically to file:/C:/Users/ABC/AppData/Local/Temp/temporary-21b149c2-dfc1-4208-818f-75dd66487ea4/offsets/0 using temp file file:/C:/Users/ABC/AppData/Local/Temp/temporary-21b149c2-dfc1-4208-818f-75dd66487ea4/offsets/.0.eb93c6d8-1b3e-40e5-852c-d28d012bccc3.tmp
20/04/15 15:04:00 INFO CheckpointFileManager: Renamed temp file file:/C:/Users/ABC/AppData/Local/Temp/temporary-21b149c2-dfc1-4208-818f-75dd66487ea4/offsets/.0.eb93c6d8-1b3e-40e5-852c-d28d012bccc3.tmp to file:/C:/Users/ABC/AppData/Local/Temp/temporary-21b149c2-dfc1-4208-818f-75dd66487ea4/offsets/0
20/04/15 15:04:00 INFO MicroBatchExecution: Committed offsets for batch 0. Metadata OffsetSeqMetadata(0,1586943240791,Map(spark.sql.streaming.stateStore.providerClass -> org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider, spark.sql.streaming.flatMapGroupsWithState.stateFormatVersion -> 2, spark.sql.streaming.multipleWatermarkPolicy -> min, spark.sql.streaming.aggregation.stateFormatVersion -> 2, spark.sql.shuffle.partitions -> 200))
20/04/15 15:04:01 INFO CodeGenerator: Code generated in 207.200924 ms
20/04/15 15:04:02 INFO CodeGenerator: Code generated in 38.865513 ms
20/04/15 15:04:02 INFO CodeGenerator: Code generated in 20.050364 ms
20/04/15 15:04:02 ERROR MicroBatchExecution: Query [id = 6ff17b52-19a0-471e-ac81-06e1e35816d4, runId = 6cf60012-ed0a-4702-b943-6164fd0cfcde] terminated with error
java.lang.IllegalArgumentException: Unsupported class file major version 57
        at org.apache.xbean.asm6.ClassReader.<init>(ClassReader.java:166)
        at org.apache.xbean.asm6.ClassReader.<init>(ClassReader.java:148)
        at org.apache.xbean.asm6.ClassReader.<init>(ClassReader.java:136)
        at org.apache.xbean.asm6.ClassReader.<init>(ClassReader.java:237)
        at org.apache.spark.util.ClosureCleaner$.getClassReader(ClosureCleaner.scala:49)
        at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:517)
        at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:500)
        at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
        at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:134)
        at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:134)
        at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:236)
        at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
        at scala.collection.mutable.HashMap$$anon$1.foreach(HashMap.scala:134)
        at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
        at org.apache.spark.util.FieldAccessFinder$$anon$3.visitMethodInsn(ClosureCleaner.scala:500)
        at org.apache.xbean.asm6.ClassReader.readCode(ClassReader.java:2175)
        at org.apache.xbean.asm6.ClassReader.readMethod(ClassReader.java:1238)
        at org.apache.xbean.asm6.ClassReader.accept(ClassReader.java:631)
        at org.apache.xbean.asm6.ClassReader.accept(ClassReader.java:355)
        at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:307)
        at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:306)
        at scala.collection.immutable.List.foreach(List.scala:392)
        at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:306)
        at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:162)
        at org.apache.spark.SparkContext.clean(SparkContext.scala:2326)
        at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1.apply(RDD.scala:798)
        at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1.apply(RDD.scala:797)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
        at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
        at org.apache.spark.rdd.RDD.mapPartitions(RDD.scala:797)
        at org.apache.spark.sql.execution.python.EvalPythonExec.doExecute(EvalPythonExec.scala:89)
        at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131)
        at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127)
        at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
        at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
        at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
        at org.apache.spark.sql.execution.GenerateExec.doExecute(GenerateExec.scala:80)
        at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131)
        at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127)
        at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
        at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
        at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
        at org.apache.spark.sql.execution.InputAdapter.inputRDDs(WholeStageCodegenExec.scala:391)
        at org.apache.spark.sql.execution.ProjectExec.inputRDDs(basicPhysicalOperators.scala:41)
        at org.apache.spark.sql.execution.WholeStageCodegenExec.doExecute(WholeStageCodegenExec.scala:627)
        at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131)
        at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127)
        at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
        at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
        at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
        at org.apache.spark.sql.execution.datasources.v2.WriteToDataSourceV2Exec.doExecute(WriteToDataSourceV2Exec.scala:57)
        at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131)
        at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127)
        at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
        at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
        at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
        at org.apache.spark.sql.execution.SparkPlan.getByteArrayRdd(SparkPlan.scala:247)
        at org.apache.spark.sql.execution.SparkPlan.executeCollect(SparkPlan.scala:296)
        at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$collectFromPlan(Dataset.scala:3383)
        at org.apache.spark.sql.Dataset$$anonfun$collect$1.apply(Dataset.scala:2782)
        at org.apache.spark.sql.Dataset$$anonfun$collect$1.apply(Dataset.scala:2782)
        at org.apache.spark.sql.Dataset$$anonfun$53.apply(Dataset.scala:3364)
        at org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1.apply(SQLExecution.scala:78)
        at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:125)
        at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:73)
        at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3363)
        at org.apache.spark.sql.Dataset.collect(Dataset.scala:2782)
        at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$org$apache$spark$sql$execution$streaming$MicroBatchExecution$$runBatch$5$$anonfun$apply$17.apply(MicroBatchExecution.scala:540)
        at org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1.apply(SQLExecution.scala:78)
        at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:125)
        at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:73)
        at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$org$apache$spark$sql$execution$streaming$MicroBatchExecution$$runBatch$5.apply(MicroBatchExecution.scala:535)
        at org.apache.spark.sql.execution.streaming.ProgressReporter$class.reportTimeTaken(ProgressReporter.scala:351)
        at org.apache.spark.sql.execution.streaming.StreamExecution.reportTimeTaken(StreamExecution.scala:58)
        at org.apache.spark.sql.execution.streaming.MicroBatchExecution.org$apache$spark$sql$execution$streaming$MicroBatchExecution$$runBatch(MicroBatchExecution.scala:534)
        at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$runActivatedStream$1$$anonfun$apply$mcZ$sp$1.apply$mcV$sp(MicroBatchExecution.scala:198)
        at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$runActivatedStream$1$$anonfun$apply$mcZ$sp$1.apply(MicroBatchExecution.scala:166)
        at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$runActivatedStream$1$$anonfun$apply$mcZ$sp$1.apply(MicroBatchExecution.scala:166)
        at org.apache.spark.sql.execution.streaming.ProgressReporter$class.reportTimeTaken(ProgressReporter.scala:351)
        at org.apache.spark.sql.execution.streaming.StreamExecution.reportTimeTaken(StreamExecution.scala:58)
        at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$runActivatedStream$1.apply$mcZ$sp(MicroBatchExecution.scala:166)
        at org.apache.spark.sql.execution.streaming.ProcessingTimeExecutor.execute(TriggerExecutor.scala:56)
        at org.apache.spark.sql.execution.streaming.MicroBatchExecution.runActivatedStream(MicroBatchExecution.scala:160)
        at org.apache.spark.sql.execution.streaming.StreamExecution.org$apache$spark$sql$execution$streaming$StreamExecution$$runStream(StreamExecution.scala:281)
        at org.apache.spark.sql.execution.streaming.StreamExecution$$anon$1.run(StreamExecution.scala:193)
20/04/15 15:04:02 ERROR DotnetBackendHandler: methods:
20/04/15 15:04:02 ERROR DotnetBackendHandler: public java.lang.String org.apache.spark.sql.execution.streaming.StreamingQueryWrapper.name()
20/04/15 15:04:02 ERROR DotnetBackendHandler: public java.util.UUID org.apache.spark.sql.execution.streaming.StreamingQueryWrapper.id()
20/04/15 15:04:02 ERROR DotnetBackendHandler: public org.apache.spark.sql.streaming.StreamingQueryStatus org.apache.spark.sql.execution.streaming.StreamingQueryWrapper.status()
20/04/15 15:04:02 ERROR DotnetBackendHandler: public scala.Option org.apache.spark.sql.execution.streaming.StreamingQueryWrapper.exception()
20/04/15 15:04:02 ERROR DotnetBackendHandler: public void org.apache.spark.sql.execution.streaming.StreamingQueryWrapper.stop()
20/04/15 15:04:02 ERROR DotnetBackendHandler: public boolean org.apache.spark.sql.execution.streaming.StreamingQueryWrapper.isActive()
20/04/15 15:04:02 ERROR DotnetBackendHandler: public void org.apache.spark.sql.execution.streaming.StreamingQueryWrapper.awaitTermination()
20/04/15 15:04:02 ERROR DotnetBackendHandler: public boolean org.apache.spark.sql.execution.streaming.StreamingQueryWrapper.awaitTermination(long)
20/04/15 15:04:02 ERROR DotnetBackendHandler: public void org.apache.spark.sql.execution.streaming.StreamingQueryWrapper.processAllAvailable()
20/04/15 15:04:02 ERROR DotnetBackendHandler: public org.apache.spark.sql.streaming.StreamingQueryProgress[] org.apache.spark.sql.execution.streaming.StreamingQueryWrapper.recentProgress()
20/04/15 15:04:02 ERROR DotnetBackendHandler: public org.apache.spark.sql.streaming.StreamingQueryProgress org.apache.spark.sql.execution.streaming.StreamingQueryWrapper.lastProgress()
20/04/15 15:04:02 ERROR DotnetBackendHandler: public java.lang.String org.apache.spark.sql.execution.streaming.StreamingQueryWrapper.explainInternal(boolean)
20/04/15 15:04:02 ERROR DotnetBackendHandler: public org.apache.spark.sql.SparkSession org.apache.spark.sql.execution.streaming.StreamingQueryWrapper.sparkSession()
20/04/15 15:04:02 ERROR DotnetBackendHandler: public void org.apache.spark.sql.execution.streaming.StreamingQueryWrapper.explain()
20/04/15 15:04:02 ERROR DotnetBackendHandler: public void org.apache.spark.sql.execution.streaming.StreamingQueryWrapper.explain(boolean)
20/04/15 15:04:02 ERROR DotnetBackendHandler: public org.apache.spark.sql.execution.streaming.StreamExecution org.apache.spark.sql.execution.streaming.StreamingQueryWrapper.streamingQuery()
20/04/15 15:04:02 ERROR DotnetBackendHandler: public java.util.UUID org.apache.spark.sql.execution.streaming.StreamingQueryWrapper.runId()
20/04/15 15:04:02 ERROR DotnetBackendHandler: public final native void java.lang.Object.wait(long) throws java.lang.InterruptedException
20/04/15 15:04:02 ERROR DotnetBackendHandler: public final void java.lang.Object.wait(long,int) throws java.lang.InterruptedException
20/04/15 15:04:02 ERROR DotnetBackendHandler: public final void java.lang.Object.wait() throws java.lang.InterruptedException
20/04/15 15:04:02 ERROR DotnetBackendHandler: public boolean java.lang.Object.equals(java.lang.Object)
20/04/15 15:04:02 ERROR DotnetBackendHandler: public java.lang.String java.lang.Object.toString()
20/04/15 15:04:02 ERROR DotnetBackendHandler: public native int java.lang.Object.hashCode()
20/04/15 15:04:02 ERROR DotnetBackendHandler: public final native java.lang.Class java.lang.Object.getClass()
20/04/15 15:04:02 ERROR DotnetBackendHandler: public final native void java.lang.Object.notify()
20/04/15 15:04:02 ERROR DotnetBackendHandler: public final native void java.lang.Object.notifyAll()
20/04/15 15:04:02 ERROR DotnetBackendHandler: args:
[2020-04-15T09:34:02.2640124Z] [BADSHAH] [Error] [JvmBridge] JVM method execution failed: Nonstatic method awaitTermination failed for class 23 when called with no arguments
[2020-04-15T09:34:02.2641137Z] [BADSHAH] [Error] [JvmBridge] org.apache.spark.sql.streaming.StreamingQueryException: Unsupported class file major version 57
=== Streaming Query ===
Identifier: [id = 6ff17b52-19a0-471e-ac81-06e1e35816d4, runId = 6cf60012-ed0a-4702-b943-6164fd0cfcde]
Current Committed Offsets: {}
Current Available Offsets: {TextSocketV2[host: localhost, port: 9999]: -1}

Current State: ACTIVE
Thread State: RUNNABLE

Logical Plan:
Project [col#3]
+- Generate explode(System.String[] <Main>b__0_0(System.String)(value#0)), false, [col#3]
   +- StreamingExecutionRelation TextSocketV2[host: localhost, port: 9999], [value#0]

        at org.apache.spark.sql.execution.streaming.StreamExecution.org$apache$spark$sql$execution$streaming$StreamExecution$$runStream(StreamExecution.scala:297)
        at org.apache.spark.sql.execution.streaming.StreamExecution$$anon$1.run(StreamExecution.scala:193)
Caused by: java.lang.IllegalArgumentException: Unsupported class file major version 57
        at org.apache.xbean.asm6.ClassReader.<init>(ClassReader.java:166)
        at org.apache.xbean.asm6.ClassReader.<init>(ClassReader.java:148)
        at org.apache.xbean.asm6.ClassReader.<init>(ClassReader.java:136)
        at org.apache.xbean.asm6.ClassReader.<init>(ClassReader.java:237)
        at org.apache.spark.util.ClosureCleaner$.getClassReader(ClosureCleaner.scala:49)
        at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:517)
        at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:500)
        at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
        at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:134)
        at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:134)
        at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:236)
        at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
        at scala.collection.mutable.HashMap$$anon$1.foreach(HashMap.scala:134)
        at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
        at org.apache.spark.util.FieldAccessFinder$$anon$3.visitMethodInsn(ClosureCleaner.scala:500)
        at org.apache.xbean.asm6.ClassReader.readCode(ClassReader.java:2175)
        at org.apache.xbean.asm6.ClassReader.readMethod(ClassReader.java:1238)
        at org.apache.xbean.asm6.ClassReader.accept(ClassReader.java:631)
        at org.apache.xbean.asm6.ClassReader.accept(ClassReader.java:355)
        at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:307)
        at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:306)
        at scala.collection.immutable.List.foreach(List.scala:392)

[2020-04-15T09:34:02.3262154Z] [BADSHAH] [Exception] [JvmBridge] JVM method execution failed: Nonstatic method awaitTermination failed for class 23 when called with no arguments
   at Microsoft.Spark.Interop.Ipc.JvmBridge.CallJavaMethod(Boolean isStatic, Object classNameOrJvmObjectReference, String methodName, Object[] args)
Unhandled exception. System.Exception: JVM method execution failed: Nonstatic method awaitTermination failed for class 23 when called with no arguments
   at Microsoft.Spark.Interop.Ipc.JvmBridge.CallJavaMethod(Boolean isStatic, Object classNameOrJvmObjectReference, String methodName, Object[] args)
   at Microsoft.Spark.Interop.Ipc.JvmBridge.CallNonStaticJavaMethod(JvmObjectReference objectId, String methodName, Object[] args)
   at Microsoft.Spark.Interop.Ipc.JvmObjectReference.Invoke(String methodName, Object[] args)
   at Microsoft.Spark.Sql.Streaming.StreamingQuery.AwaitTermination()
   at mySparkApp.Program.Main(String[] args) in D:\work\Tutorial\mySparkApp\Program.cs:line 35
20/04/15 15:04:02 INFO DotnetRunner: Closing DotnetBackend
20/04/15 15:04:02 INFO DotnetBackend: Requesting to close all call back sockets
20/04/15 15:04:02 INFO SparkContext: Invoking stop() from shutdown hook
20/04/15 15:04:02 INFO SparkUI: Stopped Spark web UI at http://BADSHAH:4040
20/04/15 15:04:02 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
20/04/15 15:04:02 INFO MemoryStore: MemoryStore cleared
20/04/15 15:04:02 INFO BlockManager: BlockManager stopped
20/04/15 15:04:02 INFO BlockManagerMaster: BlockManagerMaster stopped
20/04/15 15:04:02 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
20/04/15 15:04:02 INFO SparkContext: Successfully stopped SparkContext
20/04/15 15:04:02 INFO ShutdownHookManager: Shutdown hook called
20/04/15 15:04:02 INFO ShutdownHookManager: Deleting directory C:\Users\ABC\AppData\Local\Temp\spark-c13bbf20-d0eb-4d04-8d13-b754e2fda521
20/04/15 15:04:02 INFO ShutdownHookManager: Deleting directory C:\Users\ABC\AppData\Local\Temp\temporary-21b149c2-dfc1-4208-818f-75dd66487ea4
20/04/15 15:04:02 INFO ShutdownHookManager: Deleting directory C:\Users\ABC\AppData\Local\Temp\spark-5d71248e-3732-4a63-ae51-3358df8caf2b
20/04/15 15:04:02 INFO ShutdownHookManager: Deleting directory C:\Users\ABC\AppData\Local\Temp\temporaryReader-ae412f71-95a9-4741-a2be-cc3fd5f716df
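
From the stack trace, "Unsupported class file major version 57" would normally mean the JVM is reading Java 13 class files, while Spark 2.4.x is built for Java 8, so spark-submit may be resolving a different JDK than the 1.8 one mentioned above. The following standard Windows/JDK commands (not part of the tutorial, shown only as a way to verify the environment) should reveal which java is actually on the PATH and in JAVA_HOME:

D:\work\Tutorial\mySparkApp>java -version
D:\work\Tutorial\mySparkApp>where java
D:\work\Tutorial\mySparkApp>echo %JAVA_HOME%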