My PySpark jobs run fine in local mode, but fail in cluster mode - solved
0 votes
/ February 25, 2020

I have a four-node Hadoop/Spark cluster running in AWS. I can submit and run jobs just fine in local mode:

spark-submit --master local[*] myscript.py

But when I try to run a script in cluster mode, it fails. I'm just attempting the cluster equivalent of "hello world":

spark-submit spark-yarn.py

where the script is one that was recommended:

from pyspark import SparkConf
from pyspark import SparkContext

# Point the application at YARN instead of a local master
conf = SparkConf()
conf.setMaster('yarn')
conf.setAppName('spark-yarn')
sc = SparkContext(conf=conf)


def mod(x):
    # Import inside the function so the import also happens on the executors
    import numpy as np
    return (x, np.mod(x, 2))

# take() collects the first 10 results back to the driver as a plain Python list
rdd = sc.parallelize(range(1000)).map(mod).take(10)
print(rdd)
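(As I understand it, setting the master inside the script should be equivalent to passing it on the command line instead, i.e.:

spark-submit --master yarn spark-yarn.py

but I have been running it with the master set in the script as shown above.)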

I have spent days combing through every log I can find and reading everything I can online, but nothing has helped me get to the root cause of why it isn't working. Before I tear down all the servers and start from scratch, I'm hoping someone can point me in the right direction to get this working.

Here is the output in the terminal:

20/02/25 12:59:51 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
20/02/25 13:00:11 ERROR YarnClientSchedulerBackend: YARN application has exited unexpectedly with state FAILED! Check the YARN application logs for more details.
20/02/25 13:00:11 ERROR YarnClientSchedulerBackend: Diagnostics message: Application application_1582603840719_0002 failed 2 times due to AM Container for appattempt_1582603840719_0002_000002 exited with  exitCode: -103
Failing this attempt.Diagnostics: [2020-02-25 13:00:11.601]Container [pid=3124,containerID=container_1582603840719_0002_02_000001] is running beyond virtual memory limits. Current usage: 328.7 MB of 1 GB physical memory used; 2.2 GB of 2.1 GB virtual memory used. Killing container.
Dump of the process-tree for container_1582603840719_0002_02_000001 :
    |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
    |- 3128 3124 3124 3124 (java) 504 34 2359349248 83396 /usr/lib/jvm/java-8-openjdk-amd64/bin/java -server -Xmx512m -Djava.io.tmpdir=/tmp/hadoop-ubuntu/nm-local-dir/usercache/ubuntu/appcache/application_1582603840719_0002/container_1582603840719_0002_02_000001/tmp -Dspark.yarn.app.container.log.dir=/home/ubuntu/server/hadoop-2.9.2/logs/userlogs/application_1582603840719_0002/container_1582603840719_0002_02_000001 org.apache.spark.deploy.yarn.ExecutorLauncher --arg ip-172-31-7-96.ec2.internal:43275 --properties-file /tmp/hadoop-ubuntu/nm-local-dir/usercache/ubuntu/appcache/application_1582603840719_0002/container_1582603840719_0002_02_000001/__spark_conf__/__spark_conf__.properties 
    |- 3124 3122 3124 3124 (bash) 0 0 13635584 760 /bin/bash -c /usr/lib/jvm/java-8-openjdk-amd64/bin/java -server -Xmx512m -Djava.io.tmpdir=/tmp/hadoop-ubuntu/nm-local-dir/usercache/ubuntu/appcache/application_1582603840719_0002/container_1582603840719_0002_02_000001/tmp -Dspark.yarn.app.container.log.dir=/home/ubuntu/server/hadoop-2.9.2/logs/userlogs/application_1582603840719_0002/container_1582603840719_0002_02_000001 org.apache.spark.deploy.yarn.ExecutorLauncher --arg 'ip-172-31-7-96.ec2.internal:43275' --properties-file /tmp/hadoop-ubuntu/nm-local-dir/usercache/ubuntu/appcache/application_1582603840719_0002/container_1582603840719_0002_02_000001/__spark_conf__/__spark_conf__.properties 1> /home/ubuntu/server/hadoop-2.9.2/logs/userlogs/application_1582603840719_0002/container_1582603840719_0002_02_000001/stdout 2> /home/ubuntu/server/hadoop-2.9.2/logs/userlogs/application_1582603840719_0002/container_1582603840719_0002_02_000001/stderr 

[2020-02-25 13:00:11.651]Container killed on request. Exit code is 143
[2020-02-25 13:00:11.658]Container exited with a non-zero exit code 143. 
For more detailed output, check the application tracking page: http://ec2-34-200-223-235.compute-1.amazonaws.com:8088/cluster/app/application_1582603840719_0002 Then click on links to logs of each attempt.
. Failing the application.
20/02/25 13:00:11 ERROR TransportClient: Failed to send RPC RPC 6867152665638655473 to /172.31.9.94:57526: java.nio.channels.ClosedChannelException
java.nio.channels.ClosedChannelException
    at io.netty.channel.AbstractChannel$AbstractUnsafe.write(...)(Unknown Source)
20/02/25 13:00:11 ERROR YarnSchedulerBackend$YarnSchedulerEndpoint: Sending RequestExecutors(0,0,Map(),Set()) to AM was unsuccessful
java.io.IOException: Failed to send RPC RPC 6867152665638655473 to /172.31.9.94:57526: java.nio.channels.ClosedChannelException
    at org.apache.spark.network.client.TransportClient$RpcChannelListener.handleFailure(TransportClient.java:362)
    at org.apache.spark.network.client.TransportClient$StdChannelListener.operationComplete(TransportClient.java:339)
    at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:507)
    at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:481)
    at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:420)
    at io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:122)
    at io.netty.channel.AbstractChannel$AbstractUnsafe.safeSetFailure(AbstractChannel.java:987)
    at io.netty.channel.AbstractChannel$AbstractUnsafe.write(AbstractChannel.java:869)
    at io.netty.channel.DefaultChannelPipeline$HeadContext.write(DefaultChannelPipeline.java:1316)
    at io.netty.channel.AbstractChannelHandlerContext.invokeWrite0(AbstractChannelHandlerContext.java:738)
    at io.netty.channel.AbstractChannelHandlerContext.invokeWrite(AbstractChannelHandlerContext.java:730)
    at io.netty.channel.AbstractChannelHandlerContext.access$1900(AbstractChannelHandlerContext.java:38)
    at io.netty.channel.AbstractChannelHandlerContext$AbstractWriteTask.write(AbstractChannelHandlerContext.java:1081)
    at io.netty.channel.AbstractChannelHandlerContext$WriteAndFlushTask.write(AbstractChannelHandlerContext.java:1128)
    at io.netty.channel.AbstractChannelHandlerContext$AbstractWriteTask.run(AbstractChannelHandlerContext.java:1070)
    at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
    at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:403)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:463)
    at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.nio.channels.ClosedChannelException
    at io.netty.channel.AbstractChannel$AbstractUnsafe.write(...)(Unknown Source)
20/02/25 13:00:11 ERROR Utils: Uncaught exception in thread YARN application state monitor
org.apache.spark.SparkException: Exception thrown in awaitResult: 
    at org.apache.spark.util.ThreadUtils$.awaitResult(ThreadUtils.scala:226)
    at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:75)
    at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend.requestTotalExecutors(CoarseGrainedSchedulerBackend.scala:574)
    at org.apache.spark.scheduler.cluster.YarnSchedulerBackend.stop(YarnSchedulerBackend.scala:98)
    at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.stop(YarnClientSchedulerBackend.scala:164)
    at org.apache.spark.scheduler.TaskSchedulerImpl.stop(TaskSchedulerImpl.scala:653)
    at org.apache.spark.scheduler.DAGScheduler.stop(DAGScheduler.scala:2042)
    at org.apache.spark.SparkContext$$anonfun$stop$6.apply$mcV$sp(SparkContext.scala:1949)
    at org.apache.spark.util.Utils$.tryLogNonFatalError(Utils.scala:1340)
    at org.apache.spark.SparkContext.stop(SparkContext.scala:1948)
    at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend$MonitorThread.run(YarnClientSchedulerBackend.scala:121)
Caused by: java.io.IOException: Failed to send RPC RPC 6867152665638655473 to /172.31.9.94:57526: java.nio.channels.ClosedChannelException
    at org.apache.spark.network.client.TransportClient$RpcChannelListener.handleFailure(TransportClient.java:362)
    at org.apache.spark.network.client.TransportClient$StdChannelListener.operationComplete(TransportClient.java:339)
    at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:507)
    at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:481)
    at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:420)
    at io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:122)
    at io.netty.channel.AbstractChannel$AbstractUnsafe.safeSetFailure(AbstractChannel.java:987)
    at io.netty.channel.AbstractChannel$AbstractUnsafe.write(AbstractChannel.java:869)
    at io.netty.channel.DefaultChannelPipeline$HeadContext.write(DefaultChannelPipeline.java:1316)
    at io.netty.channel.AbstractChannelHandlerContext.invokeWrite0(AbstractChannelHandlerContext.java:738)
    at io.netty.channel.AbstractChannelHandlerContext.invokeWrite(AbstractChannelHandlerContext.java:730)
    at io.netty.channel.AbstractChannelHandlerContext.access$1900(AbstractChannelHandlerContext.java:38)
    at io.netty.channel.AbstractChannelHandlerContext$AbstractWriteTask.write(AbstractChannelHandlerContext.java:1081)
    at io.netty.channel.AbstractChannelHandlerContext$WriteAndFlushTask.write(AbstractChannelHandlerContext.java:1128)
    at io.netty.channel.AbstractChannelHandlerContext$AbstractWriteTask.run(AbstractChannelHandlerContext.java:1070)
    at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
    at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:403)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:463)
    at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.nio.channels.ClosedChannelException
    at io.netty.channel.AbstractChannel$AbstractUnsafe.write(...)(Unknown Source)
20/02/25 13:00:12 ERROR SparkContext: Error initializing SparkContext.
java.lang.IllegalStateException: Spark context stopped while waiting for backend
    at org.apache.spark.scheduler.TaskSchedulerImpl.waitBackendReady(TaskSchedulerImpl.scala:818)
    at org.apache.spark.scheduler.TaskSchedulerImpl.postStartHook(TaskSchedulerImpl.scala:196)
    at org.apache.spark.SparkContext.<init>(SparkContext.scala:560)
    at org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:58)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:247)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
    at py4j.Gateway.invoke(Gateway.java:238)
    at py4j.commands.ConstructorCommand.invokeConstructor(ConstructorCommand.java:80)
    at py4j.commands.ConstructorCommand.execute(ConstructorCommand.java:69)
    at py4j.GatewayConnection.run(GatewayConnection.java:238)
    at java.lang.Thread.run(Thread.java:748)
Traceback (most recent call last):
  File "/home/ubuntu/server/spark-yarn.py", line 7, in <module>
    sc = SparkContext(conf=conf)
  File "/home/ubuntu/server/spark-2.4.4-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/context.py", line 136, in __init__
  File "/home/ubuntu/server/spark-2.4.4-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/context.py", line 198, in _do_init
  File "/home/ubuntu/server/spark-2.4.4-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/context.py", line 306, in _initialize_context
  File "/home/ubuntu/server/spark-2.4.4-bin-hadoop2.7/python/lib/py4j-0.10.7-src.zip/py4j/java_gateway.py", line 1525, in __call__
  File "/home/ubuntu/server/spark-2.4.4-bin-hadoop2.7/python/lib/py4j-0.10.7-src.zip/py4j/protocol.py", line 328, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling None.org.apache.spark.api.java.JavaSparkContext.
: java.lang.IllegalStateException: Spark context stopped while waiting for backend
    at org.apache.spark.scheduler.TaskSchedulerImpl.waitBackendReady(TaskSchedulerImpl.scala:818)
    at org.apache.spark.scheduler.TaskSchedulerImpl.postStartHook(TaskSchedulerImpl.scala:196)
    at org.apache.spark.SparkContext.<init>(SparkContext.scala:560)
    at org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:58)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:247)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
    at py4j.Gateway.invoke(Gateway.java:238)
    at py4j.commands.ConstructorCommand.invokeConstructor(ConstructorCommand.java:80)
    at py4j.commands.ConstructorCommand.execute(ConstructorCommand.java:69)
    at py4j.GatewayConnection.run(GatewayConnection.java:238)
    at java.lang.Thread.run(Thread.java:748)

If I replace 'yarn' with 'yarn-client' as the master, it gives a slightly different error:

20/02/25 13:07:31 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
20/02/25 13:07:46 ERROR TransportClient: Failed to send RPC RPC 5381013595535555066 to /172.31.5.228:39748: java.nio.channels.ClosedChannelException
java.nio.channels.ClosedChannelException
    at io.netty.channel.AbstractChannel$AbstractUnsafe.write(...)(Unknown Source)
20/02/25 13:07:46 ERROR YarnScheduler: Lost executor 1 on ip-172-31-5-228.ec2.internal: Slave lost
20/02/25 13:07:51 ERROR YarnClientSchedulerBackend: YARN application has exited unexpectedly with state FAILED! Check the YARN application logs for more details.
20/02/25 13:07:51 ERROR YarnClientSchedulerBackend: Diagnostics message: Application application_1582603840719_0003 failed 2 times due to AM Container for appattempt_1582603840719_0003_000002 exited with  exitCode: -103
Failing this attempt.Diagnostics: [2020-02-25 13:07:51.067]Container [pid=3223,containerID=container_1582603840719_0003_02_000001] is running beyond virtual memory limits. Current usage: 320.8 MB of 1 GB physical memory used; 2.2 GB of 2.1 GB virtual memory used. Killing container.
Dump of the process-tree for container_1582603840719_0003_02_000001 :
    |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
    |- 3227 3223 3223 3223 (java) 489 32 2355855360 81352 /usr/lib/jvm/java-8-openjdk-amd64/bin/java -server -Xmx512m -Djava.io.tmpdir=/tmp/hadoop-ubuntu/nm-local-dir/usercache/ubuntu/appcache/application_1582603840719_0003/container_1582603840719_0003_02_000001/tmp -Dspark.yarn.app.container.log.dir=/home/ubuntu/server/hadoop-2.9.2/logs/userlogs/application_1582603840719_0003/container_1582603840719_0003_02_000001 org.apache.spark.deploy.yarn.ExecutorLauncher --arg ip-172-31-7-96.ec2.internal:40963 --properties-file /tmp/hadoop-ubuntu/nm-local-dir/usercache/ubuntu/appcache/application_1582603840719_0003/container_1582603840719_0003_02_000001/__spark_conf__/__spark_conf__.properties 
    |- 3223 3221 3223 3223 (bash) 0 0 13635584 767 /bin/bash -c /usr/lib/jvm/java-8-openjdk-amd64/bin/java -server -Xmx512m -Djava.io.tmpdir=/tmp/hadoop-ubuntu/nm-local-dir/usercache/ubuntu/appcache/application_1582603840719_0003/container_1582603840719_0003_02_000001/tmp -Dspark.yarn.app.container.log.dir=/home/ubuntu/server/hadoop-2.9.2/logs/userlogs/application_1582603840719_0003/container_1582603840719_0003_02_000001 org.apache.spark.deploy.yarn.ExecutorLauncher --arg 'ip-172-31-7-96.ec2.internal:40963' --properties-file /tmp/hadoop-ubuntu/nm-local-dir/usercache/ubuntu/appcache/application_1582603840719_0003/container_1582603840719_0003_02_000001/__spark_conf__/__spark_conf__.properties 1> /home/ubuntu/server/hadoop-2.9.2/logs/userlogs/application_1582603840719_0003/container_1582603840719_0003_02_000001/stdout 2> /home/ubuntu/server/hadoop-2.9.2/logs/userlogs/application_1582603840719_0003/container_1582603840719_0003_02_000001/stderr 

[2020-02-25 13:07:51.089]Container killed on request. Exit code is 143
[2020-02-25 13:07:51.090]Container exited with a non-zero exit code 143. 
For more detailed output, check the application tracking page: http://ec2-34-200-223-235.compute-1.amazonaws.com:8088/cluster/app/application_1582603840719_0003 Then click on links to logs of each attempt.
. Failing the application.
20/02/25 13:07:51 ERROR SparkContext: Error initializing SparkContext.
java.lang.IllegalStateException: Spark context stopped while waiting for backend
    at org.apache.spark.scheduler.TaskSchedulerImpl.waitBackendReady(TaskSchedulerImpl.scala:818)
    at org.apache.spark.scheduler.TaskSchedulerImpl.postStartHook(TaskSchedulerImpl.scala:196)
    at org.apache.spark.SparkContext.<init>(SparkContext.scala:560)
    at org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:58)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:247)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
    at py4j.Gateway.invoke(Gateway.java:238)
    at py4j.commands.ConstructorCommand.invokeConstructor(ConstructorCommand.java:80)
    at py4j.commands.ConstructorCommand.execute(ConstructorCommand.java:69)
    at py4j.GatewayConnection.run(GatewayConnection.java:238)
    at java.lang.Thread.run(Thread.java:748)
20/02/25 13:07:52 ERROR TransportClient: Failed to send RPC RPC 8397804982944513692 to /172.31.0.102:39468: java.nio.channels.ClosedChannelException
java.nio.channels.ClosedChannelException
    at io.netty.channel.AbstractChannel$AbstractUnsafe.write(...)(Unknown Source)
20/02/25 13:07:52 ERROR YarnSchedulerBackend$YarnSchedulerEndpoint: Sending RequestExecutors(0,0,Map(),Set()) to AM was unsuccessful
java.io.IOException: Failed to send RPC RPC 8397804982944513692 to /172.31.0.102:39468: java.nio.channels.ClosedChannelException
    at org.apache.spark.network.client.TransportClient$RpcChannelListener.handleFailure(TransportClient.java:362)
    at org.apache.spark.network.client.TransportClient$StdChannelListener.operationComplete(TransportClient.java:339)
    at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:507)
    at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:481)
    at io.netty.util.concurrent.DefaultPromise.access$000(DefaultPromise.java:34)
    at io.netty.util.concurrent.DefaultPromise$1.run(DefaultPromise.java:431)
    at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
    at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:403)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:463)
    at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.nio.channels.ClosedChannelException
    at io.netty.channel.AbstractChannel$AbstractUnsafe.write(...)(Unknown Source)
20/02/25 13:07:52 ERROR Utils: Uncaught exception in thread YARN application state monitor
org.apache.spark.SparkException: Exception thrown in awaitResult: 
    at org.apache.spark.util.ThreadUtils$.awaitResult(ThreadUtils.scala:226)
    at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:75)
    at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend.requestTotalExecutors(CoarseGrainedSchedulerBackend.scala:574)
    at org.apache.spark.scheduler.cluster.YarnSchedulerBackend.stop(YarnSchedulerBackend.scala:98)
    at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.stop(YarnClientSchedulerBackend.scala:164)
    at org.apache.spark.scheduler.TaskSchedulerImpl.stop(TaskSchedulerImpl.scala:653)
    at org.apache.spark.scheduler.DAGScheduler.stop(DAGScheduler.scala:2042)
    at org.apache.spark.SparkContext$$anonfun$stop$6.apply$mcV$sp(SparkContext.scala:1949)
    at org.apache.spark.util.Utils$.tryLogNonFatalError(Utils.scala:1340)
    at org.apache.spark.SparkContext.stop(SparkContext.scala:1948)
    at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend$MonitorThread.run(YarnClientSchedulerBackend.scala:121)
Caused by: java.io.IOException: Failed to send RPC RPC 8397804982944513692 to /172.31.0.102:39468: java.nio.channels.ClosedChannelException
    at org.apache.spark.network.client.TransportClient$RpcChannelListener.handleFailure(TransportClient.java:362)
    at org.apache.spark.network.client.TransportClient$StdChannelListener.operationComplete(TransportClient.java:339)
    at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:507)
    at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:481)
    at io.netty.util.concurrent.DefaultPromise.access$000(DefaultPromise.java:34)
    at io.netty.util.concurrent.DefaultPromise$1.run(DefaultPromise.java:431)
    at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
    at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:403)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:463)
    at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.nio.channels.ClosedChannelException
    at io.netty.channel.AbstractChannel$AbstractUnsafe.write(...)(Unknown Source)
Traceback (most recent call last):
  File "/home/ubuntu/server/spark-yarn.py", line 7, in <module>
    sc = SparkContext(conf=conf)
  File "/home/ubuntu/server/spark-2.4.4-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/context.py", line 136, in __init__
  File "/home/ubuntu/server/spark-2.4.4-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/context.py", line 198, in _do_init
  File "/home/ubuntu/server/spark-2.4.4-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/context.py", line 306, in _initialize_context
  File "/home/ubuntu/server/spark-2.4.4-bin-hadoop2.7/python/lib/py4j-0.10.7-src.zip/py4j/java_gateway.py", line 1525, in __call__
  File "/home/ubuntu/server/spark-2.4.4-bin-hadoop2.7/python/lib/py4j-0.10.7-src.zip/py4j/protocol.py", line 328, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling None.org.apache.spark.api.java.JavaSparkContext.
: java.lang.IllegalStateException: Spark context stopped while waiting for backend
    at org.apache.spark.scheduler.TaskSchedulerImpl.waitBackendReady(TaskSchedulerImpl.scala:818)
    at org.apache.spark.scheduler.TaskSchedulerImpl.postStartHook(TaskSchedulerImpl.scala:196)
    at org.apache.spark.SparkContext.<init>(SparkContext.scala:560)
    at org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:58)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:247)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
    at py4j.Gateway.invoke(Gateway.java:238)
    at py4j.commands.ConstructorCommand.invokeConstructor(ConstructorCommand.java:80)
    at py4j.commands.ConstructorCommand.execute(ConstructorCommand.java:69)
    at py4j.GatewayConnection.run(GatewayConnection.java:238)
    at java.lang.Thread.run(Thread.java:748)

It mentions checking the logs at:

http://ec2-34-200-223-235.compute-1.amazonaws.com:8088/cluster/app/application_1582603840719_0003

But clicking any of the log links on that page gives an error:

Firefox can’t establish a connection to the server at ip-172-31-0-102.ec2.internal:8042.

(That is probably unrelated.)
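I assume the link failure itself is just because the links point at private EC2 hostnames that only resolve inside the VPC; if so, something like an SSH tunnel through the master node should reach that NodeManager UI (an untested sketch, substituting my own hosts):

ssh -L 8042:ip-172-31-0-102.ec2.internal:8042 ubuntu@ec2-34-200-223-235.compute-1.amazonaws.com

and then browsing to http://localhost:8042.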

Grepping the logs for warnings, I see the following:

2020-02-25 13:07:38,904 WARN org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: The specific max attempts: 0 for application: 3 is invalid, because it is out of the range [1, 2]. Use the global max attempts instead.
2020-02-25 13:07:51,241 WARN org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=ubuntu OPERATION=Application Finished - Failed TARGET=RMAppManager RESULT=FAILURE  DESCRIPTION=App failed with state: FAILED   PERMISSIONS=Application application_1582603840719_0003 failed 2 times due to AM Container for appattempt_1582603840719_0003_000002 exited with  exitCode: -103
2020-02-25 13:07:40,367 WARN org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: Checkpoint done. New Image Size: 12086

No errors were generated.

Listing the YARN applications from the command line does show me the jobs:

ubuntu@ip-172-31-7-96:~/server$ yarn application -list -appStates ALL
20/02/25 13:31:44 INFO client.RMProxy: Connecting to ResourceManager at ec2-34-200-223-235.compute-1.amazonaws.com/172.31.7.96:8032
Total number of applications (application-types: [], states: [NEW, NEW_SAVING, SUBMITTED, ACCEPTED, RUNNING, FINISHED, FAILED, KILLED] and tags: []):3
                Application-Id      Application-Name        Application-Type          User       Queue               State         Final-State         Progress                        Tracking-URL
application_1582603840719_0001            spark-yarn                   SPARK        ubuntu     default            FINISHED           UNDEFINED             100%                                 N/A

But requesting the logs fails:

ubuntu@ip-172-31-7-96:~/server$ yarn logs -applicationId application_1582603840719_0001
20/02/25 13:32:48 INFO client.RMProxy: Connecting to ResourceManager at ec2-34-200-223-235.compute-1.amazonaws.com/172.31.7.96:8032
fs.AbstractFileSystem.ec2-34-200-223-235.compute-1.amazonaws.com.impl=null: No AbstractFileSystem configured for scheme: ec2-34-200-223-235.compute-1.amazonaws.com

Can not find any log file matching the pattern: [ALL] for the application: application_1582603840719_0001
Can not find the logs for the application: application_1582603840719_0001 with the appOwner: ubuntu
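I suspect yarn logs can only fetch logs when log aggregation is enabled, which I don't believe I have configured. If that's right, it would presumably need something like the following in yarn-site.xml on all nodes (an untested guess on my part):

  <property>
    <name>yarn.log-aggregation-enable</name>
    <value>true</value>
  </property>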

Again, if someone could point me toward the next troubleshooting steps, I would be very grateful. I have spent days on this and don't seem to be making any progress.

1 Answer

0 votes
/ February 26, 2020

Two changes ended up resolving this issue:

First, I added the following properties to the yarn-site.xml file on all of the nodes:

  <property>
    <name>yarn.nodemanager.pmem-check-enabled</name>
    <value>false</value>
  </property>
  <property>
    <name>yarn.nodemanager.vmem-check-enabled</name>
    <value>false</value>
  </property>
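(For what it's worth, disabling these checks is a blunt instrument. An alternative I did not try would be to leave the checks enabled and raise the virtual-to-physical memory ratio instead, for example:

  <property>
    <name>yarn.nodemanager.vmem-pmem-ratio</name>
    <value>4</value>
  </property>

The default ratio is 2.1, which is exactly the "2.2 GB of 2.1 GB virtual memory used" limit the diagnostics above complain about.)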

Next, I changed my spark-submit command, adding the following options to give the driver and executors more memory:

spark-submit --master yarn \
--deploy-mode client \
--driver-memory 6g \
--executor-memory 6g \
--executor-cores 2 \
--num-executors 10 \
my_app.py
...
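One caveat: in client mode the driver JVM has already started by the time the script runs, so driver memory cannot be raised via SparkConf inside the script; it has to go on the command line as above, or into spark-defaults.conf. A rough spark-defaults.conf equivalent of the settings above (the 6g/2/10 values are simply what worked for me, not tuned recommendations) would be:

spark.master              yarn
spark.driver.memory       6g
spark.executor.memory     6g
spark.executor.cores      2
spark.executor.instances  10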