Spark worker containers exit with "Command exited with code 1" [cannot connect back to the local machine]
0 votes
/ May 27, 2019

I have a Spark cluster set up in Docker, using openjdk:8-alpine, and I am running it locally. The local machine name is DESKTOP-PCH5L6D.

I can submit jobs, but the error I see in the worker Docker container is: Caused by: java.net.UnknownHostException: DESKTOP-PCH5L6D

The local machine runs Windows 10 Pro.

I am new to Docker and would appreciate help solving this. The docker-compose file below is used to create the local Spark cluster:

version: "3.7"
services:
  spark-master:
    image: cmishr4/spark:latest
    container_name: spark-master
    hostname: spark-master
    ports:
      - "8080:8080"
      - "7077:7077"
    volumes:
      - ./../share:/share
    privileged: true
    networks:
      - spark-network
    environment:
      - "SPARK_MASTER_PORT=7077"
      - "SPARK_MASTER_WEBUI_PORT=8080"
    command: "sh start-master.sh"
  spark-worker:
    image: cmishr4/spark:latest
    depends_on:
      - spark-master
    ports:
      - 8080
    volumes:
      - ./../share:/share
    privileged: true
    networks:
      - spark-network
    environment:
      - "SPARK_MASTER=spark://spark-master:7077"
      - "SPARK_WORKER_WEBUI_PORT=8080"
    command: "sh start-worker.sh"
networks:
  spark-network:
    driver: bridge
    ipam:
      driver: default

The main method of the Scala class:

import org.apache.spark.sql.{Row, SparkSession}
import org.apache.spark.sql.types.{IntegerType, StringType, StructField, StructType}

val spark = SparkSession.builder
  .appName("sample")
  .master("spark://localhost:7077")
  .config("spark.executor.cores", "1")
  .config("spark.executor.memory", "1g")
  .getOrCreate()

// Read the CSV files from the shared folder and split each line into fields.
val csv = spark.sparkContext.textFile("C:/Users/chand/wrkspcs/sparkws/share/input")
val rows = csv.map(line => line.split(",").map(_.trim))

// Drop the header row, then build a Row of (name, age) per record.
val header = rows.first
val data = rows.filter(_(0) != header(0))
val rdd = data.map(row => Row(row(0), row(1).toInt))

val schema = new StructType()
  .add(StructField("name", StringType, true))
  .add(StructField("age", IntegerType, true))

val df = spark.createDataFrame(rdd, schema)
spark.stop()
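
Note that .master("spark://localhost:7077") only covers the forward connection: the driver reaches the master through the 7077 port published by the spark-master service. The fragile part is the reverse connection, where each executor must connect back to the driver running on the Windows host, as the log below shows.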

Log from the main class:

Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
19/05/27 02:25:55 INFO SparkContext: Running Spark version 2.4.3
19/05/27 02:25:55 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
19/05/27 02:25:55 INFO SparkContext: Submitted application: sample
19/05/27 02:25:56 INFO SecurityManager: Changing view acls to: chand
19/05/27 02:25:56 INFO SecurityManager: Changing modify acls to: chand
19/05/27 02:25:56 INFO SecurityManager: Changing view acls groups to: 
19/05/27 02:25:56 INFO SecurityManager: Changing modify acls groups to: 
19/05/27 02:25:56 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users  with view permissions: Set(chand); groups with view permissions: Set(); users  with modify permissions: Set(chand); groups with modify permissions: Set()
19/05/27 02:25:57 INFO Utils: Successfully started service 'sparkDriver' on port 53511.
19/05/27 02:25:57 INFO SparkEnv: Registering MapOutputTracker
19/05/27 02:25:57 INFO SparkEnv: Registering BlockManagerMaster
19/05/27 02:25:57 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
19/05/27 02:25:57 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
19/05/27 02:25:57 INFO DiskBlockManager: Created local directory at C:\Users\chand\AppData\Local\Temp\blockmgr-c4f6d60f-c867-4c52-895d-111030c92193
19/05/27 02:25:57 INFO MemoryStore: MemoryStore started with capacity 4.1 GB
19/05/27 02:25:57 INFO SparkEnv: Registering OutputCommitCoordinator
19/05/27 02:25:57 INFO Utils: Successfully started service 'SparkUI' on port 4040.
19/05/27 02:25:57 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://DESKTOP-PCH5L6D:4040
19/05/27 02:25:57 INFO StandaloneAppClient$ClientEndpoint: Connecting to master spark://localhost:7077...
19/05/27 02:25:57 INFO TransportClientFactory: Successfully created connection to localhost/127.0.0.1:7077 after 27 ms (0 ms spent in bootstraps)
19/05/27 02:25:57 INFO StandaloneSchedulerBackend: Connected to Spark cluster with app ID app-20190527062557-0000
19/05/27 02:25:57 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 53553.
19/05/27 02:25:57 INFO NettyBlockTransferService: Server created on DESKTOP-PCH5L6D:53553
19/05/27 02:25:57 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
19/05/27 02:25:57 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, DESKTOP-PCH5L6D, 53553, None)
19/05/27 02:25:57 INFO StandaloneAppClient$ClientEndpoint: Executor added: app-20190527062557-0000/0 on worker-20190527062533-172.18.0.3-44351 (172.18.0.3:44351) with 1 core(s)
19/05/27 02:25:57 INFO StandaloneSchedulerBackend: Granted executor ID app-20190527062557-0000/0 on hostPort 172.18.0.3:44351 with 1 core(s), 1024.0 MB RAM
19/05/27 02:25:57 INFO BlockManagerMasterEndpoint: Registering block manager DESKTOP-PCH5L6D:53553 with 4.1 GB RAM, BlockManagerId(driver, DESKTOP-PCH5L6D, 53553, None)
19/05/27 02:25:57 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, DESKTOP-PCH5L6D, 53553, None)
19/05/27 02:25:57 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, DESKTOP-PCH5L6D, 53553, None)
19/05/27 02:25:57 INFO StandaloneAppClient$ClientEndpoint: Executor updated: app-20190527062557-0000/0 is now RUNNING
19/05/27 02:25:57 INFO StandaloneSchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.0
19/05/27 02:25:58 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 214.6 KB, free 4.1 GB)
19/05/27 02:25:58 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 20.4 KB, free 4.1 GB)
19/05/27 02:25:58 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on DESKTOP-PCH5L6D:53553 (size: 20.4 KB, free: 4.1 GB)
19/05/27 02:25:58 INFO SparkContext: Created broadcast 0 from textFile at App.scala:24
19/05/27 02:25:58 INFO FileInputFormat: Total input paths to process : 2
19/05/27 02:25:58 INFO SparkContext: Starting job: first at App.scala:26
19/05/27 02:25:58 INFO DAGScheduler: Got job 0 (first at App.scala:26) with 1 output partitions
19/05/27 02:25:58 INFO DAGScheduler: Final stage: ResultStage 0 (first at App.scala:26)
19/05/27 02:25:58 INFO DAGScheduler: Parents of final stage: List()
19/05/27 02:25:58 INFO DAGScheduler: Missing parents: List()
19/05/27 02:25:58 INFO DAGScheduler: Submitting ResultStage 0 (MapPartitionsRDD[2] at map at App.scala:25), which has no missing parents
19/05/27 02:25:58 INFO MemoryStore: Block broadcast_1 stored as values in memory (estimated size 3.6 KB, free 4.1 GB)
19/05/27 02:25:58 INFO MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 2.2 KB, free 4.1 GB)
19/05/27 02:25:58 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on DESKTOP-PCH5L6D:53553 (size: 2.2 KB, free: 4.1 GB)
19/05/27 02:25:58 INFO SparkContext: Created broadcast 1 from broadcast at DAGScheduler.scala:1161
19/05/27 02:25:58 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 0 (MapPartitionsRDD[2] at map at App.scala:25) (first 15 tasks are for partitions Vector(0))
19/05/27 02:25:58 INFO TaskSchedulerImpl: Adding task set 0.0 with 1 tasks
19/05/27 02:25:59 INFO StandaloneAppClient$ClientEndpoint: Executor updated: app-20190527062557-0000/0 is now EXITED (Command exited with code 1)
19/05/27 02:25:59 INFO StandaloneSchedulerBackend: Executor app-20190527062557-0000/0 removed: Command exited with code 1
19/05/27 02:25:59 INFO StandaloneAppClient$ClientEndpoint: Executor added: app-20190527062557-0000/1 on worker-20190527062533-172.18.0.3-44351 (172.18.0.3:44351) with 1 core(s)
19/05/27 02:25:59 INFO StandaloneSchedulerBackend: Granted executor ID app-20190527062557-0000/1 on hostPort 172.18.0.3:44351 with 1 core(s), 1024.0 MB RAM
19/05/27 02:25:59 INFO BlockManagerMaster: Removal of executor 0 requested
19/05/27 02:25:59 INFO CoarseGrainedSchedulerBackend$DriverEndpoint: Asked to remove non-existent executor 0
19/05/27 02:25:59 INFO BlockManagerMasterEndpoint: Trying to remove executor 0 from BlockManagerMaster.
19/05/27 02:25:59 INFO StandaloneAppClient$ClientEndpoint: Executor updated: app-20190527062557-0000/1 is now RUNNING
19/05/27 02:26:01 INFO StandaloneAppClient$ClientEndpoint: Executor updated: app-20190527062557-0000/1 is now EXITED (Command exited with code 1)
19/05/27 02:26:01 INFO StandaloneSchedulerBackend: Executor app-20190527062557-0000/1 removed: Command exited with code 1
19/05/27 02:26:01 INFO BlockManagerMasterEndpoint: Trying to remove executor 1 from BlockManagerMaster.
19/05/27 02:26:01 INFO BlockManagerMaster: Removal of executor 1 requested
19/05/27 02:26:01 INFO CoarseGrainedSchedulerBackend$DriverEndpoint: Asked to remove non-existent executor 1
19/05/27 02:26:01 INFO StandaloneAppClient$ClientEndpoint: Executor added: app-20190527062557-0000/2 on worker-20190527062533-172.18.0.3-44351 (172.18.0.3:44351) with 1 core(s)
19/05/27 02:26:01 INFO StandaloneSchedulerBackend: Granted executor ID app-20190527062557-0000/2 on hostPort 172.18.0.3:44351 with 1 core(s), 1024.0 MB RAM
19/05/27 02:26:01 INFO StandaloneAppClient$ClientEndpoint: Executor updated: app-20190527062557-0000/2 is now RUNNING
19/05/27 02:26:03 INFO StandaloneAppClient$ClientEndpoint: Executor updated: app-20190527062557-0000/2 is now EXITED (Command exited with code 1)

In the worker container I can see that the job fails with the following exception:

Caused by: java.io.IOException: Failed to connect to DESKTOP-PCH5L6D:53511
        at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:245)
        at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:187)
        at org.apache.spark.rpc.netty.NettyRpcEnv.createClient(NettyRpcEnv.scala:198)
        at org.apache.spark.rpc.netty.Outbox$$anon$1.call(Outbox.scala:194)
        at org.apache.spark.rpc.netty.Outbox$$anon$1.call(Outbox.scala:190)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.UnknownHostException: DESKTOP-PCH5L6D
        at java.net.InetAddress.getAllByName0(InetAddress.java:1281)
        at java.net.InetAddress.getAllByName(InetAddress.java:1193)
        at java.net.InetAddress.getAllByName(InetAddress.java:1127)
        at java.net.InetAddress.getByName(InetAddress.java:1077)
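
The driver advertised itself to the master as DESKTOP-PCH5L6D:53511 (see the 'sparkDriver' line in the log above), so the executor inside the worker container tries to resolve the Windows hostname, which the container's resolver knows nothing about. Besides mapping the hostname inside the containers, Spark also lets the driver advertise an explicit address. A minimal sketch, assuming the containers can reach the host at 10.0.75.1 and that port 53500 is free; both values are assumptions for this setup, not defaults:

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder
  .appName("sample")
  .master("spark://localhost:7077")
  // Address that executors use to connect back to the driver
  // (assumed host address on the Docker NAT network).
  .config("spark.driver.host", "10.0.75.1")
  // Fix the driver port instead of letting Spark pick a random one.
  .config("spark.driver.port", "53500")
  // Listen on all host interfaces so the containers can reach the driver.
  .config("spark.driver.bindAddress", "0.0.0.0")
  .getOrCreate()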

1 Answer

0 votes
/ May 27, 2019

Added an extra_hosts parameter to fix the problem, as shown below:

version: "3.7"
services:
  spark-master:
    image: cmishr4/spark:latest
    container_name: spark-master
    hostname: spark-master
    ports:
      - "8080:8080"
      - "7077:7077"
    volumes:
      - ./../share:/share
    privileged: true
    networks:
      - spark-network
    environment:
      - "SPARK_MASTER_PORT=7077"
      - "SPARK_MASTER_WEBUI_PORT=8080"
    command: "sh start-master.sh"
  spark-worker:
    image: cmishr4/spark:latest
    depends_on:
      - spark-master
    ports:
      - 8080
    volumes:
      - ./../share:/share
    privileged: true
    networks:
      - spark-network
    extra_hosts:
      - "DESKTOP-PCH5L6D:10.0.75.1"
    environment:
      - "SPARK_MASTER=spark://spark-master:7077"
      - "SPARK_WORKER_WEBUI_PORT=8080"
    command: "sh start-worker.sh"
networks:
  spark-network:
    driver: bridge
    ipam:
      driver: default
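
With the extra_hosts entry, the worker container resolves DESKTOP-PCH5L6D to 10.0.75.1, the address of the Windows host on Docker's NAT network in this setup, so executors can connect back to the driver's sparkDriver port (53511 in the log above). Since that port is chosen at random on each run, the Windows firewall may also need to allow inbound connections from the Docker subnet.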