Hive on Spark works fine in yarn-client mode, but when I change the deploy mode to yarn-cluster in hive-site.xml, I get the error below.

Environment versions:
hadoop 2.6.1
spark 2.0.0-without-hive
hive-2.3.4

1. Using the Hive CLI:
hive> use ids;
OK
Time taken: 4.911 seconds
hive> select count(*) as r1 from admonitor_detail where dt = 20190626;
Query ID = dmp_20190628092928_5f90d598-3753-4a28-b2cc-1605fa486fcb
Total jobs = 1
Launching Job 1 out of 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapreduce.job.reduces=<number>
Failed to execute spark task, with exception 'org.apache.hadoop.hive.ql.metadata.HiveException(Failed to create spark client.)'
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.spark.SparkTask. Failed to create spark client.
hive> quit;
[dmp@nb-AudiX-mysql01 ~]$ /home/dmp/hadoop-2.6.1/bin/yarn logs -applicationId application_1559569337761_579077
19/06/28 09:36:00 INFO client.RMProxy: Connecting to ResourceManager at mainRM.master.adh/10.10.10.235:8032
19/06/28 09:36:00 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2. Error in the Hive log:
2019-06-28T11:03:39,832 INFO [stderr-redir-1] client.SparkClientImpl: 19/06/28 11:03:39 INFO yarn.Client: Submitting application application_1559569337761_580548 to ResourceManager
2019-06-28T11:03:39,889 INFO [stderr-redir-1] client.SparkClientImpl: 19/06/28 11:03:39 INFO impl.YarnClientImpl: Submitted application application_1559569337761_580548
2019-06-28T11:03:39,892 INFO [stderr-redir-1] client.SparkClientImpl: 19/06/28 11:03:39 INFO yarn.Client: Application report for application_1559569337761_580548 (state: ACCEPTED)
2019-06-28T11:03:39,897 INFO [stderr-redir-1] client.SparkClientImpl: 19/06/28 11:03:39 INFO yarn.Client:
2019-06-28T11:03:39,897 INFO [stderr-redir-1] client.SparkClientImpl: client token: N/A
2019-06-28T11:03:39,897 INFO [stderr-redir-1] client.SparkClientImpl: diagnostics: N/A
2019-06-28T11:03:39,897 INFO [stderr-redir-1] client.SparkClientImpl: ApplicationMaster host: N/A
2019-06-28T11:03:39,897 INFO [stderr-redir-1] client.SparkClientImpl: ApplicationMaster RPC port: -1
2019-06-28T11:03:39,897 INFO [stderr-redir-1] client.SparkClientImpl: queue: root.dmp
2019-06-28T11:03:39,898 INFO [stderr-redir-1] client.SparkClientImpl: start time: 1561691019479
2019-06-28T11:03:39,898 INFO [stderr-redir-1] client.SparkClientImpl: final status: UNDEFINED
2019-06-28T11:03:39,898 INFO [stderr-redir-1] client.SparkClientImpl: tracking URL: http://mainRM.master.adh:8088/proxy/application_1559569337761_580548/
2019-06-28T11:03:39,898 INFO [stderr-redir-1] client.SparkClientImpl: user: dmp_ids
2019-06-28T11:03:39,902 INFO [stderr-redir-1] client.SparkClientImpl: 19/06/28 11:03:39 INFO util.ShutdownHookManager: Shutdown hook called
2019-06-28T11:03:39,904 INFO [stderr-redir-1] client.SparkClientImpl: 19/06/28 11:03:39 INFO util.ShutdownHookManager: Deleting directory /tmp/spark-cf0ed855-84f1-44ea-90f6-836901c6a77b
2019-06-28T11:04:26,053 ERROR [c1c69372-c25e-4d89-a7b0-0abbe8db116c main] client.SparkClientImpl: Timed out waiting for client to connect.
Possible reasons include network issues, errors in remote driver or the cluster has no available resources, etc.
Please check YARN or Spark driver's logs for further information.
java.util.concurrent.ExecutionException: java.util.concurrent.TimeoutException: Timed out waiting for client connection.
at io.netty.util.concurrent.AbstractFuture.get(AbstractFuture.java:41) ~[netty-all-4.0.52.Final.jar:4.0.52.Final]
at org.apache.hive.spark.client.SparkClientImpl.<init>(SparkClientImpl.java:109) ~[hive-exec-2.3.4.jar:2.3.4]
at org.apache.hive.spark.client.SparkClientFactory.createClient(SparkClientFactory.java:80) ~[hive-exec-2.3.4.jar:2.3.4]
at org.apache.hadoop.hive.ql.exec.spark.RemoteHiveSparkClient.createRemoteClient(RemoteHiveSparkClient.java:101) ~[hive-exec-2.3.4.jar:2.3.4]
at org.apache.hadoop.hive.ql.exec.spark.RemoteHiveSparkClient.<init>(RemoteHiveSparkClient.java:97) ~[hive-exec-2.3.4.jar:2.3.4]
at org.apache.hadoop.hive.ql.exec.spark.HiveSparkClientFactory.createHiveSparkClient(HiveSparkClientFactory.java:73) ~[hive-exec-2.3.4.jar:2.3.4]
at org.apache.hadoop.hive.ql.exec.spark.session.SparkSessionImpl.open(SparkSessionImpl.java:62) ~[hive-exec-2.3.4.jar:2.3.4]
at org.apache.hadoop.hive.ql.exec.spark.session.SparkSessionManagerImpl.getSession(SparkSessionManagerImpl.java:115) ~[hive-exec-2.3.4.jar:2.3.4]
at org.apache.hadoop.hive.ql.exec.spark.SparkUtilities.getSparkSession(SparkUtilities.java:126) ~[hive-exec-2.3.4.jar:2.3.4]
at org.apache.hadoop.hive.ql.exec.spark.SparkTask.execute(SparkTask.java:103) ~[hive-exec-2.3.4.jar:2.3.4]
at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:199) ~[hive-exec-2.3.4.jar:2.3.4]
at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:100) ~[hive-exec-2.3.4.jar:2.3.4]
at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2183) ~[hive-exec-2.3.4.jar:2.3.4]
at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1839) ~[hive-exec-2.3.4.jar:2.3.4]
at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1526) ~[hive-exec-2.3.4.jar:2.3.4]
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1237) ~[hive-exec-2.3.4.jar:2.3.4]
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1227) ~[hive-exec-2.3.4.jar:2.3.4]
at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:233) ~[hive-cli-2.3.4.jar:2.3.4]
at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:184) ~[hive-cli-2.3.4.jar:2.3.4]
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:403) ~[hive-cli-2.3.4.jar:2.3.4]
at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:821) ~[hive-cli-2.3.4.jar:2.3.4]
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:759) ~[hive-cli-2.3.4.jar:2.3.4]
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:686) ~[hive-cli-2.3.4.jar:2.3.4]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_191]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_191]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_191]
at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_191]
at org.apache.hadoop.util.RunJar.run(RunJar.java:221) ~[hadoop-common-2.6.0-cdh5.4.7.jar:?]
at org.apache.hadoop.util.RunJar.main(RunJar.java:136) ~[hadoop-common-2.6.0-cdh5.4.7.jar:?]
Caused by: java.util.concurrent.TimeoutException: Timed out waiting for client connection.
3. Error in the Spark log:
19/06/28 11:03:50 INFO yarn.ApplicationMaster: ApplicationAttemptId: appattempt_1559569337761_580548_000001
19/06/28 11:03:50 INFO spark.SecurityManager: Changing view acls to: yarn,dmp_ids
19/06/28 11:03:50 INFO spark.SecurityManager: Changing modify acls to: yarn,dmp_ids
19/06/28 11:03:50 INFO spark.SecurityManager: Changing view acls groups to:
19/06/28 11:03:50 INFO spark.SecurityManager: Changing modify acls groups to:
19/06/28 11:03:50 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(yarn, dmp_ids); groups with view permissions: Set(); users with modify permissions: Set(yarn, dmp_ids); groups with modify permissions: Set()
19/06/28 11:03:50 INFO yarn.ApplicationMaster: Starting the user application in a separate Thread
19/06/28 11:03:50 INFO yarn.ApplicationMaster: Waiting for spark context initialization
19/06/28 11:03:50 INFO yarn.ApplicationMaster: Waiting for spark context initialization ...
19/06/28 11:03:50 INFO client.RemoteDriver: Connecting to: localhost:59856
19/06/28 11:03:50 INFO conf.HiveConf: Found configuration file null
19/06/28 11:03:50 ERROR yarn.ApplicationMaster: User class threw exception: java.util.concurrent.ExecutionException: java.net.ConnectException: Connection refused: localhost/127.0.0.1:59856
java.util.concurrent.ExecutionException: java.net.ConnectException: Connection refused: localhost/127.0.0.1:59856
at io.netty.util.concurrent.AbstractFuture.get(AbstractFuture.java:37)
at org.apache.hive.spark.client.RemoteDriver.<init>(RemoteDriver.java:145)
at org.apache.hive.spark.client.RemoteDriver.main(RemoteDriver.java:516)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:627)
Caused by: java.net.ConnectException: Connection refused: localhost/127.0.0.1:59856
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:224)
at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:289)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:528)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
at java.lang.Thread.run(Thread.java:745)
19/06/28 11:03:50 INFO yarn.ApplicationMaster: Final app status: FAILED, exitCode: 15, (reason: User class threw exception: java.util.concurrent.ExecutionException: java.net.ConnectException: Connection refused: localhost/127.0.0.1:59856)
19/06/28 11:04:00 ERROR yarn.ApplicationMaster: SparkContext did not initialize after waiting for 100000 ms. Please check earlier log output for errors. Failing the application.
19/06/28 11:04:00 INFO yarn.ApplicationMaster: Unregistering ApplicationMaster with FAILED (diag message: User class threw exception: java.util.concurrent.ExecutionException: java.net.ConnectException: Connection refused: localhost/127.0.0.1:59856)
19/06/28 11:04:00 INFO yarn.ApplicationMaster: Deleting staging directory hdfs://adhnamenode/user/dmp_ids/.sparkStaging/application_1559569337761_580548
19/06/28 11:04:03 INFO util.ShutdownHookManager: Shutdown hook called
Why does client.RemoteDriver connect to localhost:59856?
Why does the driver connect to localhost instead of one of my cluster nodes?
Please help.
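One plausible cause (an assumption from the logs, not a confirmed diagnosis): in Hive on Spark, the RemoteDriver running on a YARN node must connect back to the RPC server inside the Hive client process. The callback address comes from `hive.spark.client.rpc.server.address`; when it is unset, Hive falls back to the local hostname, which resolves to `localhost/127.0.0.1` if `/etc/hosts` maps the machine's hostname to `127.0.0.1`. A sketch of a possible hive-site.xml fix, assuming the Hive CLI host is `nb-AudiX-mysql01` (taken from the shell prompt above; adapt the value to your actual host):

```xml
<!-- Sketch only: the property name exists in Hive 2.x, but the host value
     below is an assumption based on the shell prompt in this question. -->
<property>
  <name>hive.spark.client.rpc.server.address</name>
  <!-- Must be a hostname or IP of the Hive client host that is resolvable
       and reachable from every YARN NodeManager; never 127.0.0.1. -->
  <value>nb-AudiX-mysql01</value>
</property>
```

It is also worth checking `/etc/hosts` on the Hive client machine: if the machine's own hostname appears on the `127.0.0.1` line, the default address Hive advertises to the driver will be the loopback interface, which matches the `Connection refused: localhost/127.0.0.1:59856` seen in the Spark log.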