Writing data to Cassandra fails with BusyPoolException - PullRequest
0 votes
/ March 23, 2019

I am trying to write a dataframe to Cassandra using the lines of code below. It sometimes writes to the table successfully, but then this error appears unexpectedly:

alertdf
  .write
  .format("org.apache.spark.sql.cassandra")
  .options(Map("keyspace" -> "dummy", "table" -> "dummytable"))
  .mode(SaveMode.Append)
  .save()

I get the error message below and cannot figure out what is wrong:

  ERROR QueryExecutor: Failed to execute: com.datastax.spark.connector.writer.RichBoundStatement@7dba59e2
        com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: **.**.**.**/**.**.**.**:9042 (com.datastax.driver.core.exceptions.BusyPoolException: [**.**.**.**/**.**.**.**] Pool is busy (no available connection and the queue has reached its max size 256)))
            at com.datastax.driver.core.RequestHandler.reportNoMoreHosts(RequestHandler.java:211)
            at com.datastax.driver.core.RequestHandler.access$1000(RequestHandler.java:46)
            at com.datastax.driver.core.RequestHandler$SpeculativeExecution.findNextHostAndQuery(RequestHandler.java:275)
            at com.datastax.driver.core.RequestHandler$SpeculativeExecution$1.onFailure(RequestHandler.java:338)
            at shade.com.datastax.spark.connector.google.common.util.concurrent.Futures$6.run(Futures.java:1310)
            at shade.com.datastax.spark.connector.google.common.util.concurrent.MoreExecutors$DirectExecutor.execute(MoreExecutors.java:457)
            at shade.com.datastax.spark.connector.google.common.util.concurrent.Futures$ImmediateFuture.addListener(Futures.java:106)
            at shade.com.datastax.spark.connector.google.common.util.concurrent.Futures.addCallback(Futures.java:1322)
            at shade.com.datastax.spark.connector.google.common.util.concurrent.Futures.addCallback(Futures.java:1258)
            at com.datastax.driver.core.RequestHandler$SpeculativeExecution.query(RequestHandler.java:297)
            at com.datastax.driver.core.RequestHandler$SpeculativeExecution.findNextHostAndQuery(RequestHandler.java:272)
            at com.datastax.driver.core.RequestHandler.startNewExecution(RequestHandler.java:115)
            at com.datastax.driver.core.RequestHandler.sendRequest(RequestHandler.java:95)
            at com.datastax.driver.core.SessionManager.executeAsync(SessionManager.java:132)
            at sun.reflect.GeneratedMethodAccessor34.invoke(Unknown Source)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
            at java.lang.reflect.Method.invoke(Method.java:498)
            at com.datastax.spark.connector.cql.SessionProxy.invoke(SessionProxy.scala:40)
            at com.sun.proxy.$Proxy14.executeAsync(Unknown Source)
            at sun.reflect.GeneratedMethodAccessor34.invoke(Unknown Source)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
            at java.lang.reflect.Method.invoke(Method.java:498)
            at com.datastax.spark.connector.cql.SessionProxy.invoke(SessionProxy.scala:40)
            at com.sun.proxy.$Proxy15.executeAsync(Unknown Source)
            at com.datastax.spark.connector.writer.QueryExecutor$$anonfun$$lessinit$greater$1.apply(QueryExecutor.scala:11)
            at com.datastax.spark.connector.writer.QueryExecutor$$anonfun$$lessinit$greater$1.apply(QueryExecutor.scala:11)
            at com.datastax.spark.connector.writer.AsyncExecutor.executeAsync(AsyncExecutor.scala:31)
            at com.datastax.spark.connector.writer.TableWriter$$anonfun$writeInternal$1$$anonfun$apply$2.apply(TableWriter.scala:199)
            at com.datastax.spark.connector.writer.TableWriter$$anonfun$writeInternal$1$$anonfun$apply$2.apply(TableWriter.scala:198)
            at scala.collection.Iterator$class.foreach(Iterator.scala:893)
            at com.datastax.spark.connector.writer.GroupingBatchBuilder.foreach(GroupingBatchBuilder.scala:31)
            at com.datastax.spark.connector.writer.TableWriter$$anonfun$writeInternal$1.apply(TableWriter.scala:198)
            at com.datastax.spark.connector.writer.TableWriter$$anonfun$writeInternal$1.apply(TableWriter.scala:175)
            at com.datastax.spark.connector.cql.CassandraConnector$$anonfun$withSessionDo$1.apply(CassandraConnector.scala:112)
            at com.datastax.spark.connector.cql.CassandraConnector$$anonfun$withSessionDo$1.apply(CassandraConnector.scala:111)
            at com.datastax.spark.connector.cql.CassandraConnector.closeResourceAfterUse(CassandraConnector.scala:145)
            at com.datastax.spark.connector.cql.CassandraConnector.withSessionDo(CassandraConnector.scala:111)
            at com.datastax.spark.connector.writer.TableWriter.writeInternal(TableWriter.scala:175)
            at com.datastax.spark.connector.writer.TableWriter.insert(TableWriter.scala:162)
            at com.datastax.spark.connector.writer.TableWriter.write(TableWriter.scala:149)
            at com.datastax.spark.connector.RDDFunctions$$anonfun$saveToCassandra$1.apply(RDDFunctions.scala:36)
            at com.datastax.spark.connector.RDDFunctions$$anonfun$saveToCassandra$1.apply(RDDFunctions.scala:36)
            at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70)
            at org.apache.spark.scheduler.Task.run(Task.scala:86)
            at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
            at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
            at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
            at java.lang.Thread.run(Thread.java:748)

Can anyone help me with this issue?

1 Answer

0 votes
/ March 24, 2019

It looks like your Cassandra nodes are overloaded and are not processing your requests in time: the driver's per-host connection pool fills up, its request queue hits its limit of 256, and further requests fail with BusyPoolException. I recommend tuning the write-related configuration parameters, such as spark.cassandra.output.concurrent.writes, spark.cassandra.output.throughput_mb_per_sec, and others, but I would start with those first two.
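As a sketch, these Spark Cassandra Connector settings can be supplied through the Spark configuration when the session is built (the values below are illustrative starting points to tune for your cluster, not recommendations):

```scala
import org.apache.spark.sql.SparkSession

// Configuration fragment: throttle writes so the Cassandra connection
// pool is not exhausted. Option names are the connector's write settings;
// the values are examples only.
val spark = SparkSession.builder()
  .appName("cassandra-writer")
  // fewer batches in flight per task (lower = less pool pressure)
  .config("spark.cassandra.output.concurrent.writes", "5")
  // cap write throughput per executor core, in MB/s
  .config("spark.cassandra.output.throughput_mb_per_sec", "1")
  .getOrCreate()
```

The same write code as in the question can then run unchanged; the connector picks these settings up from the Spark configuration. If the error persists even at low concurrency, that usually points at the Cassandra cluster itself being under-provisioned for the write load.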

...