Logstash & Kafka - java.io.EOFException: null. Node -1 disconnected
asked 24 January 2019

I am trying to set up an ELK stack with one of my Logstash filters on my local machine.

I have an input file that is pushed onto a Kafka queue, which is then parsed by my filter and output to Elasticsearch. When I run my test.sh, which runs the Logstash filter against the input file, it produces errors in logstash --debug.

I am not sure what this error could mean; all my settings use localhost and the default ports. Any guidance would be appreciated, as the error does not tell me much.
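My kafka input is configured roughly like this (the topic name here is a placeholder rather than my exact value; the group id is the one that appears in the debug output):

    input {
      kafka {
        bootstrap_servers => "localhost:2181"
        topics            => ["test-topic"]   # placeholder topic name
        group_id          => "test-retry"
      }
    }

The logstash --debug output: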

[2019-01-23T15:09:27,263][DEBUG][org.apache.kafka.clients.NetworkClient] [Consumer clientId=logstash-0, groupId=test-retry] Initialize connection to node localhost:2181 (id: -1 rack: null) for sending metadata request
[2019-01-23T15:09:27,264][DEBUG][org.apache.kafka.clients.NetworkClient] [Consumer clientId=logstash-0, groupId=test-retry] Initiating connection to node localhost:2181 (id: -1 rack: null)
[2019-01-23T15:09:27,265][DEBUG][org.apache.kafka.common.network.Selector] [Consumer clientId=logstash-0, groupId=test-retry] Created socket with SO_RCVBUF = 342972, SO_SNDBUF = 146988, SO_TIMEOUT = 0 to node -1
[2019-01-23T15:09:27,265][DEBUG][org.apache.kafka.clients.NetworkClient] [Consumer clientId=logstash-0, groupId=test-retry] Completed connection to node -1. Fetching API versions.
[2019-01-23T15:09:27,265][DEBUG][org.apache.kafka.clients.NetworkClient] [Consumer clientId=logstash-0, groupId=test-retry] Initiating API versions fetch from node -1.
[2019-01-23T15:09:27,266][DEBUG][org.apache.kafka.common.network.Selector] [Consumer clientId=logstash-0, groupId=test-retry] Connection with localhost/127.0.0.1 disconnected
java.io.EOFException: null
    at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:96) ~[kafka-clients-2.0.1.jar:?]
    at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:335) ~[kafka-clients-2.0.1.jar:?]
    at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:296) ~[kafka-clients-2.0.1.jar:?]
    at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:562) ~[kafka-clients-2.0.1.jar:?]
    at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:498) [kafka-clients-2.0.1.jar:?]
    at org.apache.kafka.common.network.Selector.poll(Selector.java:427) [kafka-clients-2.0.1.jar:?]
    at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:510) [kafka-clients-2.0.1.jar:?]
    at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:271) [kafka-clients-2.0.1.jar:?]
    at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:242) [kafka-clients-2.0.1.jar:?]
    at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:233) [kafka-clients-2.0.1.jar:?]
    at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.awaitMetadataUpdate(ConsumerNetworkClient.java:161) [kafka-clients-2.0.1.jar:?]
    at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureCoordinatorReady(AbstractCoordinator.java:243) [kafka-clients-2.0.1.jar:?]
    at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:314) [kafka-clients-2.0.1.jar:?]
    at org.apache.kafka.clients.consumer.KafkaConsumer.updateAssignmentMetadataIfNeeded(KafkaConsumer.java:1218) [kafka-clients-2.0.1.jar:?]
    at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1181) [kafka-clients-2.0.1.jar:?]
    at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1115) [kafka-clients-2.0.1.jar:?]
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_172]
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_172]
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_172]
    at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_172]
    at org.jruby.javasupport.JavaMethod.invokeDirectWithExceptionHandling(JavaMethod.java:423) [jruby-complete-9.1.13.0.jar:?]
    at org.jruby.javasupport.JavaMethod.invokeDirect(JavaMethod.java:290) [jruby-complete-9.1.13.0.jar:?]
    at org.jruby.java.invokers.InstanceMethodInvoker.call(InstanceMethodInvoker.java:28) [jruby-complete-9.1.13.0.jar:?]
    at org.jruby.java.invokers.InstanceMethodInvoker.call(InstanceMethodInvoker.java:90) [jruby-complete-9.1.13.0.jar:?]
    at org.jruby.ir.targets.InvokeSite.invoke(InvokeSite.java:145) [jruby-complete-9.1.13.0.jar:?]
    at usr.local.Cellar.logstash.$6_dot_5_dot_4.libexec.vendor.bundle.jruby.$2_dot_3_dot_0.gems.logstash_minus_input_minus_kafka_minus_8_dot_2_dot_1.lib.logstash.inputs.kafka.RUBY$block$thread_runner$1(/usr/local/Cellar/logstash/6.5.4/libexec/vendor/bundle/jruby/2.3.0/gems/logstash-input-kafka-8.2.1/lib/logstash/inputs/kafka.rb:253) [jruby-complete-9.1.13.0.jar:?]
    at org.jruby.runtime.CompiledIRBlockBody.callDirect(CompiledIRBlockBody.java:145) [jruby-complete-9.1.13.0.jar:?]
    at org.jruby.runtime.IRBlockBody.call(IRBlockBody.java:71) [jruby-complete-9.1.13.0.jar:?]
    at org.jruby.runtime.Block.call(Block.java:124) [jruby-complete-9.1.13.0.jar:?]
    at org.jruby.RubyProc.call(RubyProc.java:289) [jruby-complete-9.1.13.0.jar:?]
    at org.jruby.RubyProc.call(RubyProc.java:246) [jruby-complete-9.1.13.0.jar:?]
    at org.jruby.internal.runtime.RubyRunnable.run(RubyRunnable.java:104) [jruby-complete-9.1.13.0.jar:?]
    at java.lang.Thread.run(Thread.java:748) [?:1.8.0_172]
[2019-01-23T15:09:27,267][DEBUG][org.apache.kafka.clients.NetworkClient] [Consumer clientId=logstash-0, groupId=test-retry] Node -1 disconnected.
[2019-01-23T15:09:27,267][DEBUG][org.apache.kafka.clients.NetworkClient] [Consumer clientId=logstash-0, groupId=test-retry] Give up sending metadata request since no node is available
[2019-01-23T15:09:27,318][DEBUG][org.apache.kafka.clients.NetworkClient] [Consumer clientId=logstash-0, groupId=test-retry] Give up sending metadata request since no node is available
[2019-01-23T15:09:27,373][DEBUG][org.apache.kafka.clients.NetworkClient] [Consumer clientId=logstash-0, groupId=test-retry] Give up sending metadata request since no node is available
[2019-01-23T15:09:27,424][DEBUG][org.apache.kafka.clients.NetworkClient] [Consumer clientId=logstash-0, groupId=test-retry] Give up sending metadata request since no node is available
[2019-01-23T15:09:27,474][DEBUG][org.apache.kafka.clients.NetworkClient] [Consumer clientId=logstash-0, groupId=test-retry] Give up sending metadata request since no node is available
[2019-01-23T15:09:27,529][DEBUG][org.apache.kafka.clients.NetworkClient] [Consumer clientId=logstash-0, groupId=test-retry] Give up sending metadata request since no node is available
[2019-01-23T15:09:27,579][DEBUG][org.apache.kafka.clients.NetworkClient] [Consumer clientId=logstash-0, groupId=test-retry] Give up sending metadata request since no node is available
[2019-01-23T15:09:27,633][DEBUG][org.apache.kafka.clients.NetworkClient] [Consumer clientId=logstash-0, groupId=test-retry] Give up sending metadata request since no node is available
[2019-01-23T15:09:27,700][DEBUG][org.apache.kafka.clients.NetworkClient] [Consumer clientId=logstash-0, groupId=test-retry] Give up sending metadata request since no node is available
[2019-01-23T15:09:27,753][DEBUG][org.apache.kafka.clients.NetworkClient] [Consumer clientId=logstash-0, groupId=test-retry] Give up sending metadata request since no node is available
[2019-01-23T15:09:27,806][DEBUG][org.apache.kafka.clients.NetworkClient] [Consumer clientId=logstash-0, groupId=test-retry] Give up sending metadata request since no node is available
[2019-01-23T15:09:27,858][DEBUG][org.apache.kafka.clients.NetworkClient] [Consumer clientId=logstash-0, groupId=test-retry] Give up sending metadata request since no node is available
[2019-01-23T15:09:27,913][DEBUG][org.apache.kafka.clients.NetworkClient] [Consumer clientId=logstash-0, groupId=test-retry] Give up sending metadata request since no node is available
[2019-01-23T15:09:27,966][DEBUG][org.apache.kafka.clients.NetworkClient] [Consumer clientId=logstash-0, groupId=test-retry] Give up sending metadata request since no node is available
[2019-01-23T15:09:28,020][DEBUG][org.apache.kafka.clients.NetworkClient] [Consumer clientId=logstash-0, groupId=test-retry] Give up sending metadata request since no node is available
[2019-01-23T15:09:28,073][DEBUG][org.apache.kafka.clients.NetworkClient] [Consumer clientId=logstash-0, groupId=test-retry] Give up sending metadata request since no node is available
[2019-01-23T15:09:28,128][DEBUG][org.apache.kafka.clients.NetworkClient] [Consumer clientId=logstash-0, groupId=test-retry] Give up sending metadata request since no node is available
[2019-01-23T15:09:28,183][DEBUG][org.apache.kafka.clients.NetworkClient] [Consumer clientId=logstash-0, groupId=test-retry] Give up sending metadata request since no node is available
[2019-01-23T15:09:28,234][DEBUG][org.apache.kafka.clients.NetworkClient] [Consumer clientId=logstash-0, groupId=test-retry] Give up sending metadata request since no node is available
[2019-01-23T15:09:28,288][DEBUG][org.apache.kafka.clients.NetworkClient] [Consumer clientId=logstash-0, groupId=test-retry] Give up sending metadata request since no node is available
[2019-01-23T15:09:28,340][DEBUG][org.apache.kafka.clients.NetworkClient] [Consumer clientId=logstash-0, groupId=test-retry] Initialize connection to node localhost:2181 (id: -1 rack: null) for sending metadata request

1 Answer

answered 24 January 2019

You are pointing Logstash at ZooKeeper, not at Kafka:

    Initialize connection to node localhost:2181 (id: -1 rack: null)

Port 2181 is ZooKeeper's client port, so the consumer's first Kafka-protocol request is answered with an immediate disconnect, which surfaces as the EOFException and "Node -1 disconnected". Make sure the bootstrap server is the broker address, localhost:9092 in your case.
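A minimal sketch of the corrected input section, keeping the same placeholder topic name as above:

    input {
      kafka {
        bootstrap_servers => "localhost:9092"   # Kafka broker port, not ZooKeeper's 2181
        topics            => ["test-topic"]     # placeholder topic name
        group_id          => "test-retry"
      }
    }

With the Kafka client used by this plugin version, the consumer talks to the broker directly and does not need a ZooKeeper address at all.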

...