Logstash gives an Elasticsearch Unreachable error
0 votes
/ 02 November 2018

I'm trying to connect Logstash to Elasticsearch, but I can't get it to work. My Elasticsearch runs fine on localhost:9200 and I can curl it.

My logstash.config looks like this:

http.host: "0.0.0.0"
xpack.monitoring.elasticsearch.url: http://localhost:9200

My logstash-sample.conf:

# Sample Logstash configuration for creating a simple
# Beats -> Logstash -> Elasticsearch pipeline.

input {
  beats {
    port => 5044
  }
}

output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    #user => "elastic"
    #password => "changeme"
  }
}

I use the following command to start the container:

docker run --name logstash -p 5044:5044 -e "discovery.type=single-node" docker.elastic.co/logstash/logstash:6.4.2

However, I get the following error log:

Sending Logstash logs to /usr/share/logstash/logs which is now configured via log4j2.properties
[2018-11-02T15:30:44,622][WARN ][logstash.monitoringextension.pipelineregisterhook] xpack.monitoring.enabled has not been defined, but found elasticsearch configuration. Please explicitly set `xpack.monitoring.enabled: true` in logstash.yml
[2018-11-02T15:30:44,814][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"6.4.2"}
[2018-11-02T15:30:48,350][WARN ][logstash.outputs.elasticsearch] You are using a deprecated config setting "document_type" set in elasticsearch. Deprecated settings will continue to work, but are scheduled for removal from logstash in the future. Document types are being deprecated in Elasticsearch 6.0, and removed entirely in 7.0. You should avoid this feature If you have any questions about this, please visit the #logstash channel on freenode irc. {:name=>"document_type", :plugin=><LogStash::Outputs::ElasticSearch bulk_path=>"/_xpack/monitoring/_bulk?system_id=logstash&system_api_version=2&interval=1s", hosts=>[http://localhost:9200], sniffing=>false, manage_template=>false, id=>"2196aa69258f6adaaf9506d8988cc76ab153e658434074dcf2e424e0aca0d381", document_type=>"%{[@metadata][document_type]}", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>"plain_1afa70a3-eaef-4cf5-9762-0d759d720d1c", enable_metric=>true, charset=>"UTF-8">, workers=>1, template_name=>"logstash", template_overwrite=>false, doc_as_upsert=>false, script_type=>"inline", script_lang=>"painless", script_var_name=>"event", scripted_upsert=>false, retry_initial_interval=>2, retry_max_interval=>64, retry_on_conflict=>1, action=>"index", ssl_certificate_verification=>true, sniffing_delay=>5, timeout=>60, pool_max=>1000, pool_max_per_route=>100, resurrect_delay=>5, validate_after_inactivity=>10000, http_compression=>false>}
[2018-11-02T15:30:48,710][INFO ][logstash.pipeline        ] Starting pipeline {:pipeline_id=>".monitoring-logstash", "pipeline.workers"=>1, "pipeline.batch.size"=>2, "pipeline.batch.delay"=>50}
[2018-11-02T15:30:50,318][INFO ][logstash.pipeline        ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[2018-11-02T15:30:51,394][INFO ][logstash.inputs.beats    ] Beats inputs: Starting input listener {:address=>"0.0.0.0:5044"}
[2018-11-02T15:30:51,462][INFO ][logstash.pipeline        ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x688d6788 run>"}
[2018-11-02T15:30:51,698][INFO ][org.logstash.beats.Server] Starting server on port: 5044
[2018-11-02T15:30:51,972][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
[2018-11-02T15:30:51,982][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://localhost:9200/, :path=>"/"}
[2018-11-02T15:30:52,194][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://localhost:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://localhost:9200/][Manticore::SocketException] Connection refused (Connection refused)"}
[2018-11-02T15:30:52,218][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["http://localhost:9200"]}
[2018-11-02T15:30:52,322][INFO ][logstash.licensechecker.licensereader] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
[2018-11-02T15:30:52,322][INFO ][logstash.licensechecker.licensereader] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://localhost:9200/, :path=>"/"}
[2018-11-02T15:30:52,332][WARN ][logstash.licensechecker.licensereader] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://localhost:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://localhost:9200/][Manticore::SocketException] Connection refused (Connection refused)"}
[2018-11-02T15:30:52,378][WARN ][logstash.licensechecker.licensereader] Marking url as dead. Last error: [LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError] Elasticsearch Unreachable: [http://localhost:9200/][Manticore::SocketException] Connection refused (Connection refused) {:url=>http://localhost:9200/, :error_message=>"Elasticsearch Unreachable: [http://localhost:9200/][Manticore::SocketException] Connection refused (Connection refused)", :error_class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError"}
[2018-11-02T15:30:52,509][ERROR][logstash.licensechecker.licensereader] Unable to retrieve license information from license server {:message=>"Elasticsearch Unreachable: [http://localhost:9200/][Manticore::SocketException] Connection refused (Connection refused)", :class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError", :backtrace=>["/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-output-elasticsearch-9.2.1-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:293:in `perform_request_to_url'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-output-elasticsearch-9.2.1-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:278:in `block in perform_request'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-output-elasticsearch-9.2.1-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:373:in `with_connection'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-output-elasticsearch-9.2.1-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:277:in `perform_request'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-output-elasticsearch-9.2.1-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:285:in `block in get'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-output-elasticsearch-9.2.1-java/lib/logstash/outputs/elasticsearch/http_client.rb:162:in `get'", "/usr/share/logstash/x-pack/lib/license_checker/license_reader.rb:28:in `fetch_xpack_info'", "/usr/share/logstash/x-pack/lib/license_checker/license_manager.rb:40:in `fetch_xpack_info'", "/usr/share/logstash/x-pack/lib/license_checker/license_manager.rb:27:in `initialize'", "/usr/share/logstash/x-pack/lib/license_checker/licensed.rb:37:in `setup_license_checker'", "/usr/share/logstash/x-pack/lib/monitoring/inputs/metrics.rb:56:in `register'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:242:in `register_plugin'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:253:in `block in register_plugins'", "org/jruby/RubyArray.java:1734:in `each'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:253:in `register_plugins'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:396:in `start_inputs'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:294:in `start_workers'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:200:in `run'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:160:in `block in start'"]}
[2018-11-02T15:30:52,776][INFO ][logstash.pipeline        ] Pipeline started successfully {:pipeline_id=>".monitoring-logstash", :thread=>"#<Thread:0x4bac2ad3 sleep>"}
[2018-11-02T15:30:52,842][INFO ][logstash.agent           ] Pipelines running {:count=>2, :running_pipelines=>[:main, :".monitoring-logstash"], :non_running_pipelines=>[]}
[2018-11-02T15:30:52,865][ERROR][logstash.inputs.metrics  ] X-Pack is installed on Logstash but not on Elasticsearch. Please install X-Pack on Elasticsearch to use the monitoring feature. Other features may be available.
[2018-11-02T15:30:53,119][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
[2018-11-02T15:30:57,213][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://localhost:9200/, :path=>"/"}
[2018-11-02T15:30:57,222][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://localhost:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://localhost:9200/][Manticore::SocketException] Connection refused (Connection refused)"}
[2018-11-02T15:30:57,347][INFO ][logstash.licensechecker.licensereader] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://localhost:9200/, :path=>"/"}

And it just keeps repeating. I would really appreciate any help. Thanks.

Answers [ 2 ]

0 votes
/ 02 November 2018

As @Val pointed out in the comments, localhost INSIDE the container refers to the container itself, not to your HOST machine.
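
For example, here is a minimal sketch of pointing the pipeline output at the host instead of at the container itself. It assumes Docker Desktop on Mac/Windows, where the special name host.docker.internal resolves to the host; on Linux, substitute the host's LAN IP address instead:

output {
  elasticsearch {
    # host.docker.internal (or the host's LAN IP) is reachable from inside
    # the container, unlike localhost
    hosts => ["http://host.docker.internal:9200"]
  }
}

The same hostname would also replace localhost in xpack.monitoring.elasticsearch.url in logstash.yml.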

If Docker Compose is an option for you, you can try the 'sebp/elk' Docker image, which bundles Elasticsearch, Logstash and Kibana, using this sample Compose file.

docker-compose.yml:

version: "2.4"
services:
  elk:
    image: sebp/elk
    ports:
      - "5601:5601"
      - "9200:9200"
      - "5044:5044"

  1. Install Docker Compose
  2. Save the sample docker-compose.yml in a folder
  3. In the same folder, run: $ docker-compose up

But if you want to change the configuration, you should either:

  1. Build your own Dockerfile using this image as a base and overwrite the conf file you need, or
  2. Use Docker Compose volumes to override individual files by editing docker-compose.yml, instead of building a new Docker image with a Dockerfile (see the sketch below)
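
A minimal sketch of option 2, assuming the sebp/elk image keeps its Logstash pipeline files under /etc/logstash/conf.d (check the image documentation for the exact path); the bind mount overrides a single pipeline file without building a new image, and the local filename 30-output.conf here is only an example:

version: "2.4"
services:
  elk:
    image: sebp/elk
    ports:
      - "5601:5601"
      - "9200:9200"
      - "5044:5044"
    volumes:
      # replaces the stock output file with your local copy
      - ./30-output.conf:/etc/logstash/conf.d/30-output.conf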

Alternatives

  1. Use your machine's private LAN IP address in the container's configuration instead of localhost, so the container can reach it
  2. Run both systems in containers, attach them to the same network in `docker run`, set hostnames, and use those hostnames in the configuration
  3. Use Docker Compose with one service per system (Elasticsearch and Logstash) and use the service name in the configuration, because Docker Compose uses the service name as the hostname by default (see the sketch after this list)
  4. Use the original Docker Compose suggestion with a single service based on the sebp/elk Docker image, so everything runs in one container and you can keep localhost in the conf
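
For instance, a minimal sketch of alternative 3, assuming the official 6.4.2 images with default settings; the service name elasticsearch then resolves from inside the logstash container:

version: "2.4"
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.4.2
    environment:
      # single-node belongs to Elasticsearch, not Logstash
      - discovery.type=single-node
    ports:
      - "9200:9200"
  logstash:
    image: docker.elastic.co/logstash/logstash:6.4.2
    ports:
      - "5044:5044"
    depends_on:
      - elasticsearch

With this layout, the pipeline output would use hosts => ["http://elasticsearch:9200"] and logstash.yml would use xpack.monitoring.elasticsearch.url: http://elasticsearch:9200.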

Just keep in mind that each alternative may require a different approach to setting up the configuration.

0 votes
/ 02 November 2018

I think the key to the problem is this error message:

[2018-11-02T15:30:52,865][ERROR][logstash.inputs.metrics  ] X-Pack is installed on Logstash but not on Elasticsearch. Please install X-Pack on Elasticsearch to use the monitoring feature. Other features may be available.

It looks like you installed X-Pack on Logstash but not on your Elasticsearch nodes. Try installing it there and then start Logstash again.
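
As a quick check (a suggestion on top of the original answer), once Elasticsearch is reachable you can ask it directly whether X-Pack is available:

# Returns the license and feature list when X-Pack is installed;
# a build without X-Pack responds with an error for this endpoint.
curl http://localhost:9200/_xpack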

...