I have 2 k8s clusters hosted on separate hosts/VMs. The first cluster runs the application microservices. The second cluster hosts Elasticsearch and Kibana.
The microservices are configured to send their logs to the Elasticsearch instance hosted in the second cluster, via the flag below passed to the application's helm install command: --set global.elasticsearch.url="http://example.com:30001"
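For context, this is roughly how that flag is passed (the chart and release names below are placeholders, not the real ones):

helm install --name my-app my-repo/my-app \
  --set global.elasticsearch.url="http://example.com:30001"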
In the second k8s cluster I installed the elastic-stack chart using the following helm command:
helm install --name elk stable/elastic-stack -f temp.yaml
temp.yaml
elasticsearch:
  enabled: true
  client:
    serviceType: NodePort
    httpNodePort: 30001

kibana:
  enabled: true
  resources:
    requests:
      cpu: "100m"
      memory: "512M"
  service:
    type: NodePort
    port: 5601
    targetPort: 5601
    protocol: TCP
    nodePort: 30002
  env:
    ELASTICSEARCH_HOSTS: http://{{ .Release.Name }}-elasticsearch-client:9200
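To double-check which values the release actually picked up (and to catch indentation mistakes in temp.yaml), I believe the user-supplied values can be printed with:

helm get values elk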
kubectl get service
NAME                          TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
elk-elasticsearch-client      NodePort    10.233.55.130   <none>        9200:30001/TCP   23s
elk-elasticsearch-discovery   ClusterIP   None            <none>        9300/TCP         23s
elk-kibana                    NodePort    10.233.53.171   <none>        443:30002/TCP    23s
kubernetes                    ClusterIP   10.233.0.1      <none>        443/TCP          16h
The commands below show that the Kibana container can reach / talk to the Elasticsearch container:
kubectl exec -it elk-kibana-79698f574f-kkhvb /bin/bash
curl elk-elasticsearch-client:9200
{
  "name" : "elk-elasticsearch-client-97d8dd99f-cl9x4",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "M04WseoxQty2lQ7qJmmlOw",
  "version" : {
    "number" : "6.8.2",
    "build_flavor" : "oss",
    "build_type" : "docker",
    "build_hash" : "b506955",
    "build_date" : "2019-07-24T15:24:41.545295Z",
    "build_snapshot" : false,
    "lucene_version" : "7.7.0",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}
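The root endpoint only proves connectivity, so I also assume the _cat/indices API can be used from the same pod to check whether any log indices exist at all (pod name as above):

kubectl exec -it elk-kibana-79698f574f-kkhvb -- curl -s 'elk-elasticsearch-client:9200/_cat/indices?v'

If logs from cluster 1 were arriving, index names (for example logstash-* or filebeat-*, depending on the log shipper) should be listed there.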
I go to http://example.com:30002, where I expect to see the Elasticsearch section in Kibana with the discovered logs or indices coming from cluster 1 (the one running the application microservices), but I don't see them. Am I missing something? Does Kibana actually see Elasticsearch?
(A screenshot of what Kibana looks like was attached here.)
This is what I see in the logs of the following containers:
Kibana container log:
kubectl logs -f elk-kibana-79698f574f-kkhvb
{"type":"log","@timestamp":"2020-04-12T05:25:17Z","tags":["warning"],"pid":1,"kibanaVersion":"6.7.0","nodes":[{"version":"6.8.2","http":{"publish_address":"10.233.69.142:9200"},"ip":"10.233.69.142"},{"version":"6.8.2","http":{"publish_address":"10.233.69.119:9200"},"ip":"10.233.69.119"},{"version":"6.8.2","http":{"publish_address":"10.233.69.129:9200"},"ip":"10.233.69.129"},{"version":"6.8.2","http":{"publish_address":"10.233.69.110:9200"},"ip":"10.233.69.110"},{"version":"6.8.2","http":{"publish_address":"10.233.69.111:9200"},"ip":"10.233.69.111"},{"version":"6.8.2","http":{"publish_address":"10.233.69.123:9200"},"ip":"10.233.69.123"},{"version":"6.8.2","http":{"publish_address":"10.233.69.109:9200"},"ip":"10.233.69.109"}],"message":"You're running Kibana 6.7.0 with some different versions of Elasticsearch. Update Kibana or Elasticsearch to the same version to prevent compatibility issues: v6.8.2 @ 10.233.69.142:9200 (10.233.69.142), v6.8.2 @ 10.233.69.119:9200 (10.233.69.119), v6.8.2 @ 10.233.69.129:9200 (10.233.69.129), v6.8.2 @ 10.233.69.110:9200 (10.233.69.110), v6.8.2 @ 10.233.69.111:9200 (10.233.69.111), v6.8.2 @ 10.233.69.123:9200 (10.233.69.123), v6.8.2 @ 10.233.69.109:9200 (10.233.69.109)"}
{"type":"log","@timestamp":"2020-04-12T05:25:20Z","tags":["warning"],"pid":1,"kibanaVersion":"6.7.0","nodes":[{"version":"6.8.2","http":{"publish_address":"10.233.69.129:9200"},"ip":"10.233.69.129"},{"version":"6.8.2","http":{"publish_address":"10.233.69.123:9200"},"ip":"10.233.69.123"},{"version":"6.8.2","http":{"publish_address":"10.233.69.119:9200"},"ip":"10.233.69.119"},{"version":"6.8.2","http":{"publish_address":"10.233.69.109:9200"},"ip":"10.233.69.109"},{"version":"6.8.2","http":{"publish_address":"10.233.69.111:9200"},"ip":"10.233.69.111"},{"version":"6.8.2","http":{"publish_address":"10.233.69.110:9200"},"ip":"10.233.69.110"},{"version":"6.8.2","http":{"publish_address":"10.233.69.142:9200"},"ip":"10.233.69.142"}],"message":"You're running Kibana 6.7.0 with some different versions of Elasticsearch. Update Kibana or Elasticsearch to the same version to prevent compatibility issues: v6.8.2 @ 10.233.69.129:9200 (10.233.69.129), v6.8.2 @ 10.233.69.123:9200 (10.233.69.123), v6.8.2 @ 10.233.69.119:9200 (10.233.69.119), v6.8.2 @ 10.233.69.109:9200 (10.233.69.109), v6.8.2 @ 10.233.69.111:9200 (10.233.69.111), v6.8.2 @ 10.233.69.110:9200 (10.233.69.110), v6.8.2 @ 10.233.69.142:9200 (10.233.69.142)"}
{"type":"log","@timestamp":"2020-04-12T05:25:22Z","tags":["warning"],"pid":1,"kibanaVersion":"6.7.0","nodes":[{"version":"6.8.2","http":{"publish_address":"10.233.69.123:9200"},"ip":"10.233.69.123"},{"version":"6.8.2","http":{"publish_address":"10.233.69.142:9200"},"ip":"10.233.69.142"},{"version":"6.8.2","http":{"publish_address":"10.233.69.111:9200"},"ip":"10.233.69.111"},{"version":"6.8.2","http":{"publish_address":"10.233.69.110:9200"},"ip":"10.233.69.110"},{"version":"6.8.2","http":{"publish_address":"10.233.69.129:9200"},"ip":"10.233.69.129"},{"version":"6.8.2","http":{"publish_address":"10.233.69.119:9200"},"ip":"10.233.69.119"},{"version":"6.8.2","http":{"publish_address":"10.233.69.109:9200"},"ip":"10.233.69.109"}],"message":"You're running Kibana 6.7.0 with some different versions of Elasticsearch. Update Kibana or Elasticsearch to the same version to prevent compatibility issues: v6.8.2 @ 10.233.69.123:9200 (10.233.69.123), v6.8.2 @ 10.233.69.142:9200 (10.233.69.142), v6.8.2 @ 10.233.69.111:9200 (10.233.69.111), v6.8.2 @ 10.233.69.110:9200 (10.233.69.110), v6.8.2 @ 10.233.69.129:9200 (10.233.69.129), v6.8.2 @ 10.233.69.119:9200 (10.233.69.119), v6.8.2 @ 10.233.69.109:9200 (10.233.69.109)"}
{"type":"log","@timestamp":"2020-04-12T05:25:25Z","tags":["warning"],"pid":1,"kibanaVersion":"6.7.0","nodes":[{"version":"6.8.2","http":{"publish_address":"10.233.69.111:9200"},"ip":"10.233.69.111"},{"version":"6.8.2","http":{"publish_address":"10.233.69.110:9200"},"ip":"10.233.69.110"},{"version":"6.8.2","http":{"publish_address":"10.233.69.109:9200"},"ip":"10.233.69.109"},{"version":"6.8.2","http":{"publish_address":"10.233.69.119:9200"},"ip":"10.233.69.119"},{"version":"6.8.2","http":{"publish_address":"10.233.69.142:9200"},"ip":"10.233.69.142"},{"version":"6.8.2","http":{"publish_address":"10.233.69.129:9200"},"ip":"10.233.69.129"},{"version":"6.8.2","http":{"publish_address":"10.233.69.123:9200"},"ip":"10.233.69.123"}],"message":"You're running Kibana 6.7.0 with some different versions of Elasticsearch. Update Kibana or Elasticsearch to the same version to prevent compatibility issues: v6.8.2 @ 10.233.69.111:9200 (10.233.69.111), v6.8.2 @ 10.233.69.110:9200 (10.233.69.110), v6.8.2 @ 10.233.69.109:9200 (10.233.69.109), v6.8.2 @ 10.233.69.119:9200 (10.233.69.119), v6.8.2 @ 10.233.69.142:9200 (10.233.69.142), v6.8.2 @ 10.233.69.129:9200 (10.233.69.129), v6.8.2 @ 10.233.69.123:9200 (10.233.69.123)"}
Elasticsearch client logs:
kubectl logs -f elk-elasticsearch-client-97d8dd99f-4sfgt
[2020-04-12T05:16:40,375][WARN ][o.e.d.z.ZenDiscovery ] [elk-elasticsearch-client-97d8dd99f-4sfgt] not enough master nodes discovered during pinging (found [[Candidate{node={elk-elasticsearch-master-0}{eKU_y64SRUCrtkjqCyYm-A}{_QV8vlJZRpe4AS-dFszgAQ}{10.233.69.110}{10.233.69.110:9300}, clusterStateVersion=-1}]], but needed [2]), pinging again
[2020-04-12T05:16:43,378][WARN ][o.e.d.z.ZenDiscovery ] [elk-elasticsearch-client-97d8dd99f-4sfgt] not enough master nodes discovered during pinging (found [[Candidate{node={elk-elasticsearch-master-0}{eKU_y64SRUCrtkjqCyYm-A}{_QV8vlJZRpe4AS-dFszgAQ}{10.233.69.110}{10.233.69.110:9300}, clusterStateVersion=-1}]], but needed [2]), pinging again
[2020-04-12T05:16:46,382][WARN ][o.e.d.z.ZenDiscovery ] [elk-elasticsearch-client-97d8dd99f-4sfgt] not enough master nodes discovered during pinging (found [[Candidate{node={elk-elasticsearch-master-0}{eKU_y64SRUCrtkjqCyYm-A}{_QV8vlJZRpe4AS-dFszgAQ}{10.233.69.110}{10.233.69.110:9300}, clusterStateVersion=-1}]], but needed [2]), pinging again
[2020-04-12T05:16:47,920][WARN ][r.suppressed ] [elk-elasticsearch-client-97d8dd99f-4sfgt] path: /_cluster/health, params: {}
org.elasticsearch.discovery.MasterNotDiscoveredException: null
at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction$4.onTimeout(TransportMasterNodeAction.java:262) [elasticsearch-6.8.2.jar:6.8.2]
at org.elasticsearch.cluster.ClusterStateObserver$ContextPreservingListener.onTimeout(ClusterStateObserver.java:322) [elasticsearch-6.8.2.jar:6.8.2]
at org.elasticsearch.cluster.ClusterStateObserver$ObserverClusterStateListener.onTimeout(ClusterStateObserver.java:249) [elasticsearch-6.8.2.jar:6.8.2]
at org.elasticsearch.cluster.service.ClusterApplierService$NotifyTimeout.run(ClusterApplierService.java:564) [elasticsearch-6.8.2.jar:6.8.2]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:681) [elasticsearch-6.8.2.jar:6.8.2]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
at java.lang.Thread.run(Thread.java:835) [?:?]
[2020-04-12T05:16:49,384][WARN ][o.e.d.z.ZenDiscovery ] [elk-elasticsearch-client-97d8dd99f-4sfgt] not enough master nodes discovered during pinging (found [[Candidate{node={elk-elasticsearch-master-0}{eKU_y64SRUCrtkjqCyYm-A}{_QV8vlJZRpe4AS-dFszgAQ}{10.233.69.110}{10.233.69.110:9300}, clusterStateVersion=-1}]], but needed [2]), pinging again
[2020-04-12T05:16:52,387][WARN ][o.e.d.z.ZenDiscovery ] [elk-elasticsearch-client-97d8dd99f-4sfgt] not enough master nodes discovered during pinging (found [[Candidate{node={elk-elasticsearch-master-0}{eKU_y64SRUCrtkjqCyYm-A}{_QV8vlJZRpe4AS-dFszgAQ}{10.233.69.110}{10.233.69.110:9300}, clusterStateVersion=-1}]], but needed [2]), pinging again
[2020-04-12T05:16:55,394][WARN ][o.e.d.z.ZenDiscovery ] [elk-elasticsearch-client-97d8dd99f-4sfgt] not enough master nodes discovered during pinging (found [[Candidate{node={elk-elasticsearch-master-0}{eKU_y64SRUCrtkjqCyYm-A}{_QV8vlJZRpe4AS-dFszgAQ}{10.233.69.110}{10.233.69.110:9300}, clusterStateVersion=-1}]], but needed [2]), pinging again
[2020-04-12T05:16:57,818][WARN ][r.suppressed ] [elk-elasticsearch-client-97d8dd99f-4sfgt] path: /_cluster/health, params: {}
org.elasticsearch.discovery.MasterNotDiscoveredException: null
at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction$4.onTimeout(TransportMasterNodeAction.java:262) [elasticsearch-6.8.2.jar:6.8.2]
at org.elasticsearch.cluster.ClusterStateObserver$ContextPreservingListener.onTimeout(ClusterStateObserver.java:322) [elasticsearch-6.8.2.jar:6.8.2]
at org.elasticsearch.cluster.ClusterStateObserver$ObserverClusterStateListener.onTimeout(ClusterStateObserver.java:249) [elasticsearch-6.8.2.jar:6.8.2]
at org.elasticsearch.cluster.service.ClusterApplierService$NotifyTimeout.run(ClusterApplierService.java:564) [elasticsearch-6.8.2.jar:6.8.2]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:681) [elasticsearch-6.8.2.jar:6.8.2]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
at java.lang.Thread.run(Thread.java:835) [?:?]
[2020-04-12T05:17:00,303][INFO ][o.e.c.s.ClusterApplierService] [elk-elasticsearch-client-97d8dd99f-4sfgt] detected_master {elk-elasticsearch-master-0}{eKU_y64SRUCrtkjqCyYm-A}{_QV8vlJZRpe4AS-dFszgAQ}{10.233.69.110}{10.233.69.110:9300}, added {{elk-elasticsearch-master-0}{eKU_y64SRUCrtkjqCyYm-A}{_QV8vlJZRpe4AS-dFszgAQ}{10.233.69.110}{10.233.69.110:9300},{elk-elasticsearch-data-0}{IiuiDnktTcG_lW5rHIy8Ng}{jWLI0IusTEqv1RZBs5XndA}{10.233.69.119}{10.233.69.119:9300},{elk-elasticsearch-master-1}{fj0uI-ibTXeM8IDM6Uh3Tw}{vmWKBJwpQKyPHwhA8_Ur2Q}{10.233.69.129}{10.233.69.129:9300},}, reason: apply cluster state (from master [master {elk-elasticsearch-master-0}{eKU_y64SRUCrtkjqCyYm-A}{_QV8vlJZRpe4AS-dFszgAQ}{10.233.69.110}{10.233.69.110:9300} committed version [1]])
[2020-04-12T05:17:00,488][INFO ][o.e.c.s.ClusterApplierService] [elk-elasticsearch-client-97d8dd99f-4sfgt] added {{elk-elasticsearch-client-97d8dd99f-cl9x4}{-4nfJE9qTX-lczcjH2cbAA}{Xw4bc3ahTj-wEs64a10mbw}{10.233.69.109}{10.233.69.109:9300},}, reason: apply cluster state (from master [master {elk-elasticsearch-master-0}{eKU_y64SRUCrtkjqCyYm-A}{_QV8vlJZRpe4AS-dFszgAQ}{10.233.69.110}{10.233.69.110:9300} committed version [2]])
[2020-04-12T05:17:01,826][INFO ][o.e.c.s.ClusterApplierService] [elk-elasticsearch-client-97d8dd99f-4sfgt] added {{elk-elasticsearch-data-1}{YzzantPxRd23JU3UfED4LA}{gibMh6t2SH-cp3IvZFwrJw}{10.233.69.123}{10.233.69.123:9300},}, reason: apply cluster state (from master [master {elk-elasticsearch-master-0}{eKU_y64SRUCrtkjqCyYm-A}{_QV8vlJZRpe4AS-dFszgAQ}{10.233.69.110}{10.233.69.110:9300} committed version [5]])
[2020-04-12T05:17:33,824][WARN ][o.e.d.r.a.a.i.RestGetMappingAction] [elk-elasticsearch-client-97d8dd99f-4sfgt] [types removal] The parameter include_type_name should be explicitly specified in get mapping requests to prepare for 7.0. In 7.0 include_type_name will default to 'false', which means responses will omit the type name in mapping definitions.
[2020-04-12T05:17:33,828][WARN ][o.e.d.r.a.a.i.RestGetIndexTemplateAction] [elk-elasticsearch-client-97d8dd99f-4sfgt] [types removal] The parameter include_type_name should be explicitly specified in get template requests to prepare for 7.0. In 7.0 include_type_name will default to 'false', which means responses will omit the type name in mapping definitions.
[2020-04-12T05:17:38,309][INFO ][o.e.c.s.ClusterApplierService] [elk-elasticsearch-client-97d8dd99f-4sfgt] added {{elk-elasticsearch-master-2}{Vxdt5B3LQAGjykP6AKiJoQ}{ZBLoOGeJQkem4_vI2rKtPw}{10.233.69.142}{10.233.69.142:9300},}, reason: apply cluster state (from master [master {elk-elasticsearch-master-0}{eKU_y64SRUCrtkjqCyYm-A}{_QV8vlJZRpe4AS-dFszgAQ}{10.233.69.110}{10.233.69.110:9300} committed version [9]])
curl example.com:30001
{
  "name" : "elk-elasticsearch-client-97d8dd99f-4sfgt",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "M04WseoxQty2lQ7qJmmlOw",
  "version" : {
    "number" : "6.8.2",
    "build_flavor" : "oss",
    "build_type" : "docker",
    "build_hash" : "b506955",
    "build_date" : "2019-07-24T15:24:41.545295Z",
    "build_snapshot" : false,
    "lucene_version" : "7.7.0",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}
How can I find more information about the indices in my Elasticsearch cluster?
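For reference, these are the standard cat API checks I assume should list the indices from outside the cluster through the same NodePort:

curl 'http://example.com:30001/_cat/indices?v'   # all indices with doc counts and sizes
curl 'http://example.com:30001/_cat/health?v'    # overall cluster health
curl 'http://example.com:30001/_cat/aliases?v'   # index aliases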