Any idea why this Kafka broker crashed? Logs inside. I don't see anything obvious here
1 vote
/ June 16, 2020

This broker runs in a Docker container, and there are 2 other Dockerized brokers running alongside it via docker-compose. This one crashed, and I can't figure out what happened to it. These are the logs that were emitted around that time:


kafka1                          | [2020-06-16 16:24:40,196] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 3 epoch 1 starting the become-leader transition for partition telegraf-0 (state.change.logger)
kafka1                          | [2020-06-16 16:24:40,207] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions Set(telegraf-0) (kafka.server.ReplicaFetcherManager)
kafka1                          | [2020-06-16 16:24:40,223] INFO [Log partition=telegraf-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
kafka1                          | [2020-06-16 16:24:40,224] INFO [Log partition=telegraf-0, dir=/var/lib/kafka/data] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 4 ms (kafka.log.Log)
kafka1                          | [2020-06-16 16:24:40,226] INFO Created log for partition telegraf-0 in /var/lib/kafka/data/telegraf-0 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> [delete], flush.ms -> 9223372036854775807, segment.bytes -> 1073741824, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.5-IV0, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1048588, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager)
kafka1                          | [2020-06-16 16:24:40,226] INFO [Partition telegraf-0 broker=1] No checkpointed highwatermark is found for partition telegraf-0 (kafka.cluster.Partition)
kafka1                          | [2020-06-16 16:24:40,227] INFO [Partition telegraf-0 broker=1] Log loaded for partition telegraf-0 with initial high watermark 0 (kafka.cluster.Partition)
kafka1                          | [2020-06-16 16:24:40,231] INFO [Partition telegraf-0 broker=1] telegraf-0 starts at leader epoch 0 from offset 0 with high watermark 0. Previous leader epoch was -1. (kafka.cluster.Partition)
kafka1                          | [2020-06-16 16:24:40,253] TRACE [Broker id=1] Stopped fetchers as part of become-leader request from controller 3 epoch 1 with correlation id 3 for partition telegraf-0 (last update controller epoch 1) (state.change.logger)
kafka1                          | [2020-06-16 16:24:40,257] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 3 epoch 1 for the become-leader transition for partition telegraf-0 (state.change.logger)
kafka1                          | [2020-06-16 16:24:40,262] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='telegraf', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition telegraf-0 in response to UpdateMetadata request sent by controller 3 epoch 1 with correlation id 4 (state.change.logger)
kafka1                          | [2020-06-16 16:33:53,250] INFO [GroupMetadataManager brokerId=1] Removed 0 expired offsets in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
kafka1                          | [2020-06-16 16:43:52,558] INFO [GroupMetadataManager brokerId=1] Removed 0 expired offsets in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
kafka1                          | [2020-06-16 16:53:51,868] INFO [GroupMetadataManager brokerId=1] Removed 0 expired offsets in 1 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
kafka1                          | [2020-06-16 17:03:51,178] INFO [GroupMetadataManager brokerId=1] Removed 0 expired offsets in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
kafka1                          | [2020-06-16 17:13:50,488] INFO [GroupMetadataManager brokerId=1] Removed 0 expired offsets in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
kafka1 exited with code 137

1 Answer

1 vote
/ June 17, 2020

Exit code 137 usually indicates that the process was terminated with SIGKILL (137 = 128 + signal 9).

This happens when someone runs kill -9 on the process, or when the process is killed by the kernel's OOM killer, e.g. because the container hit its memory limit. A quick check is shown below.
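
To confirm whether the OOM killer was involved, you can inspect the container state and the kernel log. A minimal check, assuming the crashed container is still present and is named kafka1 as in the compose output above:

docker inspect --format '{{.State.OOMKilled}} {{.State.ExitCode}}' kafka1   # prints "true 137" for an OOM kill
dmesg -T | grep -iE 'killed process|out of memory'                          # kernel-side evidence of the OOM killer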

See https://success.docker.com/article/what-causes-a-container-to-exit-with-code-137 for reference.
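
If it does turn out to be an OOM kill, a common mitigation is to cap the broker's JVM heap below the container memory limit. A sketch of the relevant docker-compose fragment, assuming an image that passes KAFKA_HEAP_OPTS through to kafka-server-start.sh (the Confluent images do); the sizes are illustrative only:

  kafka1:
    image: confluentinc/cp-kafka        # assumed image; keep whatever your compose file already uses
    mem_limit: 2g                       # compose v2-format key; use deploy.resources.limits.memory under v3/swarm
    environment:
      KAFKA_HEAP_OPTS: "-Xms1g -Xmx1g"  # keep the heap well under the container limit to leave room for off-heap memory and page cache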

...