How to set up a Kafka cluster using the Confluent Docker images

I tried to set up a 3-node Kafka cluster using the Confluent Docker images:

https://hub.docker.com/r/confluentinc/cp-kafka

https://hub.docker.com/r/confluentinc/cp-zookeeper

docker run -d --restart always --name zk-1 -e zk_id=1 -e zk_server.1=ip1:2888:3888 -e zk_server.2=ip2:2888:3888 -e zk_server.3=ip3:2888:3888 -e ZOOKEEPER_CLIENT_PORT=2181 -p 2181:2181 -p 2888:2888 -p 3888:3888 confluentinc/cp-zookeeper:5.2.1-1
docker run -d --restart always --name zk-2 -e zk_id=2 -e zk_server.1=ip1:2888:3888 -e zk_server.2=ip2:2888:3888 -e zk_server.3=ip3:2888:3888 -e ZOOKEEPER_CLIENT_PORT=2181 -p 2181:2181 -p 2888:2888 -p 3888:3888 confluentinc/cp-zookeeper:5.2.1-1
docker run -d --restart always --name zk-3 -e zk_id=3 -e zk_server.1=ip1:2888:3888 -e zk_server.2=ip2:2888:3888 -e zk_server.3=ip3:2888:3888 -e ZOOKEEPER_CLIENT_PORT=2181 -p 2181:2181 -p 2888:2888 -p 3888:3888 confluentinc/cp-zookeeper:5.2.1-1
docker run -d --restart always --name kafka-1 -e KAFKA_BROKER_ID=1 -e KAFKA_ZOOKEEPER_CONNECT=ip1:2181,ip2:2181,ip3:2181 -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://ip1:9092 -p 9092:9092 confluentinc/cp-kafka:5.2.1-1
docker run -d --restart always --name kafka-2 -e KAFKA_BROKER_ID=2 -e KAFKA_ZOOKEEPER_CONNECT=ip1:2181,ip2:2181,ip3:2181 -p 9092:9092 -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://ip2:9092 confluentinc/cp-kafka:5.2.1-1
docker run -d --restart always --name kafka-3 -e KAFKA_BROKER_ID=3 -e KAFKA_ZOOKEEPER_CONNECT=ip1:2181,ip2:2181,ip3:2181 -p 9092:9092 -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://ip3:9092 confluentinc/cp-kafka:5.2.1-1

But when the cluster comes up, I only see 1 or 2 brokers instead of all 3 brokers that I started.

docker run -it --net host confluentinc/cp-kafkacat kafkacat -b localhost:9092 -L

Was I supposed to set the bootstrap.servers property when forming the cluster? If so, I don't see it in the documentation referenced above.
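
For reference, one way to see which brokers actually registered themselves is to list /brokers/ids in Zookeeper. A minimal check, assuming the cp-kafka image ships the standard zookeeper-shell wrapper (ip1 is the same placeholder address as in the commands above):

# Assumes zookeeper-shell is available in the cp-kafka image
docker run -it --rm confluentinc/cp-kafka:5.2.1-1 \
  zookeeper-shell ip1:2181 ls /brokers/ids
# A healthy 3-broker cluster should report all three ids, e.g. [1, 2, 3]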

Logs

[2019-06-06 06:12:06,788] INFO [ExpirationReaper-1-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2019-06-06 06:12:06,793] INFO [ExpirationReaper-1-ElectPreferredLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2019-06-06 06:12:06,801] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler)
[2019-06-06 06:12:06,850] INFO Creating /brokers/ids/1 (is it secure? false) (kafka.zk.KafkaZkClient)
[2019-06-06 06:12:06,868] INFO Stat of the created znode at /brokers/ids/1 is: 565,565,1559801526862,1559801526862,1,0,0,72057595854848009,196,0,565
 (kafka.zk.KafkaZkClient)
[2019-06-06 06:12:06,870] INFO Registered broker 1 at path /brokers/ids/1 with addresses: ArrayBuffer(EndPoint(ip1,9092,ListenerName(PLAINTEXT),PLAINTEXT)), czxid (broker epoch): 565 (kafka.zk.KafkaZkClient)
[2019-06-06 06:12:06,871] WARN No meta.properties file under dir /var/lib/kafka/data/meta.properties (kafka.server.BrokerMetadataCheckpoint)
[2019-06-06 06:12:06,934] INFO [ControllerEventThread controllerId=1] Starting (kafka.controller.ControllerEventManager$ControllerEventThread)
[2019-06-06 06:12:06,943] INFO [ExpirationReaper-1-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2019-06-06 06:12:06,949] INFO [ExpirationReaper-1-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2019-06-06 06:12:06,958] INFO [ExpirationReaper-1-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2019-06-06 06:12:06,960] DEBUG [Controller id=1] Broker 2 has been elected as the controller, so stopping the election process. (kafka.controller.KafkaController)
[2019-06-06 06:12:06,966] INFO [GroupCoordinator 1]: Starting up. (kafka.coordinator.group.GroupCoordinator)
[2019-06-06 06:12:06,966] INFO [GroupCoordinator 1]: Startup complete. (kafka.coordinator.group.GroupCoordinator)
[2019-06-06 06:12:06,970] INFO [GroupMetadataManager brokerId=1] Removed 0 expired offsets in 1 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2019-06-06 06:12:06,983] INFO [ProducerId Manager 1]: Acquired new producerId block (brokerId:1,blockStartProducerId:9000,blockEndProducerId:9999) by writing to Zk with path version 10 (kafka.coordinator.transaction.ProducerIdManager)
[2019-06-06 06:12:07,004] INFO [TransactionCoordinator id=1] Starting up. (kafka.coordinator.transaction.TransactionCoordinator)
[2019-06-06 06:12:07,005] INFO [TransactionCoordinator id=1] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator)
[2019-06-06 06:12:07,007] INFO [Transaction Marker Channel Manager 1]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager)
[2019-06-06 06:12:07,031] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread)
[2019-06-06 06:12:07,044] INFO [SocketServer brokerId=1] Started data-plane processors for 1 acceptors (kafka.network.SocketServer)
[2019-06-06 06:12:07,048] INFO Kafka version: 2.2.0-cp2 (org.apache.kafka.common.utils.AppInfoParser)
[2019-06-06 06:12:07,068] INFO Kafka commitId: 00d486623990ed9d (org.apache.kafka.common.utils.AppInfoParser)
[2019-06-06 06:12:07,072] INFO [KafkaServer id=1] started (kafka.server.KafkaServer)
[2019-06-06 06:12:07,078] INFO Waiting until monitored service is ready for metrics collection (io.confluent.support.metrics.BaseMetricsReporter)
[2019-06-06 06:12:07,082] INFO Monitored service is now ready (io.confluent.support.metrics.BaseMetricsReporter)
[2019-06-06 06:12:07,082] INFO Attempting to collect and submit metrics (io.confluent.support.metrics.BaseMetricsReporter)
[2019-06-06 06:12:07,108] TRACE [Broker id=1] Cached leader info PartitionState(controllerEpoch=5, leader=2, leaderEpoch=1, isr=[2], zkVersion=1, replicas=[2], offlineReplicas=[]) for partition __confluent.support.metrics-0 in response to UpdateMetadata request sent by controller 2 epoch 6 with correlation id 0 (state.change.logger)
[2019-06-06 06:12:07,322] WARN The replication factor of topic __confluent.support.metrics is 1, which is less than the desired replication factor of 3.  If you happen to add more brokers to this cluster, then it is important to increase the replication factor of the topic to eventually 3 to ensure reliable and durable metrics collection. (io.confluent.support.metrics.common.kafka.KafkaUtilities)
[2019-06-06 06:12:07,334] INFO ProducerConfig values: 
    acks = 1
    batch.size = 16384
    bootstrap.servers = [PLAINTEXT://ip1:9092, PLAINTEXT://ip2:9092]
    buffer.memory = 33554432
    client.dns.lookup = default
    client.id = 
    compression.type = none
    connections.max.idle.ms = 540000
    delivery.timeout.ms = 120000
    enable.idempotence = false
    interceptor.classes = []
    key.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
    linger.ms = 0
    max.block.ms = 10000
    max.in.flight.requests.per.connection = 5
    max.request.size = 1048576
    metadata.max.age.ms = 300000
    metric.reporters = []
    metrics.num.samples = 2
    metrics.recording.level = INFO
    metrics.sample.window.ms = 30000
    partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
    receive.buffer.bytes = 32768
    reconnect.backoff.max.ms = 1000
    reconnect.backoff.ms = 50
    request.timeout.ms = 30000
    retries = 2147483647
    retry.backoff.ms = 100
    sasl.client.callback.handler.class = null
    sasl.jaas.config = null
    sasl.kerberos.kinit.cmd = /usr/bin/kinit
    sasl.kerberos.min.time.before.relogin = 60000
    sasl.kerberos.service.name = null
    sasl.kerberos.ticket.renew.jitter = 0.05
    sasl.kerberos.ticket.renew.window.factor = 0.8
    sasl.login.callback.handler.class = null
    sasl.login.class = null
    sasl.login.refresh.buffer.seconds = 300
    sasl.login.refresh.min.period.seconds = 60
    sasl.login.refresh.window.factor = 0.8
    sasl.login.refresh.window.jitter = 0.05
    sasl.mechanism = GSSAPI
    security.protocol = PLAINTEXT
    send.buffer.bytes = 131072
    ssl.cipher.suites = null
    ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
    ssl.endpoint.identification.algorithm = https
    ssl.key.password = null
    ssl.keymanager.algorithm = SunX509
    ssl.keystore.location = null
    ssl.keystore.password = null
    ssl.keystore.type = JKS
    ssl.protocol = TLS
    ssl.provider = null
    ssl.secure.random.implementation = null
    ssl.trustmanager.algorithm = PKIX
    ssl.truststore.location = null
    ssl.truststore.password = null
    ssl.truststore.type = JKS
    transaction.timeout.ms = 60000
    transactional.id = null
    value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
 (org.apache.kafka.clients.producer.ProducerConfig)
[2019-06-06 06:12:07,366] INFO Kafka version: 2.2.0-cp2 (org.apache.kafka.common.utils.AppInfoParser)
[2019-06-06 06:12:07,366] INFO Kafka commitId: 00d486623990ed9d (org.apache.kafka.common.utils.AppInfoParser)
[2019-06-06 06:12:07,399] INFO Cluster ID: 0DmmTlnXQMGD52urD7rxuA (org.apache.kafka.clients.Metadata)
[2019-06-06 06:12:07,449] INFO [Producer clientId=producer-1] Closing the Kafka producer with timeoutMillis = 9223372036854775807 ms. (org.apache.kafka.clients.producer.KafkaProducer)
[2019-06-06 06:12:07,459] INFO Successfully submitted metrics to Kafka topic __confluent.support.metrics (io.confluent.support.metrics.submitters.KafkaSubmitter)
[2019-06-06 06:12:08,470] INFO Successfully submitted metrics to Confluent via secure endpoint (io.confluent.support.metrics.submitters.ConfluentSubmitter)

As you can see, bootstrap.servers is populated automatically with ip1 and ip2, while ip3 is missing.

1 Answer


First of all, this already exists as a Compose file:

https://github.com/confluentinc/cp-docker-images/blob/5.1.2-post/examples/kafka-cluster/docker-compose.yml

That said, I think it assumes your host is Linux, since that is where network: host works as expected.


In any case, a few notes (a minimal single-host sketch that applies them follows the list):

  • For example, ip1 doesn't exist ... it should be zk-1, the hostname of that container
  • Start simpler. Get one broker working first; you can run multiple brokers against a single Zookeeper
  • You cannot map the same ports more than once on the same host ... see each use of -p 2181:2181 -p 2888:2888 -p 3888:3888 (2888 and 3888 don't actually need to be reachable from your host), and likewise for -p 9092:9092
  • --net host is not needed for kafkacat if you attach all the containers to the same --network (as Docker Compose would)
  • Running lots of brokers on a single host machine doesn't add many "benefits"
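
Putting those notes together, here is a minimal single-host sketch. The image tags, the kafka-net network name and the container hostnames are illustrative assumptions, not taken from the linked Compose file:

# One user-defined network so containers resolve each other by name
docker network create kafka-net

# Single Zookeeper node
docker run -d --name zk-1 --network kafka-net \
  -e ZOOKEEPER_CLIENT_PORT=2181 \
  confluentinc/cp-zookeeper:5.2.1-1

# First broker; advertise the container hostname, not a host IP
docker run -d --name kafka-1 --network kafka-net \
  -e KAFKA_BROKER_ID=1 \
  -e KAFKA_ZOOKEEPER_CONNECT=zk-1:2181 \
  -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka-1:9092 \
  confluentinc/cp-kafka:5.2.1-1

# Repeat for kafka-2 and kafka-3, changing only the name, broker id and advertised hostname

# No -p mappings or --net host needed: kafkacat joins the same network
# and reaches the brokers by container name
docker run -it --rm --network kafka-net confluentinc/cp-kafkacat \
  kafkacat -b kafka-1:9092 -L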
...