Heartbeat session expired, coordinator dead - PullRequest
0 votes
/ January 30, 2019

I am using the python-kafka library (1.4.4) to consume messages (Kafka version 1.1.0, Python 3.7). It throws the error below again and again, and I can't tell where it comes from. Here is my Python code; first, the consumer setup:

consumer = KafkaConsumer('dolphin-spider-google-book-bookinfo',
                         bootstrap_servers=['mq-server:9092'],
                         group_id="google-book",
                         client_id="dolphin-pipline-google-bookinfo-consumer-foolman",
                         # Manage Kafka offsets manually
                         enable_auto_commit=False,
                         consumer_timeout_ms=50000,
                         # Consume from the beginning
                         auto_offset_reset="earliest",
                         max_poll_interval_ms=350000,
                         session_timeout_ms=60000,
                         request_timeout_ms=700000
                         )
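These timeout values are not independent: kafka-python expects them to respect a few ordering constraints, or heartbeating and rebalancing misbehave. A rough sanity-check sketch for the values above (the `heartbeat_interval_ms` of 3000 is assumed from the kafka-python default, not taken from the code):

```python
# Sanity-check the timeout relationships for the configuration above.
heartbeat_interval_ms = 3000      # kafka-python default (assumed)
session_timeout_ms = 60000
max_poll_interval_ms = 350000
request_timeout_ms = 700000

# Heartbeats must fire several times within one session timeout, and the
# client-side request timeout should exceed both the session and poll limits.
assert heartbeat_interval_ms * 3 <= session_timeout_ms
assert session_timeout_ms < request_timeout_ms
assert max_poll_interval_ms < request_timeout_ms
```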

Here is the consumption logic:

    def consume_bookinfo(self):
        while True:
            try:
                for books in self.consumer:
                    logger.info("Get books info offset: %s", books.offset)
                    self.sub_process_handle(books.value, books.offset)
            except Exception as e:
                logger.error(e)

    def sub_process_handle(self, bookinfo, offset):
        number_of_threadings = len(threading.enumerate())
        if number_of_threadings < 13:
            t = threading.Thread(target=self.background_process,
                                 name="offset-" + str(offset),
                                 args=(bookinfo,), kwargs={})
            t.start()
        else:
            # All worker threads are busy: handle in the main thread
            # and sleep to slow down Kafka consumption
            logger.info("Reach max handle thread, sleep 20s to wait thread release...")
            time.sleep(20)
            self.sub_process_handle(bookinfo, offset)

    def background_process(self, bookinfo):
        self.parse_bookinfo(bookinfo)
        self.consumer.commit_async(callback=self.offset_commit_result)
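As an aside, the recursive retry in `sub_process_handle` can grow the call stack if the consumer stays saturated for a long stretch. A minimal non-recursive sketch of the same back-pressure idea, capping concurrent workers with a semaphore (the names `MAX_WORKERS`, `slots`, and `handle` are hypothetical, not from the code above):

```python
import threading

MAX_WORKERS = 12
slots = threading.Semaphore(MAX_WORKERS)

def handle(bookinfo, offset, work):
    slots.acquire()  # blocks the consumer loop while all worker slots are busy
    def run():
        try:
            work(bookinfo)
        finally:
            slots.release()  # free the slot even if processing raises
    threading.Thread(target=run, name="offset-" + str(offset)).start()
```

Blocking on `acquire()` makes the consumer loop wait exactly as long as needed, instead of sleeping a fixed 20 seconds and recursing.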

I spawn multiple threads to handle the consumption logic. The error:

2019-01-30 02:46:52,948 - /home/dolphin/source/dolphin-pipline/dolphin/biz/spider_bookinfo_consumer.py[line:37] - INFO: Get books info offset: 9304
2019-01-30 02:46:52,948 - /home/dolphin/source/dolphin-pipline/dolphin/biz/spider_bookinfo_consumer.py[line:51] - INFO: Reach max handle thread,sleep 20s to wait thread release...
2019-01-30 02:47:12,968 - /home/dolphin/source/dolphin-pipline/dolphin/biz/spider_bookinfo_consumer.py[line:61] - INFO: commit offset success,offsets: {TopicPartition(topic='dolphin-spider-google-book-bookinfo', partition=0): OffsetAndMetadata(offset=9305, metadata='')}
2019-01-30 04:27:47,322 - /usr/local/lib/python3.5/site-packages/kafka/coordinator/base.py[line:964] - WARNING: Heartbeat session expired, marking coordinator dead
2019-01-30 04:27:47,323 - /usr/local/lib/python3.5/site-packages/kafka/coordinator/base.py[line:698] - WARNING: Marking the coordinator dead (node 0) for group google-book: Heartbeat session expired.
2019-01-30 04:27:47,433 - /usr/local/lib/python3.5/site-packages/kafka/cluster.py[line:353] - INFO: Group coordinator for google-book is BrokerMetadata(nodeId=0, host='35.229.69.193', port=9092, rack=None)
2019-01-30 04:27:47,433 - /usr/local/lib/python3.5/site-packages/kafka/coordinator/base.py[line:676] - INFO: Discovered coordinator 0 for group google-book
2019-01-30 04:27:47,433 - /usr/local/lib/python3.5/site-packages/kafka/coordinator/consumer.py[line:341] - INFO: Revoking previously assigned partitions {TopicPartition(topic='dolphin-spider-google-book-bookinfo', partition=0)} for group google-book
2019-01-30 04:27:47,433 - /usr/local/lib/python3.5/site-packages/kafka/coordinator/base.py[line:434] - INFO: (Re-)joining group google-book
2019-01-30 04:27:47,437 - /usr/local/lib/python3.5/site-packages/kafka/coordinator/base.py[line:504] - INFO: Elected group leader -- performing partition assignments using range
2019-01-30 04:27:47,439 - /usr/local/lib/python3.5/site-packages/kafka/coordinator/base.py[line:333] - INFO: Successfully joined group google-book with generation 470
2019-01-30 04:27:47,439 - /usr/local/lib/python3.5/site-packages/kafka/consumer/subscription_state.py[line:257] - INFO: Updated partition assignment: [TopicPartition(topic='dolphin-spider-google-book-bookinfo', partition=0)]
2019-01-30 04:27:47,439 - /usr/local/lib/python3.5/site-packages/kafka/coordinator/consumer.py[line:238] - INFO: Setting newly assigned partitions {TopicPartition(topic='dolphin-spider-google-book-bookinfo', partition=0)} for group google-book
2019-01-30 04:27:47,694 - /home/dolphin/source/dolphin-pipline/dolphin/biz/spider_bookinfo_consumer.py[line:63] - ERROR: commit offset failed,detail: CommitFailedError: Commit cannot be completed since the group has already
            rebalanced and assigned the partitions to another member.
            This means that the time between subsequent calls to poll()
            was longer than the configured max_poll_interval_ms, which
            typically implies that the poll loop is spending too much
            time message processing. You can address this either by
            increasing the rebalance timeout with max_poll_interval_ms,
            or by reducing the maximum size of batches returned in poll()
            with max_poll_records.

2019-01-30 04:27:47,694 - /home/dolphin/source/dolphin-pipline/dolphin/biz/spider_bookinfo_consumer.py[line:63] - ERROR: commit offset failed,detail: CommitFailedError: Commit cannot be completed since the group has already
            rebalanced and assigned the partitions to another member.
            This means that the time between subsequent calls to poll()
            was longer than the configured max_poll_interval_ms, which
            typically implies that the poll loop is spending too much
            time message processing. You can address this either by
            increasing the rebalance timeout with max_poll_interval_ms,
            or by reducing the maximum size of batches returned in poll()
            with max_poll_records.

How can I avoid this problem? What should I do?

1 Answer

0 votes
/ January 31, 2019

Tune the consumer's poll loop:

    def consume_bookinfo(self):
        while True:
            try:
                # poll() returns a dict: {TopicPartition: [ConsumerRecord, ...]}
                msg_pack = self.consumer.poll(timeout_ms=5000, max_records=1)
                for tp, records in msg_pack.items():
                    logger.info("TopicPartition: %s", tp)
                    for consumer_record in records:
                        logger.info("Get books info offset: %s", consumer_record.offset)
                        self.sub_process_handle(consumer_record.value, consumer_record.offset)
            except Exception as e:
                logger.error(e)

This works perfectly for me!
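Since kafka-python 1.4.0, heartbeats are sent from a background thread, so the expiry in the logs above points at the gap between `poll()` calls exceeding `max_poll_interval_ms` rather than at missed heartbeats per se. A small hypothetical watchdog (not part of kafka-python) can flag when the loop is drifting toward that limit:

```python
import time

class PollWatchdog:
    """Warn when the gap between consecutive poll() calls nears the limit."""

    def __init__(self, limit_ms, warn_ratio=0.8):
        self.limit_ms = limit_ms
        self.warn_ratio = warn_ratio
        self.last_poll = None

    def mark_poll(self):
        # Returns True when the gap since the previous call is at or above
        # warn_ratio * limit_ms, i.e. the consumer risks a rebalance.
        now = time.monotonic()
        gap_ms = 0.0 if self.last_poll is None else (now - self.last_poll) * 1000
        self.last_poll = now
        return gap_ms >= self.limit_ms * self.warn_ratio

# Call watchdog.mark_poll() right before each consumer.poll(...) and log a
# warning when it returns True (350000 matches max_poll_interval_ms above).
watchdog = PollWatchdog(350000)
```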
