Kafka Consumer produces excessive DEBUG statements
0 votes
/ 18 September 2018

I am running into a problem with the amount of logs produced by my service running in a K8s cluster.

The problem is similar to the one described here, but I was not able to fix it. My project uses Akka and Log4j2, and I could not solve this even after following the advice given in that earlier post.

Here is my Log4j2 configuration, followed by the application.conf for Akka.

<Configuration status="WARN">
  <Appenders>
    <Console name="Console" target="SYSTEM_OUT">
      <PatternLayout pattern="%d{DEFAULT} [%t] %-5level %logger{1}.%method - %msg%n"/>
    </Console>
  </Appenders>
  <Loggers>
    <Root level="info">
      <AppenderRef ref="Console"/>
    </Root>
  </Loggers>
</Configuration>
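Since the Root logger is already at info, DEBUG output should be filtered. A dedicated per-package override for the Kafka client, as sketched below, should not even be necessary under that configuration (logger name taken from the log output above):

```xml
<Loggers>
  <!-- Raise the level for the Kafka client packages only -->
  <Logger name="org.apache.kafka" level="WARN" additivity="false">
    <AppenderRef ref="Console"/>
  </Logger>
  <Root level="info">
    <AppenderRef ref="Console"/>
  </Root>
</Loggers>
```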

For Akka, in turn, I have:

akka {

  # Options: OFF, ERROR, WARNING, INFO, DEBUG
  loglevel = "ERROR"

  # Log level for the very basic logger activated during ActorSystem startup.
  # This logger prints the log messages to stdout (System.out).
  # Options: OFF, ERROR, WARNING, INFO, DEBUG
  stdout-loglevel = "ERROR"

  # Log the complete configuration at INFO level when the actor system is started.
  # This is useful when you are uncertain of what configuration is used.
  log-config-on-start = off



    # Properties for akka.kafka.ConsumerSettings can be
    # defined in this section or a configuration section with
    # the same layout. 

      kafka.consumer {
          # Tuning property of scheduled polls.
          poll-interval = 500ms

          # Tuning property of the `KafkaConsumer.poll` parameter.
          # Note that non-zero value means that the thread that
          # is executing the stage will be blocked.
          poll-timeout = 500ms

          # The stage will await outstanding offset commit requests before
          # shutting down, but if that takes longer than this timeout it will
          # stop forcefully.
          stop-timeout = 30s

          # How long to wait for `KafkaConsumer.close`
          close-timeout = 20s

          # If offset commit requests are not completed within this timeout
          # the returned Future is completed with a `CommitTimeoutException`.
          commit-timeout = 15s

          # If commits take longer than this time a warning is logged
          commit-time-warning = 1s

          # If for any reason `KafkaConsumer.poll` blocks for longer than the configured
          # poll-timeout then it is forcefully woken up with `KafkaConsumer.wakeup`.
          # The KafkaConsumerActor will throw
          # `org.apache.kafka.common.errors.WakeupException` which will be ignored
          # until `max-wakeups` limit gets exceeded.
          wakeup-timeout = 6s

          # After exceeding the maximum number of wakeups the consumer will stop and the stage will fail.
          # Setting it to 0 will let it ignore the wakeups and try to get the polling done forever.
          max-wakeups = 10

          # If set to a finite duration, the consumer will re-send the last committed offsets periodically
          # for all assigned partitions. See https://issues.apache.org/jira/browse/KAFKA-4682.
          commit-refresh-interval = infinite

          # If enabled, log stack traces before waking up the KafkaConsumer to give
          # some indication why the KafkaConsumer is not honouring the `poll-timeout`
          wakeup-debug = true

          # Fully qualified config path which holds the dispatcher configuration
          # to be used by the KafkaConsumerActor. Some blocking may occur.
          #use-dispatcher = "akka.kafka.default-dispatcher"


          # Time to wait for pending requests when a partition is closed
          wait-close-partition = 500ms
    }

}

But I still always see logs like the following:

17:00:25.346 [ActorSystem-akka.kafka.default-dispatcher-19] DEBUG o.a.k.clients.consumer.KafkaConsumer - Resuming partition Licenses-1
17:00:25.346 [ActorSystem-akka.kafka.default-dispatcher-19] DEBUG o.a.k.clients.consumer.KafkaConsumer - Resuming partition Licenses-0
17:00:25.346 [ActorSystem-akka.kafka.default-dispatcher-19] DEBUG o.a.k.clients.consumer.KafkaConsumer - Resuming partition Licenses-2
17:00:25.346 [ActorSystem-akka.kafka.default-dispatcher-19] DEBUG o.a.k.c.consumer.internals.Fetcher - Sending fetch for partitions [Licenses-0] to broker eric-data-message-bus-kf-0.eric-data-message-bus-kf.default:9092 (id: 0 rack: null)
17:00:25.346 [ActorSystem-akka.kafka.default-dispatcher-19] DEBUG o.a.k.c.consumer.internals.Fetcher - Sending fetch for partitions [Licenses-1] to broker eric-data-message-bus-kf-1.eric-data-message-bus-kf.default:9092 (id: 1 rack: null)
17:00:25.346 [ActorSystem-akka.kafka.default-dispatcher-19] DEBUG o.a.k.c.consumer.internals.Fetcher - Sending fetch for partitions [Licenses-2] to broker eric-data-message-bus-kf-2.eric-data-message-bus-kf.default:9092 (id: 2 rack: null)
17:00:25.427 [kafka-coordinator-heartbeat-thread | GroupIDTest] DEBUG o.a.k.c.c.i.AbstractCoordinator - Sending Heartbeat request for group GroupIDTest to coordinator eric-data-message-bus-kf-0.eric-data-message-bus-kf.default:9092 (id: 2147483647 rack: null)
17:00:25.428 [ActorSystem-akka.kafka.default-dispatcher-19] DEBUG o.a.k.c.c.i.AbstractCoordinator - Received successful Heartbeat response for group GroupIDTest
17:00:26.365 [ActorSystem-akka.kafka.default-dispatcher-20] DEBUG o.a.k.clients.consumer.KafkaConsumer - Resuming partition Licenses-1
17:00:26.365 [ActorSystem-akka.kafka.default-dispatcher-20] DEBUG o.a.k.clients.consumer.KafkaConsumer - Resuming partition Licenses-0
17:00:26.365 [ActorSystem-akka.kafka.default-dispatcher-20] DEBUG o.a.k.clients.consumer.KafkaConsumer - Resuming partition Licenses-2
17:00:26.365 [ActorSystem-akka.kafka.default-dispatcher-20] DEBUG o.a.k.c.consumer.internals.Fetcher - Sending fetch for partitions [Licenses-0] to broker eric-data-message-bus-kf-0.eric-data-message-bus-kf.default:9092 (id: 0 rack: null)
17:00:26.365 [ActorSystem-akka.kafka.default-dispatcher-20] DEBUG o.a.k.c.consumer.internals.Fetcher - Sending fetch for partitions [Licenses-1] to broker eric-data-message-bus-kf-1.eric-data-message-bus-kf.default:9092 (id: 1 rack: null)
17:00:26.365 [ActorSystem-akka.kafka.default-dispatcher-20] DEBUG o.a.k.c.consumer.internals.Fetcher - Sending fetch for partitions [Licenses-2] to broker eric-data-message-bus-kf-2.eric-data-message-bus-kf.default:9092 (id: 2 rack: null)

Any suggestions?

1 Answer

0 votes
/ 19 September 2018

The Kafka library actually uses slf4j-log4j12 internally, which in turn uses log4j as the underlying logging framework.

So you need to exclude it in your pom or sbt file from the kafka_2.10/kafka_2.11 artifact and from the kafka-clients/zookeeper artifacts if they are mentioned, and from any other place in the project's pom/sbt where it appears. Then declare the slf4j-log4j12 dependency explicitly in the pom or sbt, and put your log4j.xml in the src/main/resources folder with the level set to info. That way you will get rid of all the DEBUG statements.

Example in pom.xml:

<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-log4j12</artifactId>
    <version>1.7.5</version>
</dependency>

<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka_2.11</artifactId>
    <version>1.0.0</version>
    <exclusions>
        <exclusion>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-log4j12</artifactId>
        </exclusion>
        <exclusion>
            <groupId>log4j</groupId>
            <artifactId>log4j</artifactId>
        </exclusion>
    </exclusions>
</dependency>
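The log4j.xml mentioned above could look like this minimal sketch (log4j 1.x format; the console appender and pattern are assumptions, chosen to mirror the Log4j2 config in the question):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE log4j:configuration SYSTEM "log4j.dtd">
<log4j:configuration xmlns:log4j="http://jakarta.apache.org/log4j/">
    <!-- Console appender roughly matching the Log4j2 pattern above -->
    <appender name="console" class="org.apache.log4j.ConsoleAppender">
        <layout class="org.apache.log4j.PatternLayout">
            <param name="ConversionPattern" value="%d [%t] %-5p %c - %m%n"/>
        </layout>
    </appender>
    <!-- Root at info suppresses the Kafka client DEBUG output -->
    <root>
        <priority value="info"/>
        <appender-ref ref="console"/>
    </root>
</log4j:configuration>
```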

In build.sbt:

append exclude("org.slf4j", "slf4j-log4j12") to each entry in libraryDependencies.
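For example, a build.sbt along these lines (a sketch; the versions shown are just illustrative):

```scala
// Exclude the transitive log4j bindings from the Kafka artifact
// and declare slf4j-log4j12 explicitly instead.
libraryDependencies ++= Seq(
  ("org.apache.kafka" %% "kafka" % "1.0.0")
    .exclude("org.slf4j", "slf4j-log4j12")
    .exclude("log4j", "log4j"),
  "org.slf4j" % "slf4j-log4j12" % "1.7.5"
)
```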
