Exception while loading Zookeeper JAAS login context. Could not find a 'KafkaServer' or 'sasl_plaintext.KafkaServer' entry in the JAAS configuration.
0 votes / March 30, 2020

I am following this tutorial to secure my Kafka broker, and I am stuck after implementing SASL_SSL authentication. Here is what I have done:

  1. Downloaded this GitHub configuration project.
  2. Moved the keystore and truststore folders into my Apache Kafka config folder.
  3. Added a kafka_server_jaas.conf file to the config folder with the following settings:

    KafkaServer {
        org.apache.kafka.common.security.scram.ScramLoginModule required
        username="admin"
        password="admin-secret";
    };

  4. Updated server.properties with this:

    ##### SECURITY using SCRAM-SHA-512 and SSL
    listeners=PLAINTEXT://localhost:9092,SASL_PLAINTEXT://localhost:9093,SASL_SSL://localhost:9094
    advertised.listeners=PLAINTEXT://localhost:9092,SASL_PLAINTEXT://localhost:9093,SASL_SSL://localhost:9094
    security.inter.broker.protocol=SASL_SSL
    ssl.endpoint.identification.algorithm=
    ssl.client.auth=required
    sasl.mechanism.inter.broker.protocol=SCRAM-SHA-512
    sasl.enabled.mechanisms=SCRAM-SHA-512

    # Broker security settings
    ssl.truststore.location=truststore/kafka.truststore.jks
    ssl.truststore.password=password
    ssl.keystore.location=keystore/kafka.keystore.jks
    ssl.keystore.password=password
    ssl.key.password=password

    # ACL
    authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
    super.users=User:admin

    # ZooKeeper SASL
    zookeeper.set.acl=false
    ##### SECURITY using SCRAM-SHA-512 and SSL
  5. Added an ssl-user-config.properties file to the config folder:

    security.protocol=SASL_SSL
    sasl.mechanism=SCRAM-SHA-512
    sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="demouser" password="secret";
    ssl.truststore.location=truststore/kafka.truststore.jks
    ssl.truststore.password=password
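(For reference, this client file is what the console clients would consume when talking to the SASL_SSL listener on 9094; the topic name here is just a placeholder:

```sh
bin/kafka-console-producer.sh --broker-list localhost:9094 --topic test \
    --producer.config config/ssl-user-config.properties
```
)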

  6. Started Zookeeper and then created the super user with this command:

    ./bin/kafka-configs.sh --zookeeper localhost:2181 --alter --add-config 'SCRAM-SHA-512=[password=admin-secret]' --entity-type users --entity-name admin
    Completed Updating config for entity: user-principal 'admin'.
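(As a sanity check, which is not part of the tutorial but uses the same standard kafka-configs.sh flags, the stored SCRAM credential can be listed back:

```sh
./bin/kafka-configs.sh --zookeeper localhost:2181 --describe \
    --entity-type users --entity-name admin
```
)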

  7. Now I am trying to start the Kafka server with this sh file, as described here:

    export KAFKA_OPTS="-Djava.security.auth.login.config=kafka_2.13-2.4.1/config/kafka_server_jaas.conf"
    bin/windows/kafka-server-start.bat config/server.properties
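(Side note on the "No such file or directory" in the first stack trace below: the JVM resolves a relative path in `-Djava.security.auth.login.config` against the directory the script is launched from, not against the Kafka installation. A minimal sketch, using hypothetical temp paths and nothing Kafka-specific:

```shell
# A relative path only resolves from the directory it was written against.
kafka_home=$(mktemp -d)                 # stand-in for the Kafka install dir
mkdir -p "$kafka_home/config"
printf 'KafkaServer { };\n' > "$kafka_home/config/kafka_server_jaas.conf"

cd "$kafka_home"
# Relative path works here, because the current directory matches.
test -f config/kafka_server_jaas.conf && echo "relative path OK from kafka_home"

cd /
# From any other directory the same relative path no longer resolves,
# while the absolute path still does.
test -f "$kafka_home/config/kafka_server_jaas.conf" && echo "absolute path OK from anywhere"
```

so an absolute path in KAFKA_OPTS sidesteps the question of which directory the start script happens to run from.)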

and I get this output:

$ sh KafkSSLserver.sh
[2020-03-30 14:28:27,863] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
[2020-03-30 14:28:29,920] INFO starting (kafka.server.KafkaServer)
[2020-03-30 14:28:29,925] INFO Connecting to zookeeper on localhost:2181 (kafka.server.KafkaServer)
[2020-03-30 14:28:29,975] ERROR Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
org.apache.kafka.common.KafkaException: Exception while loading Zookeeper JAAS login context [java.security.auth.login.config=kafka_2.13-2.4.1/config/kafka_server_jaas.conf, zookeeper.sasl.client=default:true, zookeeper.sasl.clientconfig=default:Client]
        at org.apache.kafka.common.security.JaasUtils.isZkSecurityEnabled(JaasUtils.java:64)
        at kafka.server.KafkaServer.initZkClient(KafkaServer.scala:384)
        at kafka.server.KafkaServer.startup(KafkaServer.scala:207)
        at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:44)
        at kafka.Kafka$.main(Kafka.scala:84)
        at kafka.Kafka.main(Kafka.scala)
Caused by: java.lang.SecurityException: java.io.IOException: kafka_2.13-2.4.1/config/kafka_server_jaas.conf (No such file or directory)
        at sun.security.provider.ConfigFile$Spi.<init>(Unknown Source)
        at sun.security.provider.ConfigFile.<init>(Unknown Source)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(Unknown Source)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(Unknown Source)
        at java.lang.reflect.Constructor.newInstance(Unknown Source)
        at java.lang.Class.newInstance(Unknown Source)
        at javax.security.auth.login.Configuration$2.run(Unknown Source)
        at javax.security.auth.login.Configuration$2.run(Unknown Source)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.login.Configuration.getConfiguration(Unknown Source)
        at org.apache.kafka.common.security.JaasUtils.isZkSecurityEnabled(JaasUtils.java:60)
        ... 5 more
Caused by: java.io.IOException: kafka_2.13-2.4.1/config/kafka_server_jaas.conf (No such file or directory)
        at sun.security.provider.ConfigFile$Spi.ioException(Unknown Source)
        at sun.security.provider.ConfigFile$Spi.init(Unknown Source)
        ... 17 more
[2020-03-30 14:28:29,989] INFO shutting down (kafka.server.KafkaServer)
[2020-03-30 14:28:30,024] INFO shut down completed (kafka.server.KafkaServer)
[2020-03-30 14:28:30,028] ERROR Exiting Kafka. (kafka.server.KafkaServerStartable)
[2020-03-30 14:28:30,038] INFO shutting down (kafka.server.KafkaServer)

Meanwhile, I tried to run a .bat file:

export KAFKA_OPTS="-Djava.security.auth.login.config=config/kafka_server_jaas.conf"
start bin\windows\kafka-server-start.bat config\server.properties

and I got this:

        advertised.listeners = PLAINTEXT://localhost:9092,SASL_PLAINTEXT://localhost:9093,SASL_SSL://localhost:9094
        advertised.port = null
        alter.config.policy.class.name = null
        alter.log.dirs.replication.quota.window.num = 11
        alter.log.dirs.replication.quota.window.size.seconds = 1
        authorizer.class.name = kafka.security.auth.SimpleAclAuthorizer
        auto.create.topics.enable = true
        auto.leader.rebalance.enable = true
        background.threads = 10
        broker.id = 0
        broker.id.generation.enable = true
        broker.rack = null
        client.quota.callback.class = null
        compression.type = producer
        connection.failed.authentication.delay.ms = 100
        connections.max.idle.ms = 600000
        connections.max.reauth.ms = 0
        control.plane.listener.name = null
        controlled.shutdown.enable = true
        controlled.shutdown.max.retries = 3
        controlled.shutdown.retry.backoff.ms = 5000
        controller.socket.timeout.ms = 30000
        create.topic.policy.class.name = null
        default.replication.factor = 1
        delegation.token.expiry.check.interval.ms = 3600000
        delegation.token.expiry.time.ms = 86400000
        delegation.token.master.key = null
        delegation.token.max.lifetime.ms = 604800000
        delete.records.purgatory.purge.interval.requests = 1
        delete.topic.enable = true
        fetch.purgatory.purge.interval.requests = 1000
        group.initial.rebalance.delay.ms = 0
        group.max.session.timeout.ms = 1800000
        group.max.size = 2147483647
        group.min.session.timeout.ms = 6000
        host.name =
        inter.broker.listener.name = null
        inter.broker.protocol.version = 2.4-IV1
        kafka.metrics.polling.interval.secs = 10
        kafka.metrics.reporters = []
        leader.imbalance.check.interval.seconds = 300
        leader.imbalance.per.broker.percentage = 10
        listener.security.protocol.map = PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
        listeners = PLAINTEXT://localhost:9092,SASL_PLAINT EXT://localhost:9093,SASL_SSL://localhost:9094
        log.cleaner.backoff.ms = 15000
        log.cleaner.dedupe.buffer.size = 134217728
        log.cleaner.delete.retention.ms = 86400000
        log.cleaner.enable = true
        log.cleaner.io.buffer.load.factor = 0.9
        log.cleaner.io.buffer.size = 524288
        log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
        log.cleaner.max.compaction.lag.ms = 9223372036854775807
        log.cleaner.min.cleanable.ratio = 0.5
        log.cleaner.min.compaction.lag.ms = 0
        log.cleaner.threads = 1
        log.cleanup.policy = [delete]
        log.dir = /tmp/kafka-logs
        log.dirs = /tmp/kafka-logs
        log.flush.interval.messages = 9223372036854775807
        log.flush.interval.ms = null
        log.flush.offset.checkpoint.interval.ms = 60000
        log.flush.scheduler.interval.ms = 9223372036854775807
        log.flush.start.offset.checkpoint.interval.ms = 60000
        log.index.interval.bytes = 4096
        log.index.size.max.bytes = 10485760
        log.message.downconversion.enable = true
        log.message.format.version = 2.4-IV1
        log.message.timestamp.difference.max.ms = 9223372036854775807
        log.message.timestamp.type = CreateTime
        log.preallocate = false
        log.retention.bytes = -1
        log.retention.check.interval.ms = 300000
        log.retention.hours = 168
        log.retention.minutes = null
        log.retention.ms = null
        log.roll.hours = 168
        log.roll.jitter.hours = 0
        log.roll.jitter.ms = null
        log.roll.ms = null
        log.segment.bytes = 1073741824
        log.segment.delete.delay.ms = 60000
        max.connections = 2147483647
        max.connections.per.ip = 2147483647
        max.connections.per.ip.overrides =
        max.incremental.fetch.session.cache.slots = 1000
        message.max.bytes = 1000012
        metric.reporters = []
        metrics.num.samples = 2
        metrics.recording.level = INFO
        metrics.sample.window.ms = 30000
        min.insync.replicas = 1
        num.io.threads = 8
        num.network.threads = 3
        num.partitions = 1
        num.recovery.threads.per.data.dir = 1
        num.replica.alter.log.dirs.threads = null
        num.replica.fetchers = 1
        offset.metadata.max.bytes = 4096
        offsets.commit.required.acks = -1
        offsets.commit.timeout.ms = 5000
        offsets.load.buffer.size = 5242880
        offsets.retention.check.interval.ms = 600000
        offsets.retention.minutes = 10080
        offsets.topic.compression.codec = 0
        offsets.topic.num.partitions = 50
        offsets.topic.replication.factor = 1
        offsets.topic.segment.bytes = 104857600
        password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding
        password.encoder.iterations = 4096
        password.encoder.key.length = 128
        password.encoder.keyfactory.algorithm = null
        password.encoder.old.secret = null
        password.encoder.secret = null
        port = 9092
        principal.builder.class = null
        producer.purgatory.purge.interval.requests = 1000
        queued.max.request.bytes = -1
        queued.max.requests = 500
        quota.consumer.default = 9223372036854775807
        quota.producer.default = 9223372036854775807
        quota.window.num = 11
        quota.window.size.seconds = 1
        replica.fetch.backoff.ms = 1000
        replica.fetch.max.bytes = 1048576
        replica.fetch.min.bytes = 1
        replica.fetch.response.max.bytes = 10485760
        replica.fetch.wait.max.ms = 500
        replica.high.watermark.checkpoint.interval.ms = 5000
        replica.lag.time.max.ms = 10000
        replica.selector.class = null
        replica.socket.receive.buffer.bytes = 65536
        replica.socket.timeout.ms = 30000
        replication.quota.window.num = 11
        replication.quota.window.size.seconds = 1
        request.timeout.ms = 30000
        reserved.broker.max.id = 1000
        sasl.client.callback.handler.class = null
        sasl.enabled.mechanisms = [SCRAM-SHA-512]
        sasl.jaas.config = null
        sasl.kerberos.kinit.cmd = /usr/bin/kinit
        sasl.kerberos.min.time.before.relogin = 60000
        sasl.kerberos.principal.to.local.rules = [DEFAULT]
        sasl.kerberos.service.name = null
        sasl.kerberos.ticket.renew.jitter = 0.05
        sasl.kerberos.ticket.renew.window.factor = 0.8
        sasl.login.callback.handler.class = null
        sasl.login.class = null
        sasl.login.refresh.buffer.seconds = 300
        sasl.login.refresh.min.period.seconds = 60
        sasl.login.refresh.window.factor = 0.8
        sasl.login.refresh.window.jitter = 0.05
        sasl.mechanism.inter.broker.protocol = SCRAM-SHA-512
        sasl.server.callback.handler.class = null
        security.inter.broker.protocol = SASL_SSL
        security.providers = null
        socket.receive.buffer.bytes = 102400
        socket.request.max.bytes = 104857600
        socket.send.buffer.bytes = 102400
        ssl.cipher.suites = []
        ssl.client.auth = required
        ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
        ssl.endpoint.identification.algorithm =
        ssl.key.password = [hidden]
        ssl.keymanager.algorithm = SunX509
        ssl.keystore.location =
        ssl.keystore.password = [hidden]
        ssl.keystore.type = JKS
        ssl.principal.mapping.rules = DEFAULT
        ssl.protocol = TLS
        ssl.provider = null
        ssl.secure.random.implementation = null
        ssl.trustmanager.algorithm = PKIX
        ssl.truststore.location =
        ssl.truststore.password = [hidden]
        ssl.truststore.type = JKS
        transaction.abort.timed.out.transaction.cleanup.interval.ms = 60000
        transaction.max.timeout.ms = 900000
        transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
        transaction.state.log.load.buffer.size = 5242880
        transaction.state.log.min.isr = 1
        transaction.state.log.num.partitions = 50
        transaction.state.log.replication.factor = 1
        transaction.state.log.segment.bytes = 104857600
        transactional.id.expiration.ms = 604800000
        unclean.leader.election.enable = false
        zookeeper.connect = localhost:2181
        zookeeper.connection.timeout.ms = 6000
        zookeeper.max.in.flight.requests = 10
        zookeeper.session.timeout.ms = 6000
        zookeeper.set.acl = false
        zookeeper.sync.time.ms = 2000
 (kafka.server.KafkaConfig)
[2020-03-30 14:30:39,969] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2020-03-30 14:30:39,969] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2020-03-30 14:30:39,975] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2020-03-30 14:30:40,129] INFO Loading logs. (kafka.log.LogManager)
[2020-03-30 14:30:40,174] INFO Logs loading complete in 44 ms. (kafka.log.LogManager)
[2020-03-30 14:30:40,243] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager)
[2020-03-30 14:30:40,260] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager)
log4j:ERROR Failed to rename [C:\ApacheKafka\kafka_2.13-2.4.1/logs/log-cleaner.log] to [C:\ApacheKafka\kafka_2.13-2.4.1/logs/log-cleaner.log.2020-03-30-13].
[2020-03-30 14:30:42,398] INFO Awaiting socket connections on localhost:9092. (kafka.network.Acceptor)
[2020-03-30 14:30:42,669] INFO [SocketServer brokerId=0] Created data-plane acceptor and processors for endpoint : EndPoint(localhost,9092,ListenerName(PLAINTEXT),PLAINTEXT) (kafka.network.SocketServer)
[2020-03-30 14:30:42,672] INFO Awaiting socket connections on localhost:9093. (kafka.network.Acceptor)
[2020-03-30 14:30:42,694] ERROR [KafkaServer id=0] Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
java.lang.IllegalArgumentException: Could not find a 'KafkaServer' or 'sasl_plaintext.KafkaServer' entry in the JAAS configuration. System property 'java.security.auth.login.config' is not set
        at org.apache.kafka.common.security.JaasContext.defaultContext(JaasContext.java:133)
        at org.apache.kafka.common.security.JaasContext.load(JaasContext.java:98)
        at org.apache.kafka.common.security.JaasContext.loadServerContext(JaasContext.java:70)
        at org.apache.kafka.common.network.ChannelBuilders.create(ChannelBuilders.java:121)
        at org.apache.kafka.common.network.ChannelBuilders.serverChannelBuilder(ChannelBuilders.java:85)
        at kafka.network.Processor.<init>(SocketServer.scala:753)
        at kafka.network.SocketServer.newProcessor(SocketServer.scala:394)
        at kafka.network.SocketServer.$anonfun$addDataPlaneProcessors$1(SocketServer.scala:279)
        at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:190)
        at kafka.network.SocketServer.addDataPlaneProcessors(SocketServer.scala:278)
        at kafka.network.SocketServer.$anonfun$createDataPlaneAcceptorsAndProcessors$1(SocketServer.scala:241)
        at kafka.network.SocketServer.$anonfun$createDataPlaneAcceptorsAndProcessors$1$adapted(SocketServer.scala:238)
        at scala.collection.IterableOnceOps.foreach(IterableOnce.scala:553)
        at scala.collection.IterableOnceOps.foreach$(IterableOnce.scala:551)
        at scala.collection.AbstractIterable.foreach(Iterable.scala:921)
        at kafka.network.SocketServer.createDataPlaneAcceptorsAndProcessors(SocketServer.scala:238)
        at kafka.network.SocketServer.startup(SocketServer.scala:121)
        at kafka.server.KafkaServer.startup(KafkaServer.scala:263)
        at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:44)
        at kafka.Kafka$.main(Kafka.scala:84)
        at kafka.Kafka.main(Kafka.scala)
[2020-03-30 14:30:42,713] INFO [KafkaServer id=0] shutting down (kafka.server.KafkaServer)
[2020-03-30 14:30:42,722] INFO [SocketServer brokerId=0] Stopping socket server request processors (kafka.network.SocketServer)
[2020-03-30 14:30:42,753] INFO [SocketServer brokerId=0] Stopped socket server request processors (kafka.network.SocketServer)
[2020-03-30 14:30:42,770] INFO Shutting down. (kafka.log.LogManager)
[2020-03-30 14:30:42,846] INFO Shutdown complete. (kafka.log.LogManager)
[2020-03-30 14:30:42,850] INFO [ZooKeeperClient Kafka server] Closing. (kafka.zookeeper.ZooKeeperClient)
[2020-03-30 14:30:42,967] INFO Session: 0x100006f57170000 closed (org.apache.zookeeper.ZooKeeper)
[2020-03-30 14:30:42,967] INFO EventThread shut down for session: 0x100006f57170000 (org.apache.zookeeper.ClientCnxn)
[2020-03-30 14:30:42,975] INFO [ZooKeeperClient Kafka server] Closed. (kafka.zookeeper.ZooKeeperClient)
[2020-03-30 14:30:42,977] INFO [ThrottledChannelReaper-Fetch]: Shutting down (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2020-03-30 14:30:43,975] INFO [ThrottledChannelReaper-Fetch]: Shutdown completed (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2020-03-30 14:30:43,975] INFO [ThrottledChannelReaper-Fetch]: Stopped (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2020-03-30 14:30:43,976] INFO [ThrottledChannelReaper-Produce]: Shutting down (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2020-03-30 14:30:44,976] INFO [ThrottledChannelReaper-Produce]: Stopped (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2020-03-30 14:30:44,976] INFO [ThrottledChannelReaper-Produce]: Shutdown completed (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2020-03-30 14:30:44,978] INFO [ThrottledChannelReaper-Request]: Shutting down (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2020-03-30 14:30:44,983] INFO [ThrottledChannelReaper-Request]: Stopped (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2020-03-30 14:30:44,983] INFO [ThrottledChannelReaper-Request]: Shutdown completed (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2020-03-30 14:30:44,988] INFO [SocketServer brokerId=0] Shutting down socket server (kafka.network.SocketServer)
[2020-03-30 14:30:45,198] INFO [SocketServer brokerId=0] Shutdown completed (kafka.network.SocketServer)
[2020-03-30 14:30:45,230] INFO [KafkaServer id=0] shut down completed (kafka.server.KafkaServer)
[2020-03-30 14:30:45,236] ERROR Exiting Kafka. (kafka.server.KafkaServerStartable)
[2020-03-30 14:30:45,249] INFO [KafkaServer id=0] shutting down (kafka.server.KafkaServer)

C:\ApacheKafka\kafka_2.13-2.4.1>

Edit 1: I found out that export is used on Unix, so I replaced export with set. Here is my new Kafka server start command in a batch file, which I double-click to run:

set KAFKA_OPTS="-Djava.security.auth.login.config=config/kafka_server_jaas.conf"
start bin/kafka-server-start.sh config/server.properties
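(One detail worth noting: on Windows `cmd`, `set KAFKA_OPTS="-D…"` stores the quotation marks as part of the value, unlike `export` in a Unix shell. A quote-free variant with an absolute Windows path, using the install directory visible in the prompt above as an assumed location, would look like:

```
set KAFKA_OPTS=-Djava.security.auth.login.config=C:\ApacheKafka\kafka_2.13-2.4.1\config\kafka_server_jaas.conf
start bin\windows\kafka-server-start.bat config\server.properties
```
)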

but it opens a git bash cmd window, and after a while it shows "C:\Apachekafka\Kafka_2.13-2.41/bind/kafka-run-class.sh: line 309: C:\Program: no such file or directory"

...