I am trying to set up a basic Kafka deployment on K8s. However, every time I try to connect from a data-producing application to the Kafka service in K8s, I get this exception in the Kafka logs:
2019-02-04 12:11:28 ERROR Sender:235 kafka-producer-network-thread | avro_data - [Producer clientId=avro_data] Uncaught error in kafka producer I/O thread:
java.lang.IllegalStateException: No entry found for connection 1001
    at org.apache.kafka.clients.ClusterConnectionStates.nodeState(ClusterConnectionStates.java:330)
    at org.apache.kafka.clients.ClusterConnectionStates.disconnected(ClusterConnectionStates.java:134)
    at org.apache.kafka.clients.NetworkClient.initiateConnect(NetworkClient.java:921)
    at org.apache.kafka.clients.NetworkClient.access$700(NetworkClient.java:67)
    at org.apache.kafka.clients.NetworkClient$DefaultMetadataUpdater.maybeUpdate(NetworkClient.java:1086)
    at org.apache.kafka.clients.NetworkClient$DefaultMetadataUpdater.maybeUpdate(NetworkClient.java:971)
    at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:533)
    at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:309)
    at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:233)
    at java.lang.Thread.run(Thread.java:748)
Here are the producer logs:
[Producer clientId=avro_data] Initialize connection to node 192.168.99.100:32092 (id: -1 rack: null) for sending metadata request
Updated cluster metadata version 2 to Cluster(id = MpP-9JVnQ4a78VTtCzTm3Q, nodes = [kafka-broker-0.kafka-headless.default.svc.cluster.local:9092 (id: 1001 rack: null)], partitions = [Partition(topic = avro_topic, partition = 0, leader = 1001, replicas = [1001], isr = [1001], offlineReplicas = [])], controller = kafka-broker-0.kafka-headless.default.svc.cluster.local:9092 (id: 1001 rack: null))
[Producer clientId=avro_data] Closing the Kafka producer with timeoutMillis = 9223372036854775807 ms.
What could be wrong with the Kafka setup or with the way the application connects?
I am connecting to the Kafka NodePort service:
props.put("bootstrap.servers", "192.168.99.100:32092")
props.put("client.id", "avro_data")
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
props.put("value.serializer", "io.confluent.kafka.serializers.KafkaAvroSerializer")
props.put("schema.registry.url", "http://192.168.99.100:32081")
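For completeness, here are the same properties assembled as a minimal, dependency-free Java sketch. The actual `KafkaProducer` construction is left out, since it needs the kafka-clients and kafka-avro-serializer jars on the classpath; the class name `ProducerConfigSketch` is just for illustration:

```java
import java.util.Properties;

public class ProducerConfigSketch {

    // Builds the same configuration shown above; the KafkaProducer
    // itself is not created here so the sketch stays dependency-free.
    static Properties buildProps() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "192.168.99.100:32092"); // NodePort of the kafka-np service
        props.put("client.id", "avro_data");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "io.confluent.kafka.serializers.KafkaAvroSerializer");
        props.put("schema.registry.url", "http://192.168.99.100:32081"); // Schema Registry NodePort
        return props;
    }

    public static void main(String[] args) {
        System.out.println(buildProps().getProperty("bootstrap.servers"));
    }
}
```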
The Kafka setup looks like this:
apiVersion: v1
kind: Service
metadata:
  name: kafka-headless
spec:
  ports:
    - port: 9092
  clusterIP: None
  selector:
    app: kafka
---
apiVersion: v1
kind: Service
metadata:
  name: kafka-np
spec:
  ports:
    - port: 32092
      protocol: TCP
      targetPort: 9092
      nodePort: 32092
  selector:
    app: kafka
  type: NodePort
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  labels:
    app: kafka
  name: kafka-broker
spec:
  serviceName: kafka-headless
  selector:
    matchLabels:
      app: kafka
  replicas: 1
  template:
    metadata:
      labels:
        app: kafka
    spec:
      containers:
        - name: kafka
          image: confluentinc/cp-kafka:5.0.1
          env:
            - name: KAFKA_ZOOKEEPER_CONNECT
              value: zookeeper-headless:2181
            - name: MINIKUBE_IP
              value: 192.168.99.100
            - name: KAFKA_ADVERTISED_LISTENERS
              value: PLAINTEXT://kafka-broker-0.kafka-headless.default.svc.cluster.local:9092,EXTERNAL://192.168.99.100:32092
            - name: KAFKA_LISTENER_SECURITY_PROTOCOL_MAP
              value: PLAINTEXT:PLAINTEXT,EXTERNAL:PLAINTEXT
          ports:
            - containerPort: 9092
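As a diagnostic step (assuming `kcat`/`kafkacat` is installed on the host; this is not from the original setup), one can ask the broker through the NodePort which endpoints it actually advertises, since the producer's metadata update above shows it being handed the internal DNS name:

```shell
# Request cluster metadata via the NodePort. If the broker listed here is
# kafka-broker-0.kafka-headless.default.svc.cluster.local:9092, an external
# client cannot resolve that address, which would match the failure above.
kcat -b 192.168.99.100:32092 -L
```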