I have a Kubernetes cluster running on several on-premise (bare-metal / physical) machines. I want to deploy Kafka on the cluster, but I can't figure out how to make Strimzi work with my setup.
I tried to follow the quickstart guide: https://strimzi.io/docs/quickstart/master/
My Zookeeper pods end up stuck in Pending at step 2.4. Creating a cluster:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling <unknown> default-scheduler pod has unbound immediate PersistentVolumeClaims
Warning FailedScheduling <unknown> default-scheduler pod has unbound immediate PersistentVolumeClaims
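For reference, this is roughly the Kafka resource I applied at that step (reproduced from memory based on the quickstart, so the API version, listener block and storage size are approximate; the cluster name, single replicas and deleteClaim: false do match what the operator created):

apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    replicas: 1
    listeners:
      plain: {}
      tls: {}
    storage:
      type: persistent-claim     # the operator creates a PVC per broker
      size: 100Gi
      deleteClaim: false
  zookeeper:
    replicas: 1
    storage:
      type: persistent-claim     # same for Zookeeper, hence data-my-cluster-zookeeper-0
      size: 100Gi
      deleteClaim: false
  entityOperator:
    topicOperator: {}
    userOperator: {}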
I usually use hostPath for my volumes, and I don't understand what is going wrong here...
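For illustration, the kind of hostPath PersistentVolume I normally create looks roughly like this (the name, size and path are placeholders; the storageClassName is what such a PV would need in order to match the PVC shown further down):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: zookeeper-data-pv            # placeholder name
spec:
  capacity:
    storage: 100Gi                   # placeholder size; must cover what the PVC requests
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage    # has to match the PVC's StorageClass
  hostPath:
    path: /data/zookeeper            # placeholder path on the node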
EDIT: I tried creating a StorageClass using the commands from Arghya Sadhu's answer, but the problem is still there.
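As far as I understand, that gives a class roughly like the following, i.e. a no-provisioner class with delayed binding (which would explain the "waiting for first consumer" event in the PVC description below):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner   # no dynamic provisioning; PVs have to be created by hand
volumeBindingMode: WaitForFirstConsumer     # binding is delayed until a pod using the PVC is scheduled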
The description of my PVC:
kubectl describe -n my-kafka-project persistentvolumeclaim/data-my-cluster-zookeeper-0
Name: data-my-cluster-zookeeper-0
Namespace: my-kafka-project
StorageClass: local-storage
Status: Pending
Volume:
Labels: app.kubernetes.io/instance=my-cluster
app.kubernetes.io/managed-by=strimzi-cluster-operator
app.kubernetes.io/name=strimzi
strimzi.io/cluster=my-cluster
strimzi.io/kind=Kafka
strimzi.io/name=my-cluster-zookeeper
Annotations: strimzi.io/delete-claim: false
Finalizers: [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode: Filesystem
Mounted By: my-cluster-zookeeper-0
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal WaitForFirstConsumer 72s (x66 over 16m) persistentvolume-controller waiting for first consumer to be created before binding
And my pod:
kubectl describe -n my-kafka-project pod/my-cluster-zookeeper-0
Name: my-cluster-zookeeper-0
Namespace: my-kafka-project
Priority: 0
Node: <none>
Labels: app.kubernetes.io/instance=my-cluster
app.kubernetes.io/managed-by=strimzi-cluster-operator
app.kubernetes.io/name=strimzi
controller-revision-hash=my-cluster-zookeeper-7f698cf9b5
statefulset.kubernetes.io/pod-name=my-cluster-zookeeper-0
strimzi.io/cluster=my-cluster
strimzi.io/kind=Kafka
strimzi.io/name=my-cluster-zookeeper
Annotations: strimzi.io/cluster-ca-cert-generation: 0
strimzi.io/generation: 0
Status: Pending
IP:
IPs: <none>
Controlled By: StatefulSet/my-cluster-zookeeper
Containers:
zookeeper:
Image: strimzi/kafka:0.15.0-kafka-2.3.1
Port: <none>
Host Port: <none>
Command:
/opt/kafka/zookeeper_run.sh
Liveness: exec [/opt/kafka/zookeeper_healthcheck.sh] delay=15s timeout=5s period=10s #success=1 #failure=3
Readiness: exec [/opt/kafka/zookeeper_healthcheck.sh] delay=15s timeout=5s period=10s #success=1 #failure=3
Environment:
ZOOKEEPER_NODE_COUNT: 1
ZOOKEEPER_METRICS_ENABLED: false
STRIMZI_KAFKA_GC_LOG_ENABLED: false
KAFKA_HEAP_OPTS: -Xms128M
ZOOKEEPER_CONFIGURATION: autopurge.purgeInterval=1
tickTime=2000
initLimit=5
syncLimit=2
Mounts:
/opt/kafka/custom-config/ from zookeeper-metrics-and-logging (rw)
/var/lib/zookeeper from data (rw)
/var/run/secrets/kubernetes.io/serviceaccount from my-cluster-zookeeper-token-hgk2b (ro)
tls-sidecar:
Image: strimzi/kafka:0.15.0-kafka-2.3.1
Ports: 2888/TCP, 3888/TCP, 2181/TCP
Host Ports: 0/TCP, 0/TCP, 0/TCP
Command:
/opt/stunnel/zookeeper_stunnel_run.sh
Liveness: exec [/opt/stunnel/stunnel_healthcheck.sh 2181] delay=15s timeout=5s period=10s #success=1 #failure=3
Readiness: exec [/opt/stunnel/stunnel_healthcheck.sh 2181] delay=15s timeout=5s period=10s #success=1 #failure=3
Environment:
ZOOKEEPER_NODE_COUNT: 1
TLS_SIDECAR_LOG_LEVEL: notice
Mounts:
/etc/tls-sidecar/cluster-ca-certs/ from cluster-ca-certs (rw)
/etc/tls-sidecar/zookeeper-nodes/ from zookeeper-nodes (rw)
/var/run/secrets/kubernetes.io/serviceaccount from my-cluster-zookeeper-token-hgk2b (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
data:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: data-my-cluster-zookeeper-0
ReadOnly: false
zookeeper-metrics-and-logging:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: my-cluster-zookeeper-config
Optional: false
zookeeper-nodes:
Type: Secret (a volume populated by a Secret)
SecretName: my-cluster-zookeeper-nodes
Optional: false
cluster-ca-certs:
Type: Secret (a volume populated by a Secret)
SecretName: my-cluster-cluster-ca-cert
Optional: false
my-cluster-zookeeper-token-hgk2b:
Type: Secret (a volume populated by a Secret)
SecretName: my-cluster-zookeeper-token-hgk2b
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling <unknown> default-scheduler 0/1 nodes are available: 1 node(s) didn't find available persistent volumes to bind.
Warning FailedScheduling <unknown> default-scheduler 0/1 nodes are available: 1 node(s) didn't find available persistent volumes to bind.