Kubernetes 1.18 stacked control plane and etcd nodes: cannot add a second etcd member
3 votes
/ April 6, 2020

I am trying to set up a three-node Kubernetes 1.18 cluster on CentOS 8 with containerd. Following the stacked control plane and etcd nodes guide (https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/), I was able to bring up the first control-plane node successfully with the Calico CNI.

When I join the second control-plane node, the step that adds the second etcd member fails: the etcd container on the first node goes down and takes the whole cluster with it. I am not sure why the second etcd member cannot be added. The firewall is disabled on all my hosts.
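
For reference, the state of the etcd cluster can be inspected from the first node before and after the join attempt (a sketch; the certificate paths are the kubeadm defaults, and etcdctl can also be run from inside the etcd container if it is not installed on the host):

    # List the current members and check endpoint health, using the
    # kubeadm-generated client certificates (default paths assumed).
    ETCDCTL_API=3 etcdctl \
      --endpoints=https://10.10.1.1:2379 \
      --cacert=/etc/kubernetes/pki/etcd/ca.crt \
      --cert=/etc/kubernetes/pki/etcd/server.crt \
      --key=/etc/kubernetes/pki/etcd/server.key \
      member list

    ETCDCTL_API=3 etcdctl \
      --endpoints=https://10.10.1.1:2379 \
      --cacert=/etc/kubernetes/pki/etcd/ca.crt \
      --cert=/etc/kubernetes/pki/etcd/server.crt \
      --key=/etc/kubernetes/pki/etcd/server.key \
      endpoint health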

Here is my configuration:

- kube-cp-1.com, 10.10.1.1
- kube-cp-2.com, 10.10.1.2
- kube-cp-3.com, 10.10.1.3
- lb.kube-cp.com, 10.10.1.4

kubeadm-config.yaml

---
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 10.10.1.1
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
  name: kube-cp-1.com
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
    extraArgs:
      listen-client-urls: "https://127.0.0.1:2379,https://10.10.1.1:2379"
      advertise-client-urls: "https://10.10.1.1:2379"
      listen-peer-urls: "https://10.10.1.1:2380"
      initial-advertise-peer-urls: "https://10.10.1.1:2380"
      initial-cluster: "kube-cp-1.com=https://10.10.1.1:2380"
    serverCertSANs:
    - kube-cp-1.com
    - kube-cp-2.com
    - kube-cp-3.com
    - localhost
    - 127.0.0.1
    - 10.10.1.1
    - 10.10.1.2
    - 10.10.1.3
    - 10.10.1.4
    - lb.kube-cp.com
    peerCertSANs:
    - kube-cp-1.com
    - kube-cp-2.com
    - kube-cp-3.com
    - localhost
    - 127.0.0.1
    - 10.10.1.1
    - 10.10.1.2
    - 10.10.1.3
    - 10.10.1.4
    - lb.kube-cp.com
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: stable
apiServerCertSANs:
- "lb.kube-cp.com"
controlPlaneEndpoint: "10.10.1.1:6443"
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.236.0.0/12
  podSubnet: 10.236.0.0/16
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"

Initializing the first control-plane node:

kubeadm init --upload-certs --config k8s-nprd.kubeadm-init.yaml

Joining the second control-plane node:

   kubeadm join 10.10.1.1:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:4823caf8f50f531ba1bd7ee6681411cfac923ead603a805f33a3a667fcfb62a4 \
    --control-plane --certificate-key a3005aca06076d93233becae71c600a34fa914aefa9e360c3f8b64092e1c43e5 --cri-socket /run/containerd/containerd.sock

Output from kubeadm join:

I0406 10:25:45.903249    6984 manifests.go:91] [control-plane] getting StaticPodSpecs
W0406 10:25:45.903292    6984 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
I0406 10:25:45.903473    6984 manifests.go:104] [control-plane] adding volume "kubeconfig" for component "kube-scheduler"
I0406 10:25:45.903941    6984 manifests.go:121] [control-plane] wrote static Pod manifest for component "kube-scheduler" to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[check-etcd] Checking that the etcd cluster is healthy
I0406 10:25:45.904727    6984 local.go:78] [etcd] Checking etcd cluster health
I0406 10:25:45.904745    6984 local.go:81] creating etcd client that connects to etcd pods
I0406 10:25:45.904756    6984 etcd.go:178] retrieving etcd endpoints from "kubeadm.kubernetes.io/etcd.advertise-client-urls" annotation in etcd Pods
I0406 10:25:45.912390    6984 etcd.go:102] etcd endpoints read from pods: https://10.10.1.1:2379
I0406 10:25:45.924703    6984 etcd.go:250] etcd endpoints read from etcd: https://10.10.1.1:2379
I0406 10:25:45.924732    6984 etcd.go:120] update etcd endpoints: https://10.10.1.1:2379
I0406 10:25:45.938129    6984 kubelet.go:111] [kubelet-start] writing bootstrap kubelet config file at /etc/kubernetes/bootstrap-kubelet.conf
I0406 10:25:45.940638    6984 kubelet.go:145] [kubelet-start] Checking for an existing Node in the cluster with name "kube-cp-2.com" and status "Ready"
I0406 10:25:45.942529    6984 kubelet.go:159] [kubelet-start] Stopping the kubelet
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
I0406 10:25:46.597353    6984 cert_rotation.go:137] Starting client certificate rotation controller
I0406 10:25:46.599553    6984 kubelet.go:194] [kubelet-start] preserving the crisocket information for the node
I0406 10:25:46.599572    6984 patchnode.go:30] [patchnode] Uploading the CRI Socket information "/run/containerd/containerd.sock" to the Node API object "kube-cp-2.com" as an annotation
I0406 10:26:01.608756    6984 local.go:130] creating etcd client that connects to etcd pods
I0406 10:26:01.608782    6984 etcd.go:178] retrieving etcd endpoints from "kubeadm.kubernetes.io/etcd.advertise-client-urls" annotation in etcd Pods
I0406 10:26:01.613158    6984 etcd.go:102] etcd endpoints read from pods: https://10.10.1.1:2379
I0406 10:26:01.621527    6984 etcd.go:250] etcd endpoints read from etcd: https://10.10.1.1:2379
I0406 10:26:01.621569    6984 etcd.go:120] update etcd endpoints: https://10.10.1.1:2379
I0406 10:26:01.621577    6984 local.go:139] Adding etcd member: https://10.10.1.2:2380
[etcd] Announced new etcd member joining to the existing etcd cluster
I0406 10:26:01.631714    6984 local.go:145] Updated etcd member list: [{kube-cp-2.com https://10.10.1.2:2380} {kube-cp-1.com https://10.10.1.1:2380}]
[etcd] Creating static Pod manifest for "etcd"
[etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
I0406 10:26:01.632669    6984 etcd.go:500] [etcd] attempting to see if all cluster endpoints ([https://10.10.1.1:2379 https://10.10.1.2:2379]) are available 1/8
[kubelet-check] Initial timeout of 40s passed.
I0406 10:26:41.650088    6984 etcd.go:480] Failed to get etcd status for https://10.10.1.2:2379: failed to dial endpoint https://10.10.1.2:2379 with maintenance client: context deadline exceeded
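
The timeout dialing https://10.10.1.2:2379 suggests the etcd static pod never actually came up on the second node. A few checks worth running there (a sketch; the container ID is a placeholder):

    # Was an etcd container created at all, and is it running or crash-looping?
    crictl ps -a | grep etcd
    crictl logs <etcd-container-id>

    # Is anything listening on the etcd client and peer ports?
    ss -tlnp | grep -E '2379|2380'

    # Did kubeadm write the etcd static pod manifest?
    cat /etc/kubernetes/manifests/etcd.yaml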

etcd log on the first node while the second node joins:

crictl logs -f b127c56d13d5f                                                                                                        
[WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
2020-04-06 14:05:10.587582 I | etcdmain: etcd Version: 3.4.3
2020-04-06 14:05:10.587641 I | etcdmain: Git SHA: 3cf2f69b5
2020-04-06 14:05:10.587646 I | etcdmain: Go Version: go1.12.12
2020-04-06 14:05:10.587648 I | etcdmain: Go OS/Arch: linux/amd64
2020-04-06 14:05:10.587652 I | etcdmain: setting maximum number of CPUs to 4, total number of available CPUs is 4
[WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
2020-04-06 14:05:10.587713 I | embed: peerTLS: cert = /etc/kubernetes/pki/etcd/peer.crt, key = /etc/kubernetes/pki/etcd/peer.key, trusted-ca = /etc/kubernetes/pki/etcd/ca.crt, client-cert-auth = true, crl-file =
2020-04-06 14:05:10.588321 I | embed: name = kube-cp-1.com
2020-04-06 14:05:10.588335 I | embed: data dir = /var/lib/etcd
2020-04-06 14:05:10.588339 I | embed: member dir = /var/lib/etcd/member
2020-04-06 14:05:10.588341 I | embed: heartbeat = 100ms
2020-04-06 14:05:10.588344 I | embed: election = 1000ms
2020-04-06 14:05:10.588347 I | embed: snapshot count = 10000
2020-04-06 14:05:10.588353 I | embed: advertise client URLs = https://10.10.1.1:2379
2020-04-06 14:05:10.595691 I | etcdserver: starting member 9fe7e24231cce76d in cluster bd17ed771bd8406b
raft2020/04/06 14:05:10 INFO: 9fe7e24231cce76d switched to configuration voters=()
raft2020/04/06 14:05:10 INFO: 9fe7e24231cce76d became follower at term 0
raft2020/04/06 14:05:10 INFO: newRaft 9fe7e24231cce76d [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
raft2020/04/06 14:05:10 INFO: 9fe7e24231cce76d became follower at term 1
raft2020/04/06 14:05:10 INFO: 9fe7e24231cce76d switched to configuration voters=(11522426945581934445)
2020-04-06 14:05:10.606487 W | auth: simple token is not cryptographically signed
2020-04-06 14:05:10.613683 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
2020-04-06 14:05:10.614928 I | etcdserver: 9fe7e24231cce76d as single-node; fast-forwarding 9 ticks (election ticks 10)
raft2020/04/06 14:05:10 INFO: 9fe7e24231cce76d switched to configuration voters=(11522426945581934445)
2020-04-06 14:05:10.615341 I | etcdserver/membership: added member 9fe7e24231cce76d [https://10.10.1.1:2380] to cluster bd17ed771bd8406b
2020-04-06 14:05:10.616288 I | embed: ClientTLS: cert = /etc/kubernetes/pki/etcd/server.crt, key = /etc/kubernetes/pki/etcd/server.key, trusted-ca = /etc/kubernetes/pki/etcd/ca.crt, client-cert-auth = true, crl-file =
2020-04-06 14:05:10.616414 I | embed: listening for metrics on http://127.0.0.1:2381
2020-04-06 14:05:10.616544 I | embed: listening for peers on 10.10.1.1:2380
raft2020/04/06 14:05:10 INFO: 9fe7e24231cce76d is starting a new election at term 1
raft2020/04/06 14:05:10 INFO: 9fe7e24231cce76d became candidate at term 2
raft2020/04/06 14:05:10 INFO: 9fe7e24231cce76d received MsgVoteResp from 9fe7e24231cce76d at term 2
raft2020/04/06 14:05:10 INFO: 9fe7e24231cce76d became leader at term 2
raft2020/04/06 14:05:10 INFO: raft.node: 9fe7e24231cce76d elected leader 9fe7e24231cce76d at term 2
2020-04-06 14:05:10.798941 I | etcdserver: setting up the initial cluster version to 3.4
2020-04-06 14:05:10.799837 N | etcdserver/membership: set the initial cluster version to 3.4
2020-04-06 14:05:10.799882 I | etcdserver/api: enabled capabilities for version 3.4
2020-04-06 14:05:10.799904 I | etcdserver: published {Name:kube-cp-1.com ClientURLs:[https://10.10.1.1:2379]} to cluster bd17ed771bd8406b
2020-04-06 14:05:10.800014 I | embed: ready to serve client requests


raft2020/04/06 14:26:01 INFO: 9fe7e24231cce76d switched to configuration voters=(11306080513102511778 11522426945581934445)
2020-04-06 14:26:01.629134 I | etcdserver/membership: added member 9ce744531170fea2 [https://10.10.1.2:2380] to cluster bd17ed771bd8406b
2020-04-06 14:26:01.629159 I | rafthttp: starting peer 9ce744531170fea2...
2020-04-06 14:26:01.629184 I | rafthttp: started HTTP pipelining with peer 9ce744531170fea2
2020-04-06 14:26:01.630090 I | rafthttp: started streaming with peer 9ce744531170fea2 (writer)
2020-04-06 14:26:01.630325 I | rafthttp: started streaming with peer 9ce744531170fea2 (writer)
2020-04-06 14:26:01.631552 I | rafthttp: started peer 9ce744531170fea2
2020-04-06 14:26:01.631581 I | rafthttp: added peer 9ce744531170fea2
2020-04-06 14:26:01.631594 I | rafthttp: started streaming with peer 9ce744531170fea2 (stream MsgApp v2 reader)
2020-04-06 14:26:01.631826 I | rafthttp: started streaming with peer 9ce744531170fea2 (stream Message reader)
2020-04-06 14:26:02.849514 W | etcdserver: failed to reach the peerURL(https://10.10.1.2:2380) of member 9ce744531170fea2 (Get https://10.10.1.2:2380/version: dial tcp 10.10.1.2:2380: connect: connection refused)
2020-04-06 14:26:02.849541 W | etcdserver: cannot get the version of member 9ce744531170fea2 (Get https://10.10.1.2:2380/version: dial tcp 10.10.1.2:2380: connect: connection refused)
raft2020/04/06 14:26:02 WARN: 9fe7e24231cce76d stepped down to follower since quorum is not active
raft2020/04/06 14:26:02 INFO: 9fe7e24231cce76d became follower at term 2
raft2020/04/06 14:26:02 INFO: raft.node: 9fe7e24231cce76d lost leader 9fe7e24231cce76d at term 2
raft2020/04/06 14:26:04 INFO: 9fe7e24231cce76d is starting a new election at term 2
raft2020/04/06 14:26:04 INFO: 9fe7e24231cce76d became candidate at term 3
raft2020/04/06 14:26:04 INFO: 9fe7e24231cce76d received MsgVoteResp from 9fe7e24231cce76d at term 3
raft2020/04/06 14:26:04 INFO: 9fe7e24231cce76d [logterm: 2, index: 3741] sent MsgVote request to 9ce744531170fea2 at term 3
raft2020/04/06 14:26:06 INFO: 9fe7e24231cce76d is starting a new election at term 3
raft2020/04/06 14:26:06 INFO: 9fe7e24231cce76d became candidate at term 4
raft2020/04/06 14:26:06 INFO: 9fe7e24231cce76d received MsgVoteResp from 9fe7e24231cce76d at term 4
raft2020/04/06 14:26:06 INFO: 9fe7e24231cce76d [logterm: 2, index: 3741] sent MsgVote request to 9ce744531170fea2 at term 4
2020-04-06 14:26:06.631923 W | rafthttp: health check for peer 9ce744531170fea2 could not connect: dial tcp 10.10.1.2:2380: connect: connection refused
2020-04-06 14:26:06.632008 W | rafthttp: health check for peer 9ce744531170fea2 could not connect: dial tcp 10.10.1.2:2380: connect: connection refused
raft2020/04/06 14:26:07 INFO: 9fe7e24231cce76d is starting a new election at term 4
raft2020/04/06 14:26:07 INFO: 9fe7e24231cce76d became candidate at term 5
raft2020/04/06 14:26:07 INFO: 9fe7e24231cce76d received MsgVoteResp from 9fe7e24231cce76d at term 5
raft2020/04/06 14:26:07 INFO: 9fe7e24231cce76d [logterm: 2, index: 3741] sent MsgVote request to 9ce744531170fea2 at term 5
raft2020/04/06 14:26:08 INFO: 9fe7e24231cce76d is starting a new election at term 5
raft2020/04/06 14:26:08 INFO: 9fe7e24231cce76d became candidate at term 6


2020-04-06 14:27:11.684519 W | etcdserver: read-only range request "key:\"/registry/events/kube-system/kube-scheduler-kube-cp-2.com.1603412b3ca5e3ea\" " with result "error:context canceled" took too long (7.013696732s) to execute
WARNING: 2020/04/06 14:27:11 grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
WARNING: 2020/04/06 14:27:11 grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
2020-04-06 14:27:11.684604 W | etcdserver: read-only range request "key:\"/registry/leases/kube-node-lease/kube-cp-2.com\" " with result "error:context canceled" took too long (6.216330254s) to execute
WARNING: 2020/04/06 14:27:11 grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
raft2020/04/06 14:27:12 INFO: 9fe7e24231cce76d is starting a new election at term 48
raft2020/04/06 14:27:12 INFO: 9fe7e24231cce76d became candidate at term 49
raft2020/04/06 14:27:12 INFO: 9fe7e24231cce76d received MsgVoteResp from 9fe7e24231cce76d at term 49
raft2020/04/06 14:27:12 INFO: 9fe7e24231cce76d [logterm: 2, index: 3741] sent MsgVote request to 9ce744531170fea2 at term 49
2020-04-06 14:27:12.632989 N | pkg/osutil: received terminated signal, shutting down...
2020-04-06 14:27:12.633468 W | etcdserver: read-only range request "key:\"/registry/namespaces/default\" " with result "error:context canceled" took too long (7.957912936s) to execute
WARNING: 2020/04/06 14:27:12 grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
2020-04-06 14:27:12.633992 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "error:context canceled" took too long (1.649430193s) to execute
WARNING: 2020/04/06 14:27:12 grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
WARNING: 2020/04/06 14:27:12 grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
2020-04-06 14:27:12.637645 W | etcdserver: read-only range request "key:\"/registry/crd.projectcalico.org/ippools\" range_end:\"/registry/crd.projectcalico.org/ippoolt\" count_only:true " with result "error:context canceled" took too long (6.174043444s) to execute
WARNING: 2020/04/06 14:27:12 grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
2020-04-06 14:27:12.637888 W | etcdserver: read-only range request "key:\"/registry/crd.projectcalico.org/ipamconfigs\" range_end:\"/registry/crd.projectcalico.org/ipamconfigt\" count_only:true " with result "error:context canceled" took too long (7.539908265s) to execute
WARNING: 2020/04/06 14:27:12 grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
2020-04-06 14:27:12.638007 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/kube-scheduler\" " with result "error:context canceled" took too long (1.967145665s) to execute
WARNING: 2020/04/06 14:27:12 grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
2020-04-06 14:27:12.638271 W | etcdserver: read-only range request "key:\"/registry/pods\" range_end:\"/registry/podt\" count_only:true " with result "error:context canceled" took too long (1.809718334s) to execute
WARNING: 2020/04/06 14:27:12 grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
2020-04-06 14:27:12.638423 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/kube-controller-manager\" " with result "error:context canceled" took too long (1.963396181s) to execute
2020-04-06 14:27:12.638433 W | etcdserver: read-only range request "key:\"/registry/horizontalpodautoscalers\" range_end:\"/registry/horizontalpodautoscalert\" count_only:true " with result "error:context canceled" took too long (6.779544473s) to execute
WARNING: 2020/04/06 14:27:12 grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
2020-04-06 14:27:12.638462 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-controller-manager-kube-cp-1.com\" " with result "error:context canceled" took too long (970.539525ms) to execute
WARNING: 2020/04/06 14:27:12 grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
WARNING: 2020/04/06 14:27:12 grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
2020-04-06 14:27:12.639866 W | etcdserver: read-only range request "key:\"/registry/crd.projectcalico.org/clusterinformations/default\" " with result "error:context canceled" took too long (2.965996315s) to execute
WARNING: 2020/04/06 14:27:12 grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
2020-04-06 14:27:12.640009 W | etcdserver: read-only range request "key:\"/registry/leases/kube-node-lease/kube-cp-1.com\" " with result "error:context canceled" took too long (566.004502ms) to execute
WARNING: 2020/04/06 14:27:12 grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
WARNING: 2020/04/06 14:27:12 grpc: addrConn.createTransport failed to connect to {10.10.1.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 10.10.1.1:2379: connect: connection refused". Reconnecting...
2020-04-06 14:27:12.647096 I | etcdserver: skipped leadership transfer for stopping non-leader member
2020-04-06 14:27:12.647188 I | rafthttp: stopping peer 9ce744531170fea2...
2020-04-06 14:27:12.647201 I | rafthttp: stopped streaming with peer 9ce744531170fea2 (writer)
2020-04-06 14:27:12.647209 I | rafthttp: stopped streaming with peer 9ce744531170fea2 (writer)
2020-04-06 14:27:12.647228 I | rafthttp: stopped HTTP pipelining with peer 9ce744531170fea2
2020-04-06 14:27:12.647238 I | rafthttp: stopped streaming with peer 9ce744531170fea2 (stream MsgApp v2 reader)
2020-04-06 14:27:12.647248 I | rafthttp: stopped streaming with peer 9ce744531170fea2 (stream Message reader)
2020-04-06 14:27:12.647260 I | rafthttp: stopped peer 9ce744531170fea2
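
The log shows the first member repeatedly trying to GET https://10.10.1.2:2380/version from the new peer and getting "connection refused". That probe can be reproduced by hand from the first node to verify basic peer reachability (a sketch; the peer certificate paths are the kubeadm defaults):

    # Reproduce etcd's peer version probe manually; "connection refused"
    # means nothing is listening on the peer port of the second node.
    curl --cacert /etc/kubernetes/pki/etcd/ca.crt \
         --cert /etc/kubernetes/pki/etcd/peer.crt \
         --key /etc/kubernetes/pki/etcd/peer.key \
         https://10.10.1.2:2380/version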

Any help with adding the second etcd member would be appreciated.

Thanks, SR

1 Answer

3 votes
/ April 12, 2020

I ran into a similar problem with v1.18.1, and the root cause turned out to be a misconfigured MTU on the network interface of my control-plane hosts. It made the etcdserver communication between the first and subsequent control-plane nodes time out, because some network packets were being silently dropped.

Fixing the MTU let me complete the control-plane setup as documented.

Details of my setup:

In my case I was using KVM virtual machines (launched through LXD) as the control-plane hosts. Because of a DHCP misconfiguration, the hosts did not receive the proper MTU, so it stayed at the default of 1500 ... which broke the overlay network between the hosts. Lowering the MTU to 1450 solved the problem.
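
To check for this, compare the MTU configured on the hosts with the packet size that actually makes it across (a sketch; eth0 is a placeholder for your interface name):

    # Show the MTU currently set on the interface.
    ip link show eth0

    # Send a non-fragmentable full-size packet to the other host
    # (1472 bytes of payload + 28 bytes of IP/ICMP headers = 1500).
    # If this fails while a smaller size succeeds, packets are being
    # silently dropped on the path.
    ping -M do -s 1472 -c 3 10.10.1.2

    # Lower the MTU temporarily; make it persistent through your normal
    # network configuration (DHCP, NetworkManager, etc.) afterwards.
    ip link set dev eth0 mtu 1450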

If you are interested in more detail on why a misconfigured MTU manifests this way, I found this Project Calico issue discussion helpful: https://github.com/projectcalico/calico/issues/1709
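
Also note that Calico carries its own MTU setting for pod interfaces. In the standard calico.yaml manifest of that era it is exposed as the veth_mtu key in the calico-config ConfigMap (the key name can vary between Calico versions), and it should sit below the host MTU to leave room for encapsulation overhead:

    # Inspect Calico's current interface MTU and adjust it if needed,
    # e.g. veth_mtu: "1400" for a 1450-byte host MTU (a sketch).
    kubectl -n kube-system get configmap calico-config -o yaml
    kubectl -n kube-system edit configmap calico-config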

...