Kubernetes weave-net shows status CrashLoopBackOff
05 August 2020

Environment (on-premises):

K8s Master: standalone physical server, CentOS Linux release 8.2.2004 (Core)
K8s Client Version: v1.18.6
K8s Server Version: v1.18.6
K8s Worker node: standalone physical server, CentOS Linux release 8.2.2004 (Core)

I installed Kubernetes following the instructions at the link below.

https://www.tecmint.com/install-a-kubernetes-cluster-on-centos-8/

But the weave-net pods in the kube-system namespace show CrashLoopBackOff:

[root@K8S-Master ~]# kubectl get pods -o wide --all-namespaces

NAMESPACE     NAME                                  READY   STATUS             RESTARTS   AGE     IP                NODE           NOMINATED NODE   READINESS GATES
default       test-pod                              1/1     Running            0          156m    10.88.0.83        k8s-worker-1   <none>           <none>
kube-system   coredns-66bff467f8-99dww              1/1     Running            1          2d23h   10.88.0.5         K8S-Master     <none>           <none>
kube-system   coredns-66bff467f8-ghk5g              1/1     Running            2          2d23h   10.88.0.6         K8S-Master     <none>           <none>
kube-system   etcd-K8S-Master                       1/1     Running            1          2d23h   100.101.102.103   K8S-Master     <none>           <none>
kube-system   kube-apiserver-K8S-Master             1/1     Running            1          2d23h   100.101.102.103   K8S-Master     <none>           <none>
kube-system   kube-controller-manager-K8S-Master    1/1     Running            1          2d23h   100.101.102.103   K8S-Master     <none>           <none>
kube-system   kube-proxy-btgqb                      1/1     Running            1          2d23h   100.101.102.103   K8S-Master     <none>           <none>
kube-system   kube-proxy-mqg85                      1/1     Running            1          2d23h   100.101.102.104   k8s-worker-1   <none>          <none>
kube-system   kube-scheduler-K8S-Master             1/1     Running            2          2d23h   100.101.102.103   K8S-Master     <none>           <none>
kube-system   weave-net-2nxpk                       1/2     CrashLoopBackOff   848        2d23h   100.101.102.104   k8s-worker-1   <none>           <none>
kube-system   weave-net-n8wv9                       1/2     CrashLoopBackOff   846        2d23h   100.101.102.103   K8S-Master     <none>           <none>


[root@K8S-Master ~]# kubectl logs weave-net-2nxpk -c weave --namespace=kube-system
ipset v7.2: Set cannot be destroyed: it is in use by a kernel component

[root@K8S-Master ~]# kubectl logs weave-net-n8wv9 -c weave --namespace=kube-system
ipset v7.2: Set cannot be destroyed: it is in use by a kernel component
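The log line above appears to come from Weave's startup cleanup, which tries to destroy its ipsets; the destroy fails because some kernel iptables rule still references a set. A minimal diagnostic sketch, to be run as root on the affected node (Weave's set names are prefixed with `weave-`):

```shell
# List the names of all ipsets currently defined on the node
ipset -n list

# Check whether any iptables rule still references a Weave set
# (a referencing rule is what keeps a set "in use by a kernel component")
iptables-save | grep -i weave
```

If `iptables-save` still shows rules matching a `weave-` set, those rules have to be removed (or the node rebooted) before the set can be destroyed.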


[root@K8S-Master ~]# kubectl describe pod/weave-net-n8wv9 -n kube-system
Name:                 weave-net-n8wv9
Namespace:            kube-system
Priority:             2000001000
Priority Class Name:  system-node-critical
Node:                 K8S-Master/100.101.102.103
Start Time:           Mon, 03 Aug 2020 10:56:12 +0530
Labels:               controller-revision-hash=6768fc7ccf
                      name=weave-net
                      pod-template-generation=1
Annotations:          <none>
Status:               Running
IP:                   100.101.102.103
IPs:
  IP:           100.101.102.103
Controlled By:  DaemonSet/weave-net
Containers:
  weave:
    Container ID:  docker://efeb277639ac8262c8864f2d598606e19caadbb65cdda4645d67589eab13d109
    Image:         docker.io/weaveworks/weave-kube:2.6.5
    Image ID:      docker-pullable://weaveworks/weave-kube@sha256:703a045a58377cb04bc85d0f5a7c93356d5490282accd7e5b5d7a99fe2ef09e2
    Port:          <none>
    Host Port:     <none>
    Command:
      /home/weave/launch.sh
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Thu, 06 Aug 2020 20:18:21 +0530
      Finished:     Thu, 06 Aug 2020 20:18:21 +0530
    Ready:          False
    Restart Count:  971
    Requests:
      cpu:      10m
    Readiness:  http-get http://127.0.0.1:6784/status delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:
      HOSTNAME:   (v1:spec.nodeName)
    Mounts:
      /host/etc from cni-conf (rw)
      /host/home from cni-bin2 (rw)
      /host/opt from cni-bin (rw)
      /host/var/lib/dbus from dbus (rw)
      /lib/modules from lib-modules (rw)
      /run/xtables.lock from xtables-lock (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from weave-net-token-p9ltl (ro)
      /weavedb from weavedb (rw)
  weave-npc:
    Container ID:   docker://33f9fed68c452187490e261830a283c1ec9361aba01b86b60598dcc871ca1b11
    Image:          docker.io/weaveworks/weave-npc:2.6.5
    Image ID:       docker-pullable://weaveworks/weave-npc@sha256:0f6166e000faa500ccc0df53caae17edd3110590b7b159007a5ea727cdfb1cef
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Thu, 06 Aug 2020 17:13:35 +0530
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Thu, 06 Aug 2020 11:38:19 +0530
      Finished:     Thu, 06 Aug 2020 17:08:40 +0530
    Ready:          True
    Restart Count:  4
    Requests:
      cpu:  10m
    Environment:
      HOSTNAME:   (v1:spec.nodeName)
    Mounts:
      /run/xtables.lock from xtables-lock (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from weave-net-token-p9ltl (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  weavedb:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/weave
    HostPathType:
  cni-bin:
    Type:          HostPath (bare host directory volume)
    Path:          /opt
    HostPathType:
  cni-bin2:
    Type:          HostPath (bare host directory volume)
    Path:          /home
    HostPathType:
  cni-conf:
    Type:          HostPath (bare host directory volume)
    Path:          /etc
    HostPathType:
  dbus:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/dbus
    HostPathType:
  lib-modules:
    Type:          HostPath (bare host directory volume)
    Path:          /lib/modules
    HostPathType:
  xtables-lock:
    Type:          HostPath (bare host directory volume)
    Path:          /run/xtables.lock
    HostPathType:  FileOrCreate
  weave-net-token-p9ltl:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  weave-net-token-p9ltl
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     :NoSchedule
                 :NoExecute
                 node.kubernetes.io/disk-pressure:NoSchedule
                 node.kubernetes.io/memory-pressure:NoSchedule
                 node.kubernetes.io/network-unavailable:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute
                 node.kubernetes.io/pid-pressure:NoSchedule
                 node.kubernetes.io/unreachable:NoExecute
                 node.kubernetes.io/unschedulable:NoSchedule
Events:
  Type     Reason   Age                    From                  Message
  ----     ------   ----                   ----                  -------
  Warning  BackOff  2m3s (x863 over 3h7m)  kubelet, K8S-Master  Back-off restarting failed container

[root@K8S-Master ~]# journalctl -u kubelet
Aug 06 20:36:56 K8S-Master kubelet[2647]: I0806 20:36:56.549156    2647 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: f873d>
Aug 06 20:36:56 K8S-Master kubelet[2647]: E0806 20:36:56.549515    2647 pod_workers.go:191] Error syncing pod e7511fe6-c60f-4833-bfb6-d59d6e8720e3 ("wea>

Was something misconfigured during the K8s installation? Can anyone suggest a workaround for this issue?
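For reference, a workaround that is often suggested for this symptom, sketched here under the assumption that stale Weave network state on the host is what keeps the ipsets busy (the `name=weave-net` label is taken from the pod description above):

```shell
# On each affected node, as root: remove the weave bridge and any
# ipsets that are no longer referenced by iptables rules
ip link delete weave 2>/dev/null || true
ipset destroy 2>/dev/null || true   # fails harmlessly on sets still in use

# Then let the DaemonSet recreate the pods
kubectl -n kube-system delete pod -l name=weave-net
```

It may also be worth checking the iptables backend: CentOS 8 ships the nftables-based iptables by default, which older Weave Net releases are known to have trouble with.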
