Minikube pod is stuck in Pending state and cannot be scheduled - PullRequest
0 votes
/ Mar 18, 2020

I am very new to kubernetes, so sorry if this is a dumb question.

I am using minikube with kvm2 (5.0.0). Here is the minikube and kubectl version info.

minikube status output:

host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

kubectl cluster-info output:

Kubernetes master is running at https://127.0.0.1:32768
KubeDNS is running at https://127.0.0.1:32768/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

I am trying to deploy a pod using kubectl apply -f client-pod.yaml. Here is my client-pod.yaml configuration:

apiVersion: v1
kind: Pod
metadata:
  name: client-pod
  labels:
    component: web
spec:
  containers:
    - name: client
      image: stephengrider/multi-client
      ports:
        - containerPort: 3000

kubectl get pods output:

NAME         READY   STATUS    RESTARTS   AGE
client-pod   0/1     Pending   0          4m15s

kubectl describe pods output:

Name:         client-pod
Namespace:    default
Priority:     0
Node:         <none>
Labels:       component=web
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"labels":{"component":"web"},"name":"client-pod","namespace":"default"},"spec...
Status:       Pending
IP:           
IPs:          <none>
Containers:
  client:
    Image:        stephengrider/multi-client
    Port:         3000/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-z45bq (ro)
Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  default-token-z45bq:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-z45bq
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age        From               Message
  ----     ------            ----       ----               -------
  Warning  FailedScheduling  <unknown>  default-scheduler  0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.
  Warning  FailedScheduling  <unknown>  default-scheduler  0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.

I have been searching for a way to see which taints are stopping the pod from being scheduled, with no luck.

Is there a way to see the taint that is causing the failure?
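(For anyone hitting the same question: one way to list node taints directly is shown below. This is a sketch; the custom-columns JSONPath and the grep are just two ways of filtering the same information out of kubectl.)

```shell
# Show each node's taints in one column; <none> means no taint blocks scheduling
kubectl get nodes -o custom-columns='NAME:.metadata.name,TAINTS:.spec.taints'

# Or filter the human-readable describe output
kubectl describe nodes | grep -i taints
```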

kubectl get nodes output:

NAME   STATUS   ROLES    AGE   VERSION
m01    Ready    master   11h   v1.17.3

- EDIT -

kubectl describe nodes output:

Name:               home-pc
Roles:              master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=home-pc
                    kubernetes.io/os=linux
                    minikube.k8s.io/commit=eb13446e786c9ef70cb0a9f85a633194e62396a1
                    minikube.k8s.io/name=minikube
                    minikube.k8s.io/updated_at=2020_03_17T22_51_28_0700
                    minikube.k8s.io/version=v1.8.2
                    node-role.kubernetes.io/master=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Tue, 17 Mar 2020 22:51:25 -0500
Taints:             <none>
Unschedulable:      false
Lease:
  HolderIdentity:  home-pc
  AcquireTime:     <unset>
  RenewTime:       Tue, 17 Mar 2020 22:51:41 -0500
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Tue, 17 Mar 2020 22:51:41 -0500   Tue, 17 Mar 2020 22:51:21 -0500   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Tue, 17 Mar 2020 22:51:41 -0500   Tue, 17 Mar 2020 22:51:21 -0500   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Tue, 17 Mar 2020 22:51:41 -0500   Tue, 17 Mar 2020 22:51:21 -0500   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Tue, 17 Mar 2020 22:51:41 -0500   Tue, 17 Mar 2020 22:51:41 -0500   KubeletReady                 kubelet is posting ready status. AppArmor enabled
Addresses:
  InternalIP:  192.168.0.12
  Hostname:    home-pc
Capacity:
  cpu:                12
  ephemeral-storage:  227688908Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             8159952Ki
  pods:               110
Allocatable:
  cpu:                12
  ephemeral-storage:  209838097266
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             8057552Ki
  pods:               110
System Info:
  Machine ID:                 339d426453b4492da92f75d06acc1e0d
  System UUID:                62eedb55-444f-61ce-75e9-b06ebf3331a0
  Boot ID:                    a9ae9889-d7cb-48c5-ae75-b2052292ac7a
  Kernel Version:             5.0.0-38-generic
  OS Image:                   Ubuntu 19.04
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  docker://19.3.5
  Kubelet Version:            v1.17.3
  Kube-Proxy Version:         v1.17.3
Non-terminated Pods:          (7 in total)
  Namespace                   Name                               CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------                   ----                               ------------  ----------  ---------------  -------------  ---
  kube-system                 coredns-6955765f44-mbwqt           100m (0%)     0 (0%)      70Mi (0%)        170Mi (2%)     10s
  kube-system                 coredns-6955765f44-sblf2           100m (0%)     0 (0%)      70Mi (0%)        170Mi (2%)     10s
  kube-system                 etcd-home-pc                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         13s
  kube-system                 kube-apiserver-home-pc             250m (2%)     0 (0%)      0 (0%)           0 (0%)         13s
  kube-system                 kube-controller-manager-home-pc    200m (1%)     0 (0%)      0 (0%)           0 (0%)         13s
  kube-system                 kube-proxy-lk7xs                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
  kube-system                 kube-scheduler-home-pc             100m (0%)     0 (0%)      0 (0%)           0 (0%)         12s
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                750m (6%)   0 (0%)
  memory             140Mi (1%)  340Mi (4%)
  ephemeral-storage  0 (0%)      0 (0%)
Events:
  Type    Reason                   Age                From                 Message
  ----    ------                   ----               ----                 -------
  Normal  Starting                 24s                kubelet, home-pc     Starting kubelet.
  Normal  NodeHasSufficientMemory  23s (x4 over 24s)  kubelet, home-pc     Node home-pc status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    23s (x3 over 24s)  kubelet, home-pc     Node home-pc status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     23s (x3 over 24s)  kubelet, home-pc     Node home-pc status is now: NodeHasSufficientPID
  Normal  NodeAllocatableEnforced  23s                kubelet, home-pc     Updated Node Allocatable limit across pods
  Normal  Starting                 13s                kubelet, home-pc     Starting kubelet.
  Normal  NodeHasSufficientMemory  13s                kubelet, home-pc     Node home-pc status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    13s                kubelet, home-pc     Node home-pc status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     13s                kubelet, home-pc     Node home-pc status is now: NodeHasSufficientPID
  Normal  NodeAllocatableEnforced  13s                kubelet, home-pc     Updated Node Allocatable limit across pods
  Normal  Starting                 9s                 kube-proxy, home-pc  Starting kube-proxy.
  Normal  NodeReady                3s                 kubelet, home-pc     Node home-pc status is now: NodeReady

1 Answer

2 votes
/ Mar 18, 2020

You have taints on the node that prevent the scheduler from placing the pod. Either remove the taint from the master node, or add a matching toleration to the pod spec.
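A sketch of both options, assuming the taint is the standard kubeadm master taint `node-role.kubernetes.io/master:NoSchedule` (check the actual key with `kubectl describe node <name>` first) and using the node name `m01` from the question's `kubectl get nodes` output:

```shell
# Option 1: remove the taint from the master node
# (the trailing "-" after the effect deletes the taint)
kubectl taint nodes m01 node-role.kubernetes.io/master:NoSchedule-

# Option 2: keep the taint and add a toleration to the pod instead
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: client-pod
  labels:
    component: web
spec:
  tolerations:
    - key: node-role.kubernetes.io/master
      effect: NoSchedule
  containers:
    - name: client
      image: stephengrider/multi-client
      ports:
        - containerPort: 3000
EOF
```

With the toleration in place, reapplying the manifest should let the scheduler assign the pod to the node, and `kubectl get pods` should move from Pending to ContainerCreating/Running.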

...