Cannot reach a pod deployed on other nodes - PullRequest
1 vote
/ June 13, 2019

I have a two-node Kubernetes cluster in AWS, a master and one worker node, which I created with kubeadm (https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/), and I am using Calico networking.

kubectl get node shows the worker node status as Ready.

I created deployments from the master node (some of them use a node selector matching the worker node's label), and I can see that pods for some of the deployments were created on the worker node. The problem is that I cannot reach a pod IP (kubectl get ep) from the master or from the other node. A pod IP is only reachable from the node where the pod is running.

Do I need any additional configuration to reach a worker-node pod from the master node?

One observation: pods deployed on the master node have IPs 192.168.179.101 and 192.168.179.105, while pods on the worker node have 192.168.97.5 and 192.168.97.8.
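For context, these per-node ranges are consistent with Calico's IPAM: the calico-node manifest below sets CALICO_IPV4POOL_CIDR to 192.168.0.0/16, and Calico carves that pool into per-node address blocks (/26 is Calico's default block size), which is why each node's pods share their own prefix. A minimal sketch of that arithmetic, assuming the default /26 block size:

```python
import ipaddress

# CALICO_IPV4POOL_CIDR from the calico-node manifest below
pool = ipaddress.ip_network("192.168.0.0/16")

# Pod IPs observed on each node
master_pods = ["192.168.179.101", "192.168.179.105"]
worker_pods = ["192.168.97.5", "192.168.97.8"]

# All pod IPs come from the single Calico pool
assert all(ipaddress.ip_address(ip) in pool for ip in master_pods + worker_pods)

def block(ip, prefix=26):
    """Sub-block of the pool that a pod IP belongs to (/26 assumed)."""
    return ipaddress.ip_network(f"{ip}/{prefix}", strict=False)

# Each node's pods fall into that node's own block of the pool
print(block(master_pods[0]))  # the master node's block
print(block(worker_pods[0]))  # the worker node's block
assert block(master_pods[0]) != block(worker_pods[0])
```

So the addressing itself looks normal; the distinct prefixes per node are expected, not the cause of the reachability problem.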

Thanks.

aquilak8suser@ip-172-31-6-149:~$ kubectl get nodes -o wide
NAME              STATUS   ROLES    AGE     VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION    CONTAINER-RUNTIME
ip-172-31-6-149   Ready    master   8d      v1.14.3   172.31.6.149   <none>        Ubuntu 18.04.2 LTS   4.15.0-1040-aws   docker://18.9.6
k8s-workernode1   Ready    <none>   2d14h   v1.14.3   172.31.11.87   <none>        Ubuntu 18.04.2 LTS   4.15.0-1040-aws   docker://18.9.6


aquilak8suser@ip-172-31-6-149:~$ kubectl get --all-namespaces pods -o wide
NAMESPACE     NAME                                          READY   STATUS    RESTARTS   AGE     IP               NODE              NOMINATED NODE   READINESS GATES
default       busybox                                       1/1     Running   2          14h     192.168.97.14    k8s-workernode1   <none>           <none>
default       spring-boot-demo-action-767dc76c9c-jdrkz      1/1     Running   1          19h     192.168.97.17    k8s-workernode1   <none>           <none>
default       spring-boot-demo-billing-74f7b6f64-6t2jz      1/1     Running   2          18h     192.168.97.15    k8s-workernode1   <none>           <none>
default       spring-boot-demo-collector-67665bffc6-mhb59   1/1     Running   1          18h     192.168.97.16    k8s-workernode1   <none>           <none>
default       spring-boot-demo-model-6d96bc89c8-llmh7       1/1     Running   1          18h     192.168.97.18    k8s-workernode1   <none>           <none>
default       spring-boot-demo-web-7c945ddcdc-9g2tj         1/1     Running   1          19h     192.168.179.67   ip-172-31-6-149   <none>           <none>
kube-system   calico-kube-controllers-5f454f49dd-75r5w      1/1     Running   5          8d      192.168.179.66   ip-172-31-6-149   <none>           <none>
kube-system   calico-node-298r4                             0/1     Running   5          8d      172.31.6.149     ip-172-31-6-149   <none>           <none>
kube-system   calico-node-7vndt                             0/1     Running   2          2d14h   172.31.11.87     k8s-workernode1   <none>           <none>
kube-system   coredns-fb8b8dccf-6qrl7                       1/1     Running   2          2d14h   192.168.179.70   ip-172-31-6-149   <none>           <none>
kube-system   coredns-fb8b8dccf-txdz8                       1/1     Running   2          2d14h   192.168.179.71   ip-172-31-6-149   <none>           <none>
kube-system   etcd-ip-172-31-6-149                          1/1     Running   2          2d14h   172.31.6.149     ip-172-31-6-149   <none>           <none>
kube-system   kube-apiserver-ip-172-31-6-149                1/1     Running   2          2d14h   172.31.6.149     ip-172-31-6-149   <none>           <none>
kube-system   kube-controller-manager-ip-172-31-6-149       1/1     Running   2          2d14h   172.31.6.149     ip-172-31-6-149   <none>           <none>
kube-system   kube-proxy-f2rdm                              1/1     Running   2          2d14h   172.31.6.149     ip-172-31-6-149   <none>           <none>
kube-system   kube-proxy-flfgg                              1/1     Running   2          2d14h   172.31.11.87     k8s-workernode1   <none>           <none>
kube-system   kube-scheduler-ip-172-31-6-149                1/1     Running   2          2d14h   172.31.6.149     ip-172-31-6-149   <none>           <none>

calico-node pod YAML (note that it is the calico-node pod, not a DNS pod, that is 0/1 Not Ready):

k8suser@ip-172-31-6-149:~$ kubectl -n kube-system get -o yaml pod calico-node-298r4
apiVersion: v1
kind: Pod
metadata:
  annotations:
    scheduler.alpha.kubernetes.io/critical-pod: ""
  creationTimestamp: "2019-06-06T04:20:16Z"
  generateName: calico-node-
  labels:
    controller-revision-hash: 5b9bbb5cf5
    k8s-app: calico-node
    pod-template-generation: "1"
  name: calico-node-298r4
  namespace: kube-system
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: DaemonSet
    name: calico-node
    uid: 62b222b1-8812-11e9-bccc-029ff954c4b8
  resourceVersion: "468710"
  selfLink: /api/v1/namespaces/kube-system/pods/calico-node-298r4
  uid: 634ee54d-8812-11e9-bccc-029ff954c4b8
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchFields:
          - key: metadata.name
            operator: In
            values:
            - ip-172-31-6-149
  containers:
  - env:
    - name: DATASTORE_TYPE
      value: kubernetes
    - name: WAIT_FOR_DATASTORE
      value: "true"
    - name: NODENAME
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: spec.nodeName
    - name: CALICO_NETWORKING_BACKEND
      valueFrom:
        configMapKeyRef:
          key: calico_backend
          name: calico-config
    - name: CLUSTER_TYPE
      value: k8s,bgp
    - name: IP
      value: autodetect
    - name: CALICO_IPV4POOL_IPIP
      value: Always
    - name: FELIX_IPINIPMTU
      valueFrom:
        configMapKeyRef:
          key: veth_mtu
          name: calico-config
    - name: CALICO_IPV4POOL_CIDR
      value: 192.168.0.0/16
    - name: CALICO_DISABLE_FILE_LOGGING
      value: "true"
    - name: FELIX_DEFAULTENDPOINTTOHOSTACTION
      value: ACCEPT
    - name: FELIX_IPV6SUPPORT
      value: "false"
    - name: FELIX_LOGSEVERITYSCREEN
      value: info
    - name: FELIX_HEALTHENABLED
      value: "true"
    image: calico/node:v3.7.2
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 6
      httpGet:
        host: localhost
        path: /liveness
        port: 9099
        scheme: HTTP
      initialDelaySeconds: 10
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 1
    name: calico-node
    readinessProbe:
      exec:
        command:
        - /bin/calico-node
        - -bird-ready
        - -felix-ready
      failureThreshold: 3
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 1
    resources:
      requests:
        cpu: 250m
    securityContext:
      privileged: true
      procMount: Default
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /lib/modules
      name: lib-modules
      readOnly: true
    - mountPath: /run/xtables.lock
      name: xtables-lock
    - mountPath: /var/run/calico
      name: var-run-calico
    - mountPath: /var/lib/calico
      name: var-lib-calico
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: calico-node-token-6xvr5
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  hostNetwork: true
  initContainers:
  - command:
    - /opt/cni/bin/calico-ipam
    - -upgrade
    env:
    - name: KUBERNETES_NODE_NAME
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: spec.nodeName
    - name: CALICO_NETWORKING_BACKEND
      valueFrom:
        configMapKeyRef:
          key: calico_backend
          name: calico-config
    image: calico/cni:v3.7.2
    imagePullPolicy: IfNotPresent
    name: upgrade-ipam
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/lib/cni/networks
      name: host-local-net-dir
    - mountPath: /host/opt/cni/bin
      name: cni-bin-dir
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: calico-node-token-6xvr5
      readOnly: true
  - command:
    - /install-cni.sh
    env:
    - name: CNI_CONF_NAME
      value: 10-calico.conflist
    - name: CNI_NETWORK_CONFIG
      valueFrom:
        configMapKeyRef:
          key: cni_network_config
          name: calico-config
    - name: KUBERNETES_NODE_NAME
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: spec.nodeName
    - name: CNI_MTU
      valueFrom:
        configMapKeyRef:
          key: veth_mtu
          name: calico-config
    - name: SLEEP
      value: "false"
    image: calico/cni:v3.7.2
    imagePullPolicy: IfNotPresent
    name: install-cni
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /host/opt/cni/bin
      name: cni-bin-dir
    - mountPath: /host/etc/cni/net.d
      name: cni-net-dir
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: calico-node-token-6xvr5
      readOnly: true
  nodeName: ip-172-31-6-149
  nodeSelector:
    beta.kubernetes.io/os: linux
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: calico-node
  serviceAccountName: calico-node
  terminationGracePeriodSeconds: 0
  tolerations:
  - effect: NoSchedule
    operator: Exists
  - key: CriticalAddonsOnly
    operator: Exists
  - effect: NoExecute
    operator: Exists
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
  - effect: NoSchedule
    key: node.kubernetes.io/disk-pressure
    operator: Exists
  - effect: NoSchedule
    key: node.kubernetes.io/memory-pressure
    operator: Exists
  - effect: NoSchedule
    key: node.kubernetes.io/unschedulable
    operator: Exists
  - effect: NoSchedule
    key: node.kubernetes.io/network-unavailable
    operator: Exists
  volumes:
  - hostPath:
      path: /lib/modules
      type: ""
    name: lib-modules
  - hostPath:
      path: /var/run/calico
      type: ""
    name: var-run-calico
  - hostPath:
      path: /var/lib/calico
      type: ""
    name: var-lib-calico
  - hostPath:
      path: /run/xtables.lock
      type: FileOrCreate
    name: xtables-lock
  - hostPath:
      path: /opt/cni/bin
      type: ""
    name: cni-bin-dir
  - hostPath:
      path: /etc/cni/net.d
      type: ""
    name: cni-net-dir
  - hostPath:
      path: /var/lib/cni/networks
      type: ""
    name: host-local-net-dir
  - name: calico-node-token-6xvr5
    secret:
      defaultMode: 420
      secretName: calico-node-token-6xvr5
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2019-06-14T05:21:15Z"
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: "2019-06-11T14:42:00Z"
    message: 'containers with unready status: [calico-node]'
    reason: ContainersNotReady
    status: "False"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: "2019-06-11T14:42:00Z"
    message: 'containers with unready status: [calico-node]'
    reason: ContainersNotReady
    status: "False"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: "2019-06-06T04:20:16Z"
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: docker://a3f683c4143b1cc0da11e2fd715b65e07a75108a459cb160c3a81754875a2eab
    image: calico/node:v3.7.2
    imageID: docker-pullable://calico/node@sha256:8b565422f4cabd9652e0e912f3ea8707734cbc69f5835642f094d1ed0a087d5b
    lastState:
      terminated:
        containerID: docker://4665eed7b1e8ad2e43d4c29cd74172933da8f743c7afb7ee4b6697a6565e9e65
        exitCode: 0
        finishedAt: "2019-06-14T15:44:50Z"
        reason: Completed
        startedAt: "2019-06-14T05:21:15Z"
    name: calico-node
    ready: false
    restartCount: 6
    state:
      running:
        startedAt: "2019-06-15T05:47:06Z"
  hostIP: 172.31.6.149
  initContainerStatuses:
  - containerID: docker://d722b7e3e7157462acb49da14cc32422f3f67189ff3c3074d821563a46998640
    image: calico/cni:v3.7.2
    imageID: docker-pullable://calico/cni@sha256:9853acbb98f2225572a9374d9de5726dd93ae02ab397ca8b4ad24f953adf465c
    lastState: {}
    name: upgrade-ipam
    ready: true
    restartCount: 3
    state:
      terminated:
        containerID: docker://d722b7e3e7157462acb49da14cc32422f3f67189ff3c3074d821563a46998640
        exitCode: 0
        finishedAt: "2019-06-15T05:46:41Z"
        reason: Completed
        startedAt: "2019-06-15T05:46:41Z"
  - containerID: docker://d23fb5d8ddee775fd183cd4c552a376a3e49105e4f2472335ab9cc67cd883c8f
    image: calico/cni:v3.7.2
    imageID: docker-pullable://calico/cni@sha256:9853acbb98f2225572a9374d9de5726dd93ae02ab397ca8b4ad24f953adf465c
    lastState: {}
    name: install-cni
    ready: true
    restartCount: 0
    state:
      terminated:
        containerID: docker://d23fb5d8ddee775fd183cd4c552a376a3e49105e4f2472335ab9cc67cd883c8f
        exitCode: 0
        finishedAt: "2019-06-15T05:47:05Z"
        reason: Completed
        startedAt: "2019-06-15T05:47:05Z"
  phase: Running
  podIP: 172.31.6.149
  qosClass: Burstable
  startTime: "2019-06-06T04:20:16Z"
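The status above is the notable part: the calico-node container on the master is not Ready (reason ContainersNotReady), and cross-node pod routing depends on calico-node being healthy. A minimal sketch of pulling the Ready condition out of a pod manifest like the one above (the dict inlines only the relevant fields from that YAML; `ready_condition` is a hypothetical helper name):

```python
# Relevant fields mirrored from the calico-node-298r4 pod status above,
# as `kubectl get -o json pod ...` would return them.
pod = {
    "metadata": {"name": "calico-node-298r4"},
    "status": {
        "conditions": [
            {"type": "Initialized", "status": "True"},
            {"type": "Ready", "status": "False",
             "reason": "ContainersNotReady",
             "message": "containers with unready status: [calico-node]"},
            {"type": "PodScheduled", "status": "True"},
        ]
    },
}

def ready_condition(pod):
    """Return the pod's Ready condition dict, or None if absent."""
    for cond in pod["status"].get("conditions", []):
        if cond["type"] == "Ready":
            return cond
    return None

cond = ready_condition(pod)
print(pod["metadata"]["name"], "Ready:", cond["status"], "-", cond.get("reason", ""))
```

The same check is what `kubectl get pods` summarizes as the 0/1 READY column for both calico-node pods in the listing above.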

Relevant calico-node logs:

Calico node started successfully
bird: Unable to open configuration file /etc/calico/confd/config/bird6.cfg: No such file or directory
bird: Unable to open configuration file /etc/calico/confd/config/bird.cfg: No such file or directory
2019-06-15 05:47:08.005 [INFO][55] logutils.go 82: Early screen log level set to info
2019-06-15 05:47:08.005 [INFO][55] daemon.go 139: Felix starting up GOMAXPROCS=8 buildDate="" gitCommit="8ed6333006e4b04a744398c6eca9fde31e08b6d8" version="v3.7.2"
2019-06-15 05:47:08.005 [INFO][55] daemon.go 157: Loading configuration...
2019-06-15 05:47:08.006 [INFO][56] config.go 105: Skipping confd config file.
2019-06-15 05:47:08.006 [INFO][56] run.go 17: Starting calico-confd
2019-06-15 05:47:08.007 [INFO][56] k8s.go 228: Using Calico IPAM
2019-06-15 05:47:08.006 [INFO][55] env_var_loader.go 40: Found felix environment variable: "ipv6support"="false"

2019-06-15 05:47:08.010 [INFO][55] config_params.go 320: Parsing value for Ipv6Support: false (from environment variable)
2019-06-15 05:47:08.010 [INFO][55] config_params.go 356: Parsed value for Ipv6Support: false (from environment variable)

2019-06-15 05:47:08.016 [INFO][56] watchersyncer.go 89: Start called
2019-06-15 05:47:08.017 [INFO][56] client.go 183: CALICO_ADVERTISE_CLUSTER_IPS not specified, no cluster ips will be advertised
...