I am trying to set up Kubernetes (following CentOS 7 tutorials) on three virtual machines;
unfortunately, joining the worker fails. I hope someone has already run into this problem (I found it twice on the internet, both times without answers) or can guess what is going wrong.
Here is what I get from kubeadm join:
[preflight] running pre-flight checks
[WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh] or no builtin kernel ipvs support: map[ip_vs:{} ip_vs_rr:{} ip_vs_wrr:{} ip_vs_sh:{} nf_conntrack_ipv4:{}]
you can solve this problem with following methods:
1. Run 'modprobe -- ' to load missing kernel modules;
2. Provide the missing builtin kernel ipvs support
I0902 20:31:15.401693 2032 kernel_validator.go:81] Validating kernel version
I0902 20:31:15.401768 2032 kernel_validator.go:96] Validating kernel config
[WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 18.06.1-ce. Max validated version: 17.03
[discovery] Trying to connect to API Server "192.168.1.30:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.1.30:6443"
[discovery] Requesting info from "https://192.168.1.30:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.1.30:6443"
[discovery] Successfully established connection with API Server "192.168.1.30:6443"
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.11" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[preflight] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
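(Side note on the IPVS warning at the top of the join output: it is only a warning, since kube-proxy falls back to iptables mode, but the listed modules could be loaded roughly like this - just a sketch:)
#!/bin/sh
# load the kernel modules named in the preflight warning (optional, the join does not require IPVS)
for mod in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack_ipv4; do
    modprobe -- "$mod"
done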
Although the kubelet is running:
[root@k8s-worker1 nodesetup]# systemctl status kubelet -l
● kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: disabled)
Drop-In: /etc/systemd/system/kubelet.service.d
└─10-kubeadm.conf
Active: active (running) since So 2018-09-02 20:31:15 CEST; 19min ago
Docs: https://kubernetes.io/docs/
Main PID: 2093 (kubelet)
Tasks: 7
Memory: 12.1M
CGroup: /system.slice/kubelet.service
└─2093 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --cgroup-driver=cgroupfs --cni-bin-dir=/opt/cni/bin --cni-conf-dir=/etc/cni/net.d --network-plugin=cni
Sep 02 20:31:15 k8s-worker1 systemd[1]: Started kubelet: The Kubernetes Node Agent.
Sep 02 20:31:15 k8s-worker1 systemd[1]: Starting kubelet: The Kubernetes Node Agent...
Sep 02 20:31:15 k8s-worker1 kubelet[2093]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 02 20:31:15 k8s-worker1 kubelet[2093]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 02 20:31:16 k8s-worker1 kubelet[2093]: I0902 20:31:16.440010 2093 server.go:408] Version: v1.11.2
Sep 02 20:31:16 k8s-worker1 kubelet[2093]: I0902 20:31:16.440314 2093 plugins.go:97] No cloud provider specified.
[root@k8s-worker1 nodesetup]#
As far as I can see, the worker can connect to the master, but it then runs a health check against some local endpoint that never comes up. Any ideas?
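(That local endpoint is the kubelet's own healthz port on the node itself; as a sketch, it can be checked by hand and correlated with the kubelet log:)
#!/bin/sh
# the same health check kubeadm performs while waiting for the TLS bootstrap
curl -sSL http://localhost:10248/healthz
# the kubelet log usually shows why the endpoint is not being served
journalctl -xeu kubelet --no-pager | tail -n 50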
Here is what I did to set up my worker:
exec bash
setenforce 0
sed -i --follow-symlinks 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux
echo "Setting Firewallrules"
firewall-cmd --permanent --add-port=10250/tcp
firewall-cmd --permanent --add-port=10255/tcp
firewall-cmd --permanent --add-port=30000-32767/tcp
firewall-cmd --permanent --add-port=6783/tcp
firewall-cmd --reload
echo "And enable br filtering"
modprobe br_netfilter
echo '1' > /proc/sys/net/bridge/bridge-nf-call-iptables
echo "disable swap"
swapoff -a
echo "### You need to edit /etc/fstab and comment the swapline!! ###"
echo "Adding kubernetes repo for download"
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
echo "install the Docker-ce dependencies"
yum install -y yum-utils device-mapper-persistent-data lvm2
echo "add docker-ce repository"
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
echo "install docker ce"
yum install -y docker-ce
echo "Install kubeadm kubelet kubectl"
yum install kubelet kubeadm kubectl -y
echo "start and enable kubectl"
systemctl restart docker && systemctl enable docker
systemctl restart kubelet && systemctl enable kubelet
echo "Now we need to ensure that both Docker-ce and Kubernetes belong to the same control group (cgroup)"
echo "We assume that docker is using cgroupfs ... assuming kubelet does so too"
docker info | grep -i cgroup
grep -i cgroup /var/lib/kubelet/kubeadm-flags.env
# old style
# sed -i 's/cgroup-driver=systemd/cgroup-driver=cgroupfs/g' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
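# new style (a sketch; assumption: kubeadm-flags.env only exists after kubeadm has run on this node)
# sed -i 's/cgroup-driver=systemd/cgroup-driver=cgroupfs/g' /var/lib/kubelet/kubeadm-flags.env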
systemctl daemon-reload
systemctl restart kubelet
# There has been an issue reported that traffic in iptables is being routed incorrectly.
# The settings below make sure iptables is configured correctly.
#
sudo bash -c 'cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF'
# Make changes effective
sudo sysctl --system
Thanks in advance for any help.
Update I
journalctl output from the worker:
[root@k8s-worker1 ~]# journalctl -xeu kubelet
Sep 02 21:19:56 k8s-worker1 systemd[1]: Started kubelet: The Kubernetes Node Agent.
-- Subject: Unit kubelet.service has finished start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit kubelet.service has finished starting up.
--
-- The start-up result is done.
Sep 02 21:19:56 k8s-worker1 systemd[1]: Starting kubelet: The Kubernetes Node Agent...
-- Subject: Unit kubelet.service has begun start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit kubelet.service has begun starting up.
Sep 02 21:19:56 k8s-worker1 kubelet[3082]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --confi
Sep 02 21:19:56 k8s-worker1 kubelet[3082]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --confi
Sep 02 21:19:56 k8s-worker1 kubelet[3082]: I0902 21:19:56.788059 3082 server.go:408] Version: v1.11.2
Sep 02 21:19:56 k8s-worker1 kubelet[3082]: I0902 21:19:56.788214 3082 plugins.go:97] No cloud provider specified.
Sep 02 21:19:56 k8s-worker1 kubelet[3082]: F0902 21:19:56.814469 3082 server.go:262] failed to run Kubelet: cannot create certificate signing request: Unauthorized
Sep 02 21:19:56 k8s-worker1 systemd[1]: kubelet.service: main process exited, code=exited, status=255/n/a
Sep 02 21:19:56 k8s-worker1 systemd[1]: Unit kubelet.service entered failed state.
Sep 02 21:19:56 k8s-worker1 systemd[1]: kubelet.service failed.
And getting the pods on the master side yields:
[root@k8s-master ~]# kubectl get pods --all-namespaces=true
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-78fcdf6894-79n2m 0/1 Pending 0 1d
kube-system coredns-78fcdf6894-tlngr 0/1 Pending 0 1d
kube-system etcd-k8s-master 1/1 Running 3 1d
kube-system kube-apiserver-k8s-master 1/1 Running 0 1d
kube-system kube-controller-manager-k8s-master 0/1 Evicted 0 1d
kube-system kube-proxy-2x8cx 1/1 Running 3 1d
kube-system kube-scheduler-k8s-master 1/1 Running 0 1d
[root@k8s-master ~]#
Update II
As a next step I generated a new token on the master side and used it in the join command. Although the token shows up as valid in the master's token list, the worker node insists that the master does not know this token or that it has expired... Stop! Time to start over from scratch, beginning with the master setup.
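(For reference, listing and regenerating the bootstrap token on the master looks roughly like this - just a sketch:)
#!/bin/sh
# show the existing bootstrap tokens and their TTL
kubeadm token list
# create a fresh token and print the complete kubeadm join command for it
kubeadm token create --print-join-command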
So here is what I did:
1) Reinstall the master VM, meaning a fresh CentOS 7 install (CentOS-7-x86_64-Minimal-1804.iso) in VirtualBox. Network configured in VirtualBox: adapter 1 as NAT towards the host system (so that components can be installed) and adapter 2 as an internal network (same network name on the master and worker nodes for the kubernetes network).
2) With the new image installed, the primary interface enp0s3 was not configured to come up at boot (so I ran ifup enp0s3 and reconfigured it in /etc/sysconfig/network-scripts to start at boot).
3) Configure the second interface for the internal kubernetes network:
/etc/hosts:
#!/bin/sh
echo '192.168.1.30 k8s-master' >> /etc/hosts
echo '192.168.1.40 k8s-worker1' >> /etc/hosts
echo '192.168.1.50 k8s-worker2' >> /etc/hosts
I identified my second interface via "ip -color -human addr", which in my case showed enp0s8, so:
#!/bin/sh
echo "Setting up internal Interface"
cat <<EOF > /etc/sysconfig/network-scripts/ifcfg-enp0s8
DEVICE=enp0s8
IPADDR=192.168.1.30
NETMASK=255.255.255.0
NETWORK=192.168.1.0
BROADCAST=192.168.1.255
ONBOOT=yes
NAME=enp0s8
EOF
echo "Activate interface"
ifup enp0s8
4) Hostname, swap, disable SELinux
#!/bin/sh
echo "Setting hostname und deactivate SELinux"
hostnamectl set-hostname 'k8s-master'
exec bash
setenforce 0
sed -i --follow-symlinks 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux
echo "disable swap"
swapoff -a
echo "### You need to edit /etc/fstab and comment the swapline!! ###"
A few remarks: I rebooted once I noticed that the later preflight checks apparently parse /etc/fstab to verify that no swap entry exists. It also looks like CentOS re-activates SELinux (I still need to verify this); as a workaround I disabled it again after every reboot.
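(To double-check later that those changes really persist, something along these lines should do - just a sketch:)
#!/bin/sh
# keep SELinux disabled across reboots (this is the file the symlink above points to)
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
# verify the state after the reboot
sestatus
swapon -s   # should list no active swap devices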
5) Set up the required firewall rules
#!/bin/sh
echo "Setting Firewallrules"
firewall-cmd --permanent --add-port=6443/tcp
firewall-cmd --permanent --add-port=2379-2380/tcp
firewall-cmd --permanent --add-port=10250/tcp
firewall-cmd --permanent --add-port=10251/tcp
firewall-cmd --permanent --add-port=10252/tcp
firewall-cmd --permanent --add-port=10255/tcp
firewall-cmd --reload
echo "And enable br filtering"
modprobe br_netfilter
echo '1' > /proc/sys/net/bridge/bridge-nf-call-iptables
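(The modprobe and the echo into /proc above are not persistent across reboots; as a sketch, the module load can be made permanent like this - the sysctl part is covered later by /etc/sysctl.d/k8s.conf in step 7:)
#!/bin/sh
# have systemd load br_netfilter automatically at boot
echo 'br_netfilter' > /etc/modules-load.d/br_netfilter.conf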
6) Add the kubernetes repository
#!/bin/sh
echo "Adding kubernetes repo for download"
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
7) Install the required packages and configure the services
#!/bin/sh
echo "install the Docker-ce dependencies"
yum install -y yum-utils device-mapper-persistent-data lvm2
echo "add docker-ce repository"
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
echo "install docker ce"
yum install -y docker-ce
echo "Install kubeadm kubelet kubectl"
yum install kubelet kubeadm kubectl -y
echo "start and enable kubectl"
systemctl restart docker && systemctl enable docker
systemctl restart kubelet && systemctl enable kubelet
echo "Now we need to ensure that both Docker-ce and Kubernetes belong to the same control group (cgroup)"
echo "We assume that docker is using cgroupfs ... assuming kubelet does so too"
docker info | grep -i cgroup
grep -i cgroup /var/lib/kubelet/kubeadm-flags.env
# old style
# sed -i 's/cgroup-driver=systemd/cgroup-driver=cgroupfs/g' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
systemctl daemon-reload
systemctl restart kubelet
# There has been an issue reported that traffic in iptables is being routed incorrectly.
# The settings below make sure iptables is configured correctly.
#
sudo bash -c 'cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF'
# Make changes effective
sudo sysctl --system
8) Initialize the cluster
#!/bin/sh
echo "Init kubernetes. Check join cmd in initProtocol.txt"
kubeadm init --apiserver-advertise-address=192.168.1.30 --pod-network-cidr=192.168.1.0/16 | tee initProtocol.txt
For reference, here is the output of that command:
Init kubernetes. Check join cmd in initProtocol.txt
[init] using Kubernetes version: v1.11.2
[preflight] running pre-flight checks
[WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
I0904 21:53:15.271999 1526 kernel_validator.go:81] Validating kernel version
I0904 21:53:15.272165 1526 kernel_validator.go:96] Validating kernel config
[WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 18.06.1-ce. Max validated version: 17.03
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.1.30]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.1.30 127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 43.504792 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.11" in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node k8s-master as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node k8s-master as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8s-master" as an annotation
[bootstraptoken] using token: n4yt3r.3c8tuj11nwszts2d
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of machines by running the following on each node
as root:
kubeadm join 192.168.1.30:6443 --token n4yt3r.3c8tuj11nwszts2d --discovery-token-ca-cert-hash sha256:466e7972a4b6997651ac1197fdde68d325a7bc41f2fccc2b1efc17515af61172
Remark: so far everything looks fine, although I am a bit worried that the latest docker-ce version might cause problems...
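(If the docker version really turns out to be the problem, the repository also allows pinning a validated docker-ce release instead of the latest one - a sketch, the exact version string has to be taken from the list:)
#!/bin/sh
# list the docker-ce versions available in the repository
yum list docker-ce --showduplicates | sort -r
# then pin one explicitly, e.g. (placeholder version string):
# yum install -y docker-ce-<version-from-the-list>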
9) Deploy the pod network
#!/bin/bash
echo "Configure demo cluster usage as root"
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
# Deploy-Network using flanel
# Taken from first matching two tutorials on the web
# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/k8s-manifests/kube-flannel-rbac.yml
# taken from https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/#pod-network
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.10.0/Documentation/kube-flannel.yml
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/c5d10c8/Documentation/kube-flannel.yml
echo "Try to run kubectl get pods --all-namespaces"
echo "After joining nodes: try to run kubectl get nodes to verify the status"
And here is the output of that command:
Configure demo cluster usage as root
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds created
clusterrole.rbac.authorization.k8s.io/flannel configured
clusterrolebinding.rbac.authorization.k8s.io/flannel configured
serviceaccount/flannel unchanged
configmap/kube-flannel-cfg unchanged
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created
Try to run kubectl get pods --all-namespaces
After joining nodes: try to run kubectl get nodes to verify the status
So I tried kubectl get pods --all-namespaces and got:
[root@k8s-master nodesetup]# kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-78fcdf6894-pflhc 0/1 Pending 0 33m
kube-system coredns-78fcdf6894-w7dxg 0/1 Pending 0 33m
kube-system etcd-k8s-master 1/1 Running 0 27m
kube-system kube-apiserver-k8s-master 1/1 Running 0 27m
kube-system kube-controller-manager-k8s-master 0/1 Evicted 0 27m
kube-system kube-proxy-stfxm 1/1 Running 0 28m
kube-system kube-scheduler-k8s-master 1/1 Running 0 27m
and
[root@k8s-master nodesetup]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master NotReady master 35m v1.11.2
Hmm... what is wrong with my master?
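(As a sketch, this is roughly how the NotReady state can be narrowed down - the node conditions and the pending coredns pods usually point to the pod network not being up yet:)
#!/bin/sh
# check the conditions the node reports
kubectl describe node k8s-master
# check why the coredns pods stay Pending
kubectl -n kube-system describe pods -l k8s-app=kube-dns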
Some observations:
When kubectl initially gave me "connection refused", I found out that it takes a few minutes until the service is up. While looking into that, I checked /var/log/firewalld and found a lot of entries like these:
2018-09-04 21:52:09 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t nat -D PREROUTING' failed: iptables: Bad rule (does a matching rule exist in that chain?).
2018-09-04 21:52:09 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t nat -D OUTPUT' failed: iptables: Bad rule (does a matching rule exist in that chain?).
2018-09-04 21:52:09 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t nat -F DOCKER' failed: iptables: No chain/target/match by that name.
2018-09-04 21:52:09 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t nat -X DOCKER' failed: iptables: No chain/target/match by that name.
2018-09-04 21:52:09 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t filter -F DOCKER' failed: iptables: No chain/target/match by that name.
2018-09-04 21:52:09 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t filter -X DOCKER' failed: iptables: No chain/target/match by that name.
2018-09-04 21:52:09 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t filter -F DOCKER-ISOLATION-STAGE-1' failed: iptables: No chain/target/match by that name.
2018-09-04 21:52:09 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t filter -X DOCKER-ISOLATION-STAGE-1' failed: iptables: No chain/target/match by that name.
2018-09-04 21:52:09 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t filter -F DOCKER-ISOLATION-STAGE-2' failed: iptables: No chain/target/match by that name.
2018-09-04 21:52:09 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t filter -X DOCKER-ISOLATION-STAGE-2' failed: iptables: No chain/target/match by that name.
2018-09-04 21:52:09 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t filter -F DOCKER-ISOLATION' failed: iptables: No chain/target/match by that name.
2018-09-04 21:52:09 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t filter -X DOCKER-ISOLATION' failed: iptables: No chain/target/match by that name.
2018-09-04 21:52:09 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t nat -n -L DOCKER' failed: iptables: No chain/target/match by that name.
2018-09-04 21:52:09 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t filter -n -L DOCKER' failed: iptables: No chain/target/match by that name.
2018-09-04 21:52:09 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t filter -n -L DOCKER-ISOLATION-STAGE-1' failed: iptables: No chain/target/match by that name.
2018-09-04 21:52:09 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t filter -n -L DOCKER-ISOLATION-STAGE-2' failed: iptables: No chain/target/match by that name.
2018-09-04 21:52:09 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t filter -C DOCKER-ISOLATION-STAGE-1 -j RETURN' failed: iptables: Bad rule (does a matching rule exist in that chain?).
Wrong docker version? The docker setup does not seem to be working properly.
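(One thing that might be worth trying here, as an assumption on my side: a firewall-cmd --reload throws away the DOCKER iptables chains, and restarting docker recreates them:)
#!/bin/sh
# recreate docker's iptables chains after a firewalld reload (assumption, not verified here)
systemctl restart docker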
Everything else I can check is on the master side...
It is getting late - tomorrow I will try to join my worker again (within the 24-hour validity window of the initial token).
Update III (after fixing the docker problem)
[root@k8s-master ~]# kubectl get pods --all-namespaces=true
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-78fcdf6894-pflhc 0/1 Pending 0 10h
kube-system coredns-78fcdf6894-w7dxg 0/1 Pending 0 10h
kube-system etcd-k8s-master 1/1 Running 0 10h
kube-system kube-apiserver-k8s-master 1/1 Running 0 10h
kube-system kube-controller-manager-k8s-master 1/1 Running 0 10h
kube-system kube-flannel-ds-amd64-crljm 0/1 Pending 0 1s
kube-system kube-flannel-ds-v6gcx 0/1 Pending 0 0s
kube-system kube-proxy-l2dck 0/1 Pending 0 0s
kube-system kube-scheduler-k8s-master 1/1 Running 0 10h
[root@k8s-master ~]#
And the master looks happy:
[root@k8s-master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 10h v1.11.2
[root@k8s-master ~]#
Stay tuned... after work I will fix docker/firewall on the worker machine and try joining the cluster again (I now know how to issue a new token if needed). So Update IV should follow in roughly 10 hours.