I'm using these commands with Helm 3 to install kubernetes dashboard 2.2.0 on Kubernetes v1.18; the OS is CentOS 8:
helm repo add kubernetes-dashboard https://kubernetes.github.io/dashboard/
helm repo update
helm install kubernetes-dashboard/kubernetes-dashboard --generate-name --version 2.2.0
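After installing with `--generate-name`, the generated release name and its deployment status can be confirmed with (a minimal sketch; the release name below matches the pod name shown later):

```shell
# List all Helm releases in the current namespace and show the
# status of the generated dashboard release.
helm list
helm status kubernetes-dashboard-1594440918
```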
The installation succeeded, but when I check the pod status it shows CrashLoopBackOff, like this:
[root@localhost ~]# kubectl get pods --all-namespaces -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
default kubernetes-dashboard-1594440918-549c59c487-h8z9l 0/1 CrashLoopBackOff 15 87m 10.11.157.65 k8sslave1 <none> <none>
default traefik-5f95ff4766-vg8gx 1/1 Running 0 34m 10.11.125.129 k8sslave2 <none> <none>
kube-system calico-kube-controllers-75d555c48-lt4jr 1/1 Running 0 36h 10.11.102.134 localhost.localdomain <none> <none>
kube-system calico-node-6rj58 1/1 Running 0 14h 192.168.31.30 k8sslave1 <none> <none>
kube-system calico-node-czhww 1/1 Running 0 36h 192.168.31.29 localhost.localdomain <none> <none>
kube-system calico-node-vwr5w 1/1 Running 0 36h 192.168.31.31 k8sslave2 <none> <none>
kube-system coredns-546565776c-45jr5 1/1 Running 40 4d13h 10.11.102.132 localhost.localdomain <none> <none>
kube-system coredns-546565776c-zjwg7 1/1 Running 0 4d13h 10.11.102.129 localhost.localdomain <none> <none>
kube-system etcd-localhost.localdomain 1/1 Running 0 14h 192.168.31.29 localhost.localdomain <none> <none>
kube-system kube-apiserver-localhost.localdomain 1/1 Running 0 14h 192.168.31.29 localhost.localdomain <none> <none>
kube-system kube-controller-manager-localhost.localdomain 1/1 Running 0 14h 192.168.31.29 localhost.localdomain <none> <none>
kube-system kube-proxy-8z9vs 1/1 Running 0 38h 192.168.31.31 k8sslave2 <none> <none>
kube-system kube-proxy-dnpc6 1/1 Running 0 4d13h 192.168.31.29 localhost.localdomain <none> <none>
kube-system kube-proxy-s5t5r 1/1 Running 0 14h 192.168.31.30 k8sslave1 <none> <none>
kube-system kube-scheduler-localhost.localdomain 1/1 Running 0 14h 192.168.31.29 localhost.localdomain <none> <none>
So I checked the logs of the kubernetes dashboard pod to see what was going on:
[root@localhost ~]# kubectl logs kubernetes-dashboard-1594440918-549c59c487-h8z9l
2020/07/11 05:44:13 Starting overwatch
2020/07/11 05:44:13 Using namespace: default
2020/07/11 05:44:13 Using in-cluster config to connect to apiserver
2020/07/11 05:44:13 Using secret token for csrf signing
2020/07/11 05:44:13 Initializing csrf token from kubernetes-dashboard-csrf secret
panic: Get "https://10.20.0.1:443/api/v1/namespaces/default/secrets/kubernetes-dashboard-csrf": dial tcp 10.20.0.1:443: i/o timeout
goroutine 1 [running]:
github.com/kubernetes/dashboard/src/app/backend/client/csrf.(*csrfTokenManager).init(0xc0000a2080)
/home/runner/work/dashboard/dashboard/src/app/backend/client/csrf/manager.go:41 +0x446
github.com/kubernetes/dashboard/src/app/backend/client/csrf.NewCsrfTokenManager(...)
/home/runner/work/dashboard/dashboard/src/app/backend/client/csrf/manager.go:66
github.com/kubernetes/dashboard/src/app/backend/client.(*clientManager).initCSRFKey(0xc0005a4100)
/home/runner/work/dashboard/dashboard/src/app/backend/client/manager.go:501 +0xc6
github.com/kubernetes/dashboard/src/app/backend/client.(*clientManager).init(0xc0005a4100)
/home/runner/work/dashboard/dashboard/src/app/backend/client/manager.go:469 +0x47
github.com/kubernetes/dashboard/src/app/backend/client.NewClientManager(...)
/home/runner/work/dashboard/dashboard/src/app/backend/client/manager.go:550
main.main()
/home/runner/work/dashboard/dashboard/src/app/backend/dashboard.go:105 +0x20d
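The panic shows the dashboard pod timing out while dialing the in-cluster apiserver service IP (10.20.0.1:443). One hedged way to check whether pods scheduled on that node can reach the service IP at all is to run a throwaway pod and probe it directly (assuming the busybox image is pullable in the cluster):

```shell
# Launch a temporary busybox pod and test TCP connectivity to the
# kubernetes service IP seen in the panic (10.20.0.1:443).
kubectl run nettest --rm -it --restart=Never --image=busybox -- \
  sh -c 'nc -zv -w 5 10.20.0.1 443'

# The same check by service DNS name also exercises CoreDNS:
kubectl run dnstest --rm -it --restart=Never --image=busybox -- \
  sh -c 'nslookup kubernetes.default.svc.cluster.local'
```

If the first probe times out from a pod but the same IP answers from the host (as the curl below suggests), the problem is usually in the pod network path (CNI or kube-proxy rules) rather than in the apiserver itself.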
I tried to access this resource with curl on the host machine to find out whether the master API server responds correctly:
[root@localhost ~]# curl -k https://10.20.0.1:443/api/v1/namespaces/default/secrets/kubernetes-dashboard-csrf
{
"kind": "Status",
"apiVersion": "v1",
"metadata": {
},
"status": "Failure",
"message": "secrets \"kubernetes-dashboard-csrf\" is forbidden: User \"system:anonymous\" cannot get resource \"secrets\" in API group \"\" in the namespace \"default\"",
"reason": "Forbidden",
"details": {
"name": "kubernetes-dashboard-csrf",
"kind": "secrets"
},
"code": 403
}
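Notably, a 403 for `system:anonymous` means the request did reach the apiserver, so the host-to-apiserver path works; the failure in the pod is network-level. To repeat the same request authenticated rather than anonymously, one sketch (the service account name is an assumption taken from the pod name; adjust it from `kubectl get sa`) is:

```shell
# Read the dashboard release's service account token (SA name is
# hypothetical; confirm it with `kubectl get sa`) and retry the
# request with a Bearer token instead of anonymous access.
SECRET=$(kubectl get sa kubernetes-dashboard-1594440918 \
  -o jsonpath='{.secrets[0].name}')
TOKEN=$(kubectl get secret "$SECRET" -o jsonpath='{.data.token}' | base64 -d)
curl -k -H "Authorization: Bearer $TOKEN" \
  https://10.20.0.1:443/api/v1/namespaces/default/secrets/kubernetes-dashboard-csrf
```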
This is the firewalld status on my master node and on k8sslave1:
[root@localhost ~]# systemctl status firewalld
● firewalld.service - firewalld - dynamic firewall daemon
Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
Active: inactive (dead)
Docs: man:firewalld(1)
[root@k8sslave1 ~]# systemctl status firewalld
● firewalld.service - firewalld - dynamic firewall daemon
Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
Active: inactive (dead)
Docs: man:firewalld(1)
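With firewalld inactive on both nodes, another thing worth checking (a diagnostic sketch, not something from the transcript above) is whether kube-proxy has programmed the service NAT rules on k8sslave1, since only the pod on that node fails to reach 10.20.0.1:

```shell
# On k8sslave1: look for the KUBE-SERVICES NAT rule that translates the
# kubernetes service IP (10.20.0.1:443) to the real apiserver endpoint.
iptables -t nat -L KUBE-SERVICES -n | grep 10.20.0.1

# Check the calico-node pod running on k8sslave1 for dataplane errors:
kubectl -n kube-system logs calico-node-6rj58 | tail -n 50
```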
So what is the problem? What should I do to get the dashboard running successfully?