kube-apiserver does not start (CrashLoopBackOff)
0 votes
/ 05 January 2020

I can't get kube-apiserver to start on my single-master cluster. kubelet keeps trying to start the service, but it always ends up in CrashLoopBackOff. I tried running the container manually with the docker run command and got the log below. I don't understand why nothing is listening on port 6443 or 443 when I check with netstat.
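
For reference, this is how I check what is actually listening on the host (a sketch; ss ships with iproute2 on Container Linux, and netstat -tlpn shows the same thing):

sudo ss -tlpn | grep -E ':(443|6443|8080)'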

core@core-01 ~ $ etcdctl get /coreos.com/network/subnets/10.2.41.0-24
{"PublicIP":"10.0.2.11","BackendType":"vxlan","BackendData":{"VtepMAC":"e2:41:48:bc:6e:31"}}

core@core-01 ~ $ etcdctl get /coreos.com/network/config
{ "Network": "10.2.0.0/16", "SubnetLen": 24, "Backend": { "Type": "vxlan", "VNI": 1 } }

core@core-01 ~ $ etcdctl cluster-health
member b12eaa0af14319e0 is healthy: got healthy result from https://10.0.2.11:2379
cluster is healthy


core@core-01 ~ $ journalctl -fu flanneld
-- Logs begin at Sun 2020-01-05 20:09:44 UTC. --
Jan 05 20:30:11 core-01 flannel-wrapper[829]: I0105 20:30:11.451701     829 iptables.go:137] Deleting iptables rule: ! -s 10.2.0.0/16 -d 10.2.0.0/16 -j MASQUERADE
Jan 05 20:30:11 core-01 flannel-wrapper[829]: I0105 20:30:11.455149     829 iptables.go:125] Adding iptables rule: -s 10.2.0.0/16 -d 10.2.0.0/16 -j RETURN
Jan 05 20:30:11 core-01 flannel-wrapper[829]: I0105 20:30:11.464136     829 iptables.go:125] Adding iptables rule: -s 10.2.0.0/16 ! -d 224.0.0.0/4 -j MASQUERADE
Jan 05 20:30:11 core-01 flannel-wrapper[829]: I0105 20:30:11.483193     829 iptables.go:125] Adding iptables rule: ! -s 10.2.0.0/16 -d 10.2.67.0/24 -j RETURN
Jan 05 20:30:11 core-01 flannel-wrapper[829]: I0105 20:30:11.503353     829 iptables.go:125] Adding iptables rule: ! -s 10.2.0.0/16 -d 10.2.0.0/16 -j MASQUERADE
Jan 05 20:30:12 core-01 flannel-wrapper[829]: I0105 20:30:12.178567     829 iptables.go:115] Some iptables rules are missing; deleting and recreating rules
Jan 05 20:30:12 core-01 flannel-wrapper[829]: I0105 20:30:12.178601     829 iptables.go:137] Deleting iptables rule: -s 10.2.0.0/16 -j ACCEPT
Jan 05 20:30:12 core-01 flannel-wrapper[829]: I0105 20:30:12.182925     829 iptables.go:137] Deleting iptables rule: -d 10.2.0.0/16 -j ACCEPT
Jan 05 20:30:12 core-01 flannel-wrapper[829]: I0105 20:30:12.184853     829 iptables.go:125] Adding iptables rule: -s 10.2.0.0/16 -j ACCEPT
Jan 05 20:30:12 core-01 flannel-wrapper[829]: I0105 20:30:12.191388     829 iptables.go:125] Adding iptables rule: -d 10.2.0.0/16 -j ACCEPT


core@core-01 ~ $ journalctl -fu etcd-member
-- Logs begin at Sun 2020-01-05 20:09:44 UTC. --
Jan 05 20:30:01 core-01 etcd-wrapper[724]: 2020-01-05 20:30:01.402265 I | raft: b12eaa0af14319e0 became leader at term 3
Jan 05 20:30:01 core-01 etcd-wrapper[724]: 2020-01-05 20:30:01.402436 I | raft: raft.node: b12eaa0af14319e0 elected leader b12eaa0af14319e0 at term 3
Jan 05 20:30:01 core-01 etcd-wrapper[724]: 2020-01-05 20:30:01.407687 I | etcdserver: published {Name:core-01 ClientURLs:[https://10.0.2.11:2379]} to cluster f42ef6de7357f6b9
Jan 05 20:30:01 core-01 etcd-wrapper[724]: 2020-01-05 20:30:01.409961 I | embed: ready to serve client requests
Jan 05 20:30:01 core-01 etcd-wrapper[724]: 2020-01-05 20:30:01.413929 I | embed: serving client requests on 127.0.0.1:2379
Jan 05 20:30:01 core-01 etcd-wrapper[724]: 2020-01-05 20:30:01.414398 I | embed: ready to serve client requests
Jan 05 20:30:01 core-01 etcd-wrapper[724]: 2020-01-05 20:30:01.414844 I | embed: serving client requests on 10.0.2.11:2379
Jan 05 20:30:01 core-01 etcd-wrapper[724]: 2020-01-05 20:30:01.415087 I | embed: ready to serve client requests
Jan 05 20:30:01 core-01 etcd-wrapper[724]: 2020-01-05 20:30:01.416808 I | embed: serving client requests on 127.0.0.1:4001

core@core-01 ~ $ docker run -v /etc/kubernetes/ssl/:/etc/kubernetes/ssl/ quay.io/coreos/hyperkube:v1.6.1_coreos.0 /hyperkube apiserver --etcd-servers="https://10.0.2.11:2379" --allow-privileged=true  --service-cluster-ip-range="10.3.0.0/24" --secure-port=443 --advertise-address=10.0.2.11 --bind-address=0.0.0.0 --tls-cert-file=/etc/kubernetes/ssl/apiserver-etcd-client.pem --tls-private-key-file=/etc/kubernetes/ssl/apiserver-etcd-client-key.pem --client-ca-file=/etc/kubernetes/ssl/ca.pem --service-account-key-file=/etc/kubernetes/ssl/apiserver-etcd-client-key.pem --runtime-config=extensions/v1beta1/networkpolicies=true --anonymous-auth=true
W0105 20:25:36.153523       1 authentication.go:362] AnonymousAuth is not allowed with the AllowAll authorizer.  Resetting AnonymousAuth to false. You should use a different authorizer
E0105 20:25:37.548076       1 reflector.go:201] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:70: Failed to list *api.Secret: Get https://localhost:443/api/v1/secrets?resourceVersion=0: dial tcp 127.0.0.1:443: getsockopt: connection refused
[restful] 2020/01/05 20:25:37 log.go:30: [restful/swagger] listing is available at https://10.0.2.11:443/swaggerapi/
[restful] 2020/01/05 20:25:37 log.go:30: [restful/swagger] https://10.0.2.11:443/swaggerui/ is mapped to folder /swagger-ui/
I0105 20:25:38.096619       1 serve.go:79] Serving securely on 0.0.0.0:443
I0105 20:25:38.097522       1 serve.go:94] Serving insecurely on 127.0.0.1:8080
I0105 20:26:07.098874       1 trace.go:61] Trace "Create /api/v1/namespaces" (started 2020-01-05 20:25:38.179779805 +0000 UTC):
[40.586µs] [40.586µs] About to convert to expected version
[1.500802ms] [1.460216ms] Conversion done
[1.506436ms] [5.634µs] About to store object in database
"Create /api/v1/namespaces" [28.918989614s] [28.917483178s] END
E0105 20:26:07.100872       1 client_ca_hook.go:58] Timeout: request did not complete within allowed duration
W0105 20:26:36.010336       1 storage_extensions.go:127] third party resource sync failed: the server cannot complete the requested operation at this time, try again later (get thirdpartyresources.extensions)
E0105 20:26:36.381565       1 reflector.go:201] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:70: Failed to list *api.Secret: the server cannot complete the requested operation at this time, try again later (get secrets)
E0105 20:26:36.936321       1 storage_rbac.go:140] unable to initialize clusterroles: the server cannot complete the requested operation at this time, try again later (get clusterroles.rbac.authorization.k8s.io)
E0105 20:27:34.774087       1 storage_rbac.go:140] unable to initialize clusterroles: the server cannot complete the requested operation at this time, try again later (get clusterroles.rbac.authorization.k8s.io)
F0105 20:27:34.774160       1 hooks.go:110] PostStartHook "rbac/bootstrap-roles" failed: unable to initialize roles: timed out waiting for the condition
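
Every failure above is a storage call timing out, so the etcd connection itself seems worth ruling out first: the docker run invocation passes --etcd-servers=https://10.0.2.11:2379 but no --etcd-cafile/--etcd-certfile/--etcd-keyfile flags. A quick connectivity check with the same certificates (a sketch using the cert paths from the command above; etcd exposes a /health endpoint):

curl --cacert /etc/kubernetes/ssl/ca.pem \
     --cert /etc/kubernetes/ssl/apiserver-etcd-client.pem \
     --key /etc/kubernetes/ssl/apiserver-etcd-client-key.pem \
     https://10.0.2.11:2379/health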

core@core-01 ~ $ journalctl -fu kubelet
-- Logs begin at Sun 2020-01-05 20:09:44 UTC. --
Jan 05 20:43:53 core-01 kubelet-wrapper[745]: I0105 20:43:53.245043     745 kuberuntime_manager.go:768] Back-off 5m0s restarting failed container=kube-apiserver pod=kube-apiserver-10.0.2.11_kube-system(c1d216376c7569eb905e9536d6c7bf15)
Jan 05 20:43:53 core-01 kubelet-wrapper[745]: E0105 20:43:53.245448     745 pod_workers.go:186] Error syncing pod c1d216376c7569eb905e9536d6c7bf15 ("kube-apiserver-10.0.2.11_kube-system(c1d216376c7569eb905e9536d6c7bf15)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kube-apiserver pod=kube-apiserver-10.0.2.11_kube-system(c1d216376c7569eb905e9536d6c7bf15)"
Jan 05 20:43:53 core-01 kubelet-wrapper[745]: I0105 20:43:53.426322     745 kubelet_node_status.go:273] Setting node annotation to enable volume controller attach/detach
Jan 05 20:44:03 core-01 kubelet-wrapper[745]: I0105 20:44:03.484748     745 kubelet_node_status.go:273] Setting node annotation to enable volume controller attach/detach
Jan 05 20:44:04 core-01 kubelet-wrapper[745]: I0105 20:44:04.932047     745 kubelet_node_status.go:273] Setting node annotation to enable volume controller attach/detach
Jan 05 20:44:05 core-01 kubelet-wrapper[745]: I0105 20:44:05.246439     745 kuberuntime_manager.go:514] Container {Name:kube-apiserver Image:quay.io/coreos/hyperkube:v1.6.1_coreos.0 Command:[/hyperkube apiserver --bind-address=0.0.0.0 --etcd-servers="https://10.0.2.11:2379" --allow-privileged=true --service-cluster-ip-range="10.3.0.0/24" --secure-port=443 --advertise-address=10.0.2.11 --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota --tls-cert-file=/etc/ssl/certs/apiserver-etcd-client.pem --tls-private-key-file=/etc/ssl/certs/apiserver-etcd-client-key.pem --client-ca-file=/etc/ssl/certs/ca.pem --service-account-key-file=/etc/ssl/certs/apiserver-etcd-client-key.pem --runtime-config=extensions/v1beta1/networkpolicies=true --anonymous-auth=false] Args:[] WorkingDir: Ports:[{Name:https HostPort:443 ContainerPort:443 Protocol:TCP HostIP:} {Name:local HostPort:8080 ContainerPort:8080 Protocol:TCP HostIP:}] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:ssl-certs-kubernetes ReadOnly:true MountPath:/etc/kubernetes/ssl SubPath: MountPropagation:<nil>} {Name:ssl-certs-host ReadOnly:true MountPath:/etc/ssl/certs SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:8080,Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:15,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Jan 05 20:44:05 core-01 kubelet-wrapper[745]: I0105 20:44:05.247913     745 kuberuntime_manager.go:758] checking backoff for container "kube-apiserver" in pod "kube-apiserver-10.0.2.11_kube-system(c1d216376c7569eb905e9536d6c7bf15)"
Jan 05 20:44:05 core-01 kubelet-wrapper[745]: I0105 20:44:05.248683     745 kuberuntime_manager.go:768] Back-off 5m0s restarting failed container=kube-apiserver pod=kube-apiserver-10.0.2.11_kube-system(c1d216376c7569eb905e9536d6c7bf15)
Jan 05 20:44:05 core-01 kubelet-wrapper[745]: E0105 20:44:05.249152     745 pod_workers.go:186] Error syncing pod c1d216376c7569eb905e9536d6c7bf15 ("kube-apiserver-10.0.2.11_kube-system(c1d216376c7569eb905e9536d6c7bf15)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kube-apiserver pod=kube-apiserver-10.0.2.11_kube-system(c1d216376c7569eb905e9536d6c7bf15)"
Jan 05 20:44:13 core-01 kubelet-wrapper[745]: I0105 20:44:13.542958     745 kubelet_node_status.go:273] Setting node annotation to enable volume controller attach/detach
Jan 05 20:44:17 core-01 kubelet-wrapper[745]: I0105 20:44:17.960004     745 kubelet_node_status.go:273] Setting node annotation to enable volume controller attach/detach
Jan 05 20:44:18 core-01 kubelet-wrapper[745]: I0105 20:44:18.309348     745 kuberuntime_manager.go:514] Container {Name:kube-apiserver Image:quay.io/coreos/hyperkube:v1.6.1_coreos.0 Command:[/hyperkube apiserver --bind-address=0.0.0.0 --etcd-servers="https://10.0.2.11:2379" --allow-privileged=true --service-cluster-ip-range="10.3.0.0/24" --secure-port=443 --advertise-address=10.0.2.11 --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota --tls-cert-file=/etc/ssl/certs/apiserver-etcd-client.pem --tls-private-key-file=/etc/ssl/certs/apiserver-etcd-client-key.pem --client-ca-file=/etc/ssl/certs/ca.pem --service-account-key-file=/etc/ssl/certs/apiserver-etcd-client-key.pem --runtime-config=extensions/v1beta1/networkpolicies=true --anonymous-auth=false] Args:[] WorkingDir: Ports:[{Name:https HostPort:443 ContainerPort:443 Protocol:TCP HostIP:} {Name:local HostPort:8080 ContainerPort:8080 Protocol:TCP HostIP:}] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:ssl-certs-kubernetes ReadOnly:true MountPath:/etc/kubernetes/ssl SubPath: MountPropagation:<nil>} {Name:ssl-certs-host ReadOnly:true MountPath:/etc/ssl/certs SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:8080,Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:15,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Jan 05 20:44:18 core-01 kubelet-wrapper[745]: I0105 20:44:18.311411     745 kuberuntime_manager.go:758] checking backoff for container "kube-apiserver" in pod "kube-apiserver-10.0.2.11_kube-system(c1d216376c7569eb905e9536d6c7bf15)"
Jan 05 20:44:18 core-01 kubelet-wrapper[745]: I0105 20:44:18.312089     745 kuberuntime_manager.go:768] Back-off 5m0s restarting failed container=kube-apiserver pod=kube-apiserver-10.0.2.11_kube-system(c1d216376c7569eb905e9536d6c7bf15)
Jan 05 20:44:18 core-01 kubelet-wrapper[745]: E0105 20:44:18.313299     745 pod_workers.go:186] Error syncing pod c1d216376c7569eb905e9536d6c7bf15 ("kube-apiserver-10.0.2.11_kube-system(c1d216376c7569eb905e9536d6c7bf15)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kube-apiserver pod=kube-apiserver-10.0.2.11_kube-system(c1d216376c7569eb905e9536d6c7bf15)"
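
The liveness probe in the manifest above polls /healthz on the insecure port (127.0.0.1:8080). While the container is briefly up, the same check can be run by hand to see whether the API server ever reports healthy before it exits (a sketch):

curl -sf http://127.0.0.1:8080/healthz && echo healthy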

EDIT: upgrading the version does not solve the problem:

core@core-01 ~ $ docker run -v /etc/kubernetes/ssl/:/etc/kubernetes/ssl/ quay.io/coreos/hyperkube:v1.9.6_coreos.2 /hyperkube apiserver --etcd-servers="https://10.0.2.11:2379" --allow-privileged=true  --service-cluster-ip-range="10.3.0.0/24" --secure-port=443 --advertise-address=10.0.2.11 --insecure-bind-address=0.0.0.0 --tls-cert-file=/etc/kubernetes/ssl/apiserver-etcd-client.pem --tls-private-key-file=/etc/kubernetes/ssl/apiserver-etcd-client-key.pem --client-ca-file=/etc/kubernetes/ssl/ca.pem --service-account-key-file=/etc/kubernetes/ssl/apiserver-etcd-client-key.pem --runtime-config=extensions/v1beta1/networkpolicies=true --anonymous-auth=true      
I0106 23:10:42.463150       1 server.go:121] Version: v1.9.6+coreos.2
W0106 23:10:42.463629       1 authentication.go:378] AnonymousAuth is not allowed with the AllowAll authorizer.  Resetting AnonymousAuth to false. You should use a different authorizer
I0106 23:10:42.699897       1 master.go:225] Using reconciler: master-count
W0106 23:10:42.751993       1 genericapiserver.go:342] Skipping API batch/v2alpha1 because it has no resources.
W0106 23:10:42.795606       1 genericapiserver.go:342] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W0106 23:10:42.796803       1 genericapiserver.go:342] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W0106 23:10:42.858587       1 genericapiserver.go:342] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
[restful] 2020/01/06 23:10:43 log.go:33: [restful/swagger] listing is available at https://10.0.2.11:443/swaggerapi
[restful] 2020/01/06 23:10:43 log.go:33: [restful/swagger] https://10.0.2.11:443/swaggerui/ is mapped to folder /swagger-ui/
[restful] 2020/01/06 23:10:43 log.go:33: [restful/swagger] listing is available at https://10.0.2.11:443/swaggerapi
[restful] 2020/01/06 23:10:43 log.go:33: [restful/swagger] https://10.0.2.11:443/swaggerui/ is mapped to folder /swagger-ui/
I0106 23:10:49.341498       1 insecure_handler.go:121] Serving insecurely on 0.0.0.0:8080
I0106 23:10:49.343053       1 serve.go:89] Serving securely on [::]:443
I0106 23:10:49.343268       1 apiservice_controller.go:112] Starting APIServiceRegistrationController
I0106 23:10:49.343391       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I0106 23:10:49.346030       1 crd_finalizer.go:242] Starting CRDFinalizer
I0106 23:10:49.346531       1 available_controller.go:262] Starting AvailableConditionController
I0106 23:10:49.346710       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I0106 23:10:49.346838       1 controller.go:84] Starting OpenAPI AggregationController
I0106 23:10:49.348542       1 crdregistration_controller.go:110] Starting crd-autoregister controller
I0106 23:10:49.348697       1 controller_utils.go:1019] Waiting for caches to sync for crd-autoregister controller
I0106 23:10:49.349227       1 customresource_discovery_controller.go:152] Starting DiscoveryController
I0106 23:10:49.349338       1 naming_controller.go:274] Starting NamingConditionController
E0106 23:11:49.368395       1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/apiextensions-apiserver/pkg/client/informers/internalversion/factory.go:73: Failed to list *apiextensions.CustomResourceDefinition: the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io)
E0106 23:11:49.370175       1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1.Service: the server was unable to return a response in the time allotted, but may still be processing the request (get services)
E0106 23:11:49.370252       1 reflector.go:205] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:85: Failed to list *core.Secret: the server was unable to return a response in the time allotted, but may still be processing the request (get secrets)
E0106 23:11:49.370287       1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1.Endpoints: the server was unable to return a response in the time allotted, but may still be processing the request (get endpoints)
E0106 23:11:49.371136       1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/kube-aggregator/pkg/client/informers/internalversion/factory.go:73: Failed to list *apiregistration.APIService: the server was unable to return a response in the time allotted, but may still be processing the request (get apiservices.apiregistration.k8s.io)
I0106 23:12:20.349799       1 trace.go:76] Trace[1298498081]: "Create /api/v1/namespaces" (started: 2020-01-06 23:11:50.348500756 +0000 UTC m=+68.076132988) (total time: 30.001224437s):
Trace[1298498081]: [30.001224437s] [30.000912177s] END
E0106 23:12:20.350998       1 client_ca_hook.go:78] Timeout: request did not complete within allowed duration
E0106 23:12:50.369445       1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/apiextensions-apiserver/pkg/client/informers/internalversion/factory.go:73: Failed to list *apiextensions.CustomResourceDefinition: the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io)
E0106 23:12:50.371400       1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1.Service: the server was unable to return a response in the time allotted, but may still be processing the request (get services)
E0106 23:12:50.372842       1 reflector.go:205] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:85: Failed to list *core.Secret: the server was unable to return a response in the time allotted, but may still be processing the request (get secrets)
E0106 23:12:50.373942       1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1.Endpoints: the server was unable to return a response in the time allotted, but may still be processing the request (get endpoints)
E0106 23:12:50.375319       1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/kube-aggregator/pkg/client/informers/internalversion/factory.go:73: Failed to list *apiregistration.APIService: the server was unable to return a response in the time allotted, but may still be processing the request (get apiservices.apiregistration.k8s.io)
I0106 23:13:50.353033       1 trace.go:76] Trace[2019727887]: "Create /api/v1/namespaces" (started: 2020-01-06 23:13:20.352630686 +0000 UTC m=+158.080262917) (total time: 30.000382718s):
Trace[2019727887]: [30.000382718s] [30.000344689s] END
E0106 23:13:50.353369       1 client_ca_hook.go:78] Timeout: request did not complete within allowed duration
F0106 23:13:50.353386       1 hooks.go:188] PostStartHook "ca-registration" failed: unable to initialize client CA configmap: timed out waiting for the condition
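
Both versions fail the same way: every request against storage times out until a PostStartHook gives up. Since these releases default to the etcd v3 storage backend, it may also be worth confirming that the v3 API answers over TLS (a sketch, assuming an etcd v3 etcdctl binary is available and using the cert paths from above):

ETCDCTL_API=3 etcdctl --endpoints=https://10.0.2.11:2379 \
  --cacert=/etc/kubernetes/ssl/ca.pem \
  --cert=/etc/kubernetes/ssl/apiserver-etcd-client.pem \
  --key=/etc/kubernetes/ssl/apiserver-etcd-client-key.pem \
  endpoint health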

1 Answer

0 votes
/ 02 May 2020

Try cleaning up the node:

#!/bin/sh
# Remove all containers and any volumes no longer referenced by one
docker rm -f $(docker ps -qa)
docker volume prune -f
# Wipe leftover cluster state: etcd data, Kubernetes configs/certs, CNI plugins and state
cleanupdirs="/var/lib/etcd /etc/kubernetes /etc/cni /opt/cni /var/lib/cni /var/run/calico /opt/rke"
for dir in $cleanupdirs; do
  echo "Removing $dir"
  rm -rf "$dir"
done
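
After wiping that state, the container runtime and kubelet typically need a restart (or just reboot the node) so nothing keeps using the old data, for example:

sudo systemctl restart docker kubelet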

Source: https://gist.github.com/superseb/2cf186726807a012af59a027cb41270d
