Minikube - Eclipse Ditto - error when retrieving current configuration
0 votes
/ 07 January 2020

minikube version: v1.6.1, Linux kernel: 5.4.7-200.fc31.x86_64, OS: Fedora 31

Is my problem related to Ditto itself, or does it arise mainly from the current Kubernetes/Minikube configuration on Fedora? (Eclipse Ditto works fine for me with Docker on Ubuntu and with Minikube (+ VirtualBox) on Windows 10.) So I assume something is wrong with the configuration, which the log below already points to.

Any hint would be helpful, because everything worked for a while and then the pods never returned to a healthy state. Thanks! Is there a better/standard way to set up Minikube on Fedora, or should I switch to Ubuntu?

[EDIT: 08/01/2020] Switched to Ubuntu 18.04, and Eclipse Ditto works fine -> the problem seems to be OS-related. One of the known issues is described here: docker-ce on Fedora 31. I applied the workaround mentioned there and ended up in that issue.
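For reference, the workaround commonly cited for docker-ce on Fedora 31 boils down to reverting the kernel to cgroups v1, since Fedora 31 switched to cgroups v2, which Docker did not yet support at the time. This is only a sketch of that widely described step, not something verified on the asker's exact setup:

```shell
# Sketch of the usual Fedora 31 workaround: boot the kernel with the
# legacy (v1) cgroup hierarchy so docker-ce can run. Requires a reboot.
sudo dnf install -y grubby
sudo grubby --update-kernel=ALL --args="systemd.unified_cgroup_hierarchy=0"
sudo reboot
```

As the edit above notes, applying this on Fedora 31 got Docker running but led to the instability described in this question.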

    minikube logs
==> Docker <==
-- Logs begin at Tue 2020-01-07 15:46:55 UTC, end at Tue 2020-01-07 16:10:22 UTC. --
Jan 07 15:56:20 minikube dockerd[2094]: time="2020-01-07T15:56:20.305385397Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/7230ee36971dba2504a65277e3f007dd8d305fe7f98194e7fb9a0b8ed0cba1d9/shim.sock" debug=false pid=18138
Jan 07 16:09:54 minikube dockerd[2094]: time="2020-01-07T16:09:54.572716478Z" level=info msg="shim reaped" id=670a7905ca992f2db5e1ffb159fd2d461c2223202ca5c6a37128958a4dd366bc
Jan 07 16:09:54 minikube dockerd[2094]: time="2020-01-07T16:09:54.583052816Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jan 07 16:09:54 minikube dockerd[2094]: time="2020-01-07T16:09:54.583199941Z" level=warning msg="670a7905ca992f2db5e1ffb159fd2d461c2223202ca5c6a37128958a4dd366bc cleanup: failed to unmount IPC: umount /var/lib/docker/containers/670a7905ca992f2db5e1ffb159fd2d461c2223202ca5c6a37128958a4dd366bc/mounts/shm, flags: 0x2: no such file or directory"

==> container status <==
CONTAINER           IMAGE               CREATED              STATE               NAME                        ATTEMPT             POD ID
670a7905ca992       dd025cdfe837e       29 seconds ago       Exited              nginx                       37                  fd7b4de4d261f
a0ed862c9c678       9a789e5a74f16       57 seconds ago       Running             concierge                   24                  0fa29aa0c9ecb
301ee5bf9c649       13bf8eb55ce59       About a minute ago   Running             things-search               32                  45379aaaeff0d
1641cfc0256f7       e00ee548beb49       About a minute ago   Running             gateway                     38                  07c2e4ccca714
0785730067b44       8ee57637d7b46       About a minute ago   Exited              things                      20                  8cd8aae50a35a
e8dc171f244d6       29ce4d42b32ad       2 minutes ago        Exited              connectivity                39                  7ed35662e8a62
473d7bb1c5a17       e00ee548beb49       7 minutes ago        Exited              gateway                     37                  07c2e4ccca714
b7202d2025d57       9a789e5a74f16       12 minutes ago       Exited              concierge                   23                  0fa29aa0c9ecb
4b08612bd338f       b8ffefa71d633       20 minutes ago       Running             policies                    19                  3f55bb4089292
0344ba00c473f       13bf8eb55ce59       21 minutes ago       Exited              things-search               31                  45379aaaeff0d
f1be638b29c3d       4689081edb103       21 minutes ago       Running             storage-provisioner         4                   48e5ab82b54ae
4dc299a51a619       4c651c6b8cfe8       21 minutes ago       Running             swagger-ui                  2                   9b85f1c9a63ae
21fd3467c630f       b8ffefa71d633       21 minutes ago       Exited              policies                    18                  3f55bb4089292
5f7d77b969b5e       eb516548c180f       22 minutes ago       Running             coredns                     2                   027f7172d1744
7fefbf987226e       3745fa14a0ed4       22 minutes ago       Running             mongodb                     2                   84dfd23e59e07
bc91f1744e617       eb516548c180f       22 minutes ago       Running             coredns                     2                   fab53198864f7
f886d8648ee9a       eb51a35975256       22 minutes ago       Running             kubernetes-dashboard        2                   12b882a728a14
c8fd459ce905d       3b08661dc379d       22 minutes ago       Running             dashboard-metrics-scraper   2                   ec6fffbcccaed
d0ba87dc92ba3       4689081edb103       22 minutes ago       Exited              storage-provisioner         3                   48e5ab82b54ae
fc2f301546800       89a062da739d3       22 minutes ago       Running             kube-proxy                  2                   64cffcd82f59d
1cfd3a119af0b       d75082f1d1216       22 minutes ago       Running             kube-controller-manager     8                   a90a898f5e2fb
f895e1b3f8a6b       2c4adeb21b4ff       22 minutes ago       Running             etcd                        2                   f27d9d82d9bc3
955e5cd639b42       b0b3c4c404da5       22 minutes ago       Running             kube-scheduler              11                  b1a3f2e67a22f
b191bb1cfdfda       68c3eb07bfc3f       22 minutes ago       Running             kube-apiserver              2                   1de206b3c1e3d
314a185606d77       bd12a212f9dcb       22 minutes ago       Running             kube-addon-manager          2                   7f7246c4b23e6
3715b5c9d65ec       d75082f1d1216       2 hours ago          Exited              kube-controller-manager     7                   f0cc3af288af5
6bdc27e622594       b0b3c4c404da5       2 hours ago          Exited              kube-scheduler              10                  0d7d3b557bb1e
38f75b6b77713       4c651c6b8cfe8       3 hours ago          Exited              swagger-ui                  1                   6c50618b47045
40d0ba5d6b77f       3745fa14a0ed4       3 hours ago          Exited              mongodb                     1                   9c850224884c3
7365be00dcd08       eb516548c180f       3 hours ago          Exited              coredns                     1                   c5a20f793bb99
3be0767544182       eb51a35975256       3 hours ago          Exited              kubernetes-dashboard        1                   0df8da0e7d8dd
fdd59d7490ad3       3b08661dc379d       3 hours ago          Exited              dashboard-metrics-scraper   1                   319c9bab5f097
f98991c713c7a       eb516548c180f       3 hours ago          Exited              coredns                     1                   756f42a5932d0
d8de503467ec0       89a062da739d3       3 hours ago          Exited              kube-proxy                  1                   bd418281f2aae
5011c9f1351e1       68c3eb07bfc3f       3 hours ago          Exited              kube-apiserver              1                   065c6ba9e4313
081548d94955d       2c4adeb21b4ff       3 hours ago          Exited              etcd                        1                   8b48e96af5702
6289adc1f001a       bd12a212f9dcb       3 hours ago          Exited              kube-addon-manager          1                   c2e437cf3c772

==> coredns ["5f7d77b969b5"] <==
.:53
2020-01-07T15:48:25.200Z [INFO] CoreDNS-1.3.1
2020-01-07T15:48:25.200Z [INFO] linux/amd64, go1.11.4, 6b56a9c
CoreDNS-1.3.1
linux/amd64, go1.11.4, 6b56a9c
2020-01-07T15:48:25.200Z [INFO] plugin/reload: Running configuration MD5 = 5d5369fbc12f985709b924e721217843

==> coredns ["7365be00dcd0"] <==
.:53
2020-01-07T13:08:12.695Z [INFO] CoreDNS-1.3.1
2020-01-07T13:08:12.695Z [INFO] linux/amd64, go1.11.4, 6b56a9c
CoreDNS-1.3.1
linux/amd64, go1.11.4, 6b56a9c
2020-01-07T13:08:12.695Z [INFO] plugin/reload: Running configuration MD5 = 5d5369fbc12f985709b924e721217843
[INFO] SIGTERM: Shutting down servers then terminating

==> coredns ["bc91f1744e61"] <==
.:53
2020-01-07T15:48:19.004Z [INFO] CoreDNS-1.3.1
2020-01-07T15:48:19.004Z [INFO] linux/amd64, go1.11.4, 6b56a9c
CoreDNS-1.3.1
linux/amd64, go1.11.4, 6b56a9c
2020-01-07T15:48:19.004Z [INFO] plugin/reload: Running configuration MD5 = 5d5369fbc12f985709b924e721217843

==> coredns ["f98991c713c7"] <==
.:53
2020-01-07T13:08:09.122Z [INFO] CoreDNS-1.3.1
2020-01-07T13:08:09.122Z [INFO] linux/amd64, go1.11.4, 6b56a9c
CoreDNS-1.3.1
linux/amd64, go1.11.4, 6b56a9c
2020-01-07T13:08:09.122Z [INFO] plugin/reload: Running configuration MD5 = 5d5369fbc12f985709b924e721217843
E0107 14:48:39.023358       1 reflector.go:251] github.com/coredns/coredns/plugin/kubernetes/controller.go:317: Failed to watch *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?resourceVersion=11213&timeout=7m6s&timeoutSeconds=426&watch=true: dial tcp 10.96.0.1:443: connect: connection refused
E0107 14:48:39.023358       1 reflector.go:251] github.com/coredns/coredns/plugin/kubernetes/controller.go:317: Failed to watch *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?resourceVersion=11213&timeout=7m6s&timeoutSeconds=426&watch=true: dial tcp 10.96.0.1:443: connect: connection refused
log: exiting because of error: log: cannot create log: open /tmp/coredns.coredns-5c98db65d4-6gq9v.unknownuser.log.ERROR.20200107-144839.1: no such file or directory

==> dmesg <==
[  +0.000000] Total swap = 0kB
[  +0.000000] 610426 pages RAM
[  +0.000001] 0 pages HighMem/MovableOnly
[  +0.000000] 16898 pages reserved
[  +0.000127] Out of memory: Kill process 20291 (java) score 1083 or sacrifice child
[  +0.000059] Killed process 20291 (java) total-vm:4641464kB, anon-rss:200624kB, file-rss:0kB, shmem-rss:0kB
[ +14.922829] coredns invoked oom-killer: gfp_mask=0x6200ca(GFP_HIGHUSER_MOVABLE), nodemask=(null), order=0, oom_score_adj=-998
[  +0.000005] CPU: 1 PID: 5463 Comm: coredns Tainted: G           O      4.19.81 #1
[  +0.000001] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS ?-20190727_073836-buildvm-ppc64le-16.ppc.fedoraproject.org-3.fc31 04/01/2014
[  +0.000000] Call Trace:
[  +0.000006]  dump_stack+0x5c/0x7b
[  +0.000002]  dump_header+0x66/0x28e
[  +0.000002]  oom_kill_process+0x251/0x270
[  +0.000001]  ? oom_badness+0xdc/0x130
[  +0.000001]  out_of_memory+0x10b/0x4b0
[  +0.000002]  __alloc_pages_slowpath+0x9c9/0xd10
[  +0.000002]  __alloc_pages_nodemask+0x27b/0x2a0
[  +0.000001]  filemap_fault+0x1eb/0x5f0
[  +0.000003]  ? alloc_set_pte+0xf3/0x380
[  +0.000002]  ext4_filemap_fault+0x27/0x36
[  +0.000001]  __do_fault+0x2b/0x90
[  +0.000002]  __handle_mm_fault+0x7f1/0xc30
[  +0.000001]  ? __switch_to_asm+0x35/0x70
[  +0.000001]  ? __switch_to_asm+0x41/0x70
[  +0.000002]  handle_mm_fault+0xd7/0x230
[  +0.000002]  __do_page_fault+0x23e/0x4c0
[  +0.000001]  ? async_page_fault+0x8/0x30
[  +0.000001]  async_page_fault+0x1e/0x30
[  +0.000002] RIP: 0033:0x40e4ee
[  +0.000006] Code: Bad RIP value.
[  +0.000001] RSP: 002b:000000c000011cf8 EFLAGS: 00010246
[  +0.000001] RAX: 000000c0000fc160 RBX: 000000c000011dd8 RCX: 000000c000504300
[  +0.000000] RDX: 000000c000300c60 RSI: 00000000017ef200 RDI: 00000000016f1c00
[  +0.000001] RBP: 000000c000011d08 R08: 0000000000000000 R09: fffffffffffffff5
[  +0.000000] R10: 000000c000507fc0 R11: 0000000000000001 R12: 000000c000011f78
[  +0.000001] R13: 000000000000000a R14: 0000000000000000 R15: 000000c000096180
[  +0.000008] Mem-Info:
[  +0.000002] active_anon:459605 inactive_anon:80267 isolated_anon:0
               active_file:98 inactive_file:167 isolated_file:0
               unevictable:5 dirty:1 writeback:0 unstable:0
               slab_reclaimable:11807 slab_unreclaimable:22162
               mapped:22401 shmem:122627 pagetables:2913 bounce:0
               free:3615 free_pcp:1206 free_cma:0
[  +0.000002] Node 0 active_anon:1838420kB inactive_anon:321068kB active_file:392kB inactive_file:668kB unevictable:20kB isolated(anon):0kB isolated(file):0kB mapped:89604kB dirty:4kB writeback:0kB shmem:490508kB writeback_tmp:0kB unstable:0kB all_unreclaimable? no
[  +0.000000] Node 0 DMA free:8664kB min:40kB low:52kB high:64kB active_anon:5212kB inactive_anon:0kB active_file:0kB inactive_file:4kB unevictable:0kB writepending:0kB present:15992kB managed:15908kB mlocked:0kB kernel_stack:52kB pagetables:20kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
[  +0.000003] lowmem_reserve[]: 0 2157 2157 2157
[  +0.000002] Node 0 DMA32 free:5796kB min:5920kB low:8128kB high:10336kB active_anon:1833208kB inactive_anon:321068kB active_file:948kB inactive_file:1248kB unevictable:20kB writepending:4kB present:2425712kB managed:2358204kB mlocked:20kB kernel_stack:17196kB pagetables:11632kB bounce:0kB free_pcp:4824kB local_pcp:1248kB free_cma:0kB
[  +0.000002] lowmem_reserve[]: 0 0 0 0
[  +0.000001] Node 0 DMA: 3*4kB (E) 12*8kB (ME) 11*16kB (ME) 12*32kB (UME) 11*64kB (UME) 5*128kB (UME) 4*256kB (UME) 3*512kB (UME) 2*1024kB (UM) 1*2048kB (M) 0*4096kB = 8668kB
[  +0.000008] Node 0 DMA32: 364*4kB (UME) 263*8kB (UME) 92*16kB (UM) 61*32kB (M) 0*64kB 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 6984kB
[  +0.000006] 122843 total pagecache pages
[  +0.000001] 0 pages in swap cache
[  +0.000001] Swap cache stats: add 0, delete 0, find 0/0
[  +0.000000] Free swap  = 0kB
[  +0.000000] Total swap = 0kB
[  +0.000001] 610426 pages RAM
[  +0.000000] 0 pages HighMem/MovableOnly
[  +0.000001] 16898 pages reserved
[  +0.000105] Out of memory: Kill process 1536 (java) score 1084 or sacrifice child
[  +0.000050] Killed process 1536 (java) total-vm:4657780kB, anon-rss:201316kB, file-rss:0kB, shmem-rss:0kB

==> kernel <==
 16:10:22 up 23 min,  0 users,  load average: 10.07, 6.21, 4.89
Linux minikube 4.19.81 #1 SMP Tue Dec 10 16:09:50 PST 2019 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2019.02.7"

==> kube-addon-manager ["314a185606d7"] <==
error: no objects passed to apply
error: no objects passed to apply
deployment.apps/dashboard-metrics-scraper unchanged
deployment.apps/kubernetes-dashboard unchanged
namespace/kubernetes-dashboard unchanged
role.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
serviceaccount/kubernetes-dashboard unchanged
secret/kubernetes-dashboard-certs unchanged
secret/kubernetes-dashboard-csrf unchanged
secret/kubernetes-dashboard-key-holder unchanged
service/kubernetes-dashboard unchanged
service/dashboard-metrics-scraper unchanged
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2020-01-07T16:10:11+00:00 ==
INFO: Leader election disabled.
INFO: == Kubernetes addon ensure completed at 2020-01-07T16:10:11+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
configmap/kubernetes-dashboard-settings unchanged
deployment.apps/dashboard-metrics-scraper unchanged
deployment.apps/kubernetes-dashboard unchanged
namespace/kubernetes-dashboard unchanged
role.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
serviceaccount/kubernetes-dashboard unchanged
secret/kubernetes-dashboard-certs unchanged
secret/kubernetes-dashboard-csrf unchanged
secret/kubernetes-dashboard-key-holder unchanged
service/kubernetes-dashboard unchanged
service/dashboard-metrics-scraper unchanged
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2020-01-07T16:10:16+00:00 ==
INFO: Leader election disabled.
INFO: == Kubernetes addon ensure completed at 2020-01-07T16:10:17+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
error: no objects passed to apply
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
configmap/kubernetes-dashboard-settings unchanged
deployment.apps/dashboard-metrics-scraper unchanged
deployment.apps/kubernetes-dashboard unchanged
namespace/kubernetes-dashboard unchanged
role.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
serviceaccount/kubernetes-dashboard unchanged
secret/kubernetes-dashboard-certs unchanged
secret/kubernetes-dashboard-csrf unchanged
secret/kubernetes-dashboard-key-holder unchanged
service/kubernetes-dashboard unchanged
service/dashboard-metrics-scraper unchanged
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2020-01-07T16:10:21+00:00 ==
INFO: Leader election disabled.
INFO: == Kubernetes addon ensure completed at 2020-01-07T16:10:22+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==

==> kube-addon-manager ["6289adc1f001"] <==
from server for: "/etc/kubernetes/addons/dashboard-ns.yaml": Get https://localhost:8443/api/v1/namespaces/kubernetes-dashboard: dial tcp 127.0.0.1:8443: connect: connection refused
error when retrieving current configuration of:
Resource: "rbac.authorization.k8s.io/v1, Resource=roles", GroupVersionKind: 
from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": Get https://localhost:8443/api/v1/namespaces/kube-system/pods/storage-provisioner: dial tcp 127.0.0.1:8443: connect: connection refused
INFO: == Kubernetes addon reconcile completed at 2020-01-07T14:48:42+00:00 ==
INFO: Leader election disabled.
The connection to the server localhost:8443 was refused - did you specify the right host or port?
INFO: == Kubernetes addon ensure completed at 2020-01-07T14:48:47+00:00 ==
INFO: == Reconciling with deprecated label ==
The connection to the server localhost:8443 was refused - did you specify the right host or port?
INFO: == Reconciling with addon-manager label ==
The connection to the server localhost:8443 was refused - did you specify the right host or port?
INFO: == Kubernetes addon reconcile completed at 2020-01-07T14:48:47+00:00 ==

==> kube-apiserver ["5011c9f1351e"] <==
W0107 14:48:47.316073       1 clientconn.go:1251] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...

I0107 15:48:32.205618       1 garbagecollector.go:137] Garbage collector: all resource monitors have synced. Proceeding to collect garbage

==> kube-controller-manager ["3715b5c9d65e"] <==
I0107 14:34:53.688273       1 controllermanager.go:532] Started "bootstrapsigner"
I0107 14:34:53.688423       1 controller_utils.go:1029] Waiting for caches to sync for bootstrap_signer controller

1 Answer

3 votes
/ 08 January 2020

My goal was to run Eclipse Ditto on Minikube as described here. I switched to Ubuntu 18.04 and VirtualBox (as Eclipse Ditto recommends), and Eclipse Ditto now works fine -> the problem appears to be OS-related. One of the known issues is described here: docker-ce on Fedora 31. I applied the workaround mentioned there and ended up in the issue/question above, where Minikube appears not to run stably because of the recent virtualization changes. See also CGroupsV2 on Fedora 31. So switching the operating system solved my problem. My conclusion is that Fedora currently does not properly support the combination of Minikube on KVM2, so if you are free to choose the OS, I recommend avoiding Fedora as a Minikube host.
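Independent of the OS choice, note that the dmesg section of the log shows the OOM killer repeatedly terminating java processes (Ditto's microservices run on the JVM), and the VM reports 0 kB of swap. Giving the Minikube VM more memory and CPUs reduces that pressure. The flags below are real minikube flags; the values are illustrative and should be sized to the host:

```shell
# Sketch: recreate the Minikube VM with more resources so the JVM-based
# Ditto services are less likely to be OOM-killed. Values are examples.
minikube delete
minikube start --vm-driver=virtualbox --memory=8192 --cpus=4
```

This does not fix the Fedora 31 cgroups issue itself, but on a working driver it addresses the "Out of memory: Kill process (java)" entries visible in the dmesg output above.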

...