I have been struggling with the error below for the last day or so. Any ideas would be appreciated.
Here is my configuration:
Installation:
ISTIO_VERSION=istio-1.2.6
GATEWAYS=gateways
kubectl create ns istio-system
helm install $ISTIO_VERSION/install/kubernetes/helm/istio-init \
--name istio-init \
--namespace istio-system
helm install $ISTIO_VERSION/install/kubernetes/helm/istio \
--name istio \
--namespace istio-system \
--values $GATEWAYS/values.yaml
The $GATEWAYS/values.yaml used above is istio/install/kubernetes/helm/istio/values-istio-demo.yaml.
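For reference, the istio-init chart only registers the CRDs, so the main chart should not be installed until that job has finished. The commands below are an illustration of how that (and the values actually applied to the istio release) can be checked; they are not part of my original steps:

kubectl get crds | grep 'istio.io' | wc -l   # wait until this matches the CRD count listed in the Istio 1.2 install docs
helm get values istio                        # shows the user-supplied values the istio release was installed with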
The node looks fine:
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE
--------- ---- ------------ ---------- --------------- ------------- ---
flight dragon-dc789456b-tlbqd 100m (5%) 2 (103%) 128Mi (5%) 128Mi (5%) 7d3h
istio-system istio-citadel-b676479fb-pncst 10m (0%) 0 (0%) 0 (0%) 0 (0%) 8m46s
istio-system istio-egressgateway-9d45cfb7d-l5g6t 10m (0%) 2 (103%) 40Mi (1%) 256Mi (11%) 8m47s
istio-system istio-galley-5694ff64dc-6n57h 10m (0%) 0 (0%) 0 (0%) 0 (0%) 8m47s
istio-system istio-ingressgateway-6849cf65c8-xm2ng 10m (0%) 2 (103%) 40Mi (1%) 1Gi (47%) 8m47s
istio-system istio-pilot-5c5bf5c9bc-xx8zb 20m (1%) 2 (103%) 140Mi (6%) 1Gi (47%) 8m46s
istio-system istio-policy-8fbf8b499-nvwsr 20m (1%) 2 (103%) 140Mi (6%) 1Gi (47%) 8m46s
istio-system istio-sidecar-injector-56cb646f8d-x8gfx 10m (0%) 0 (0%) 0 (0%) 0 (0%) 8m46s
istio-system prometheus-776fdf7479-cwddf 10m (0%) 0 (0%) 0 (0%) 0 (0%) 8m46s
kube-system coredns-745c97fd79-4bflf 100m (5%) 0 (0%) 70Mi (3%) 170Mi (7%) 8d
kube-system coredns-745c97fd79-w7mvr 100m (5%) 0 (0%) 70Mi (3%) 170Mi (7%) 8d
kube-system coredns-autoscaler-657d77ffbf-6qt4q 20m (1%) 0 (0%) 10Mi (0%) 0 (0%) 8d
kube-system kube-proxy-kbltl 100m (5%) 0 (0%) 0 (0%) 0 (0%) 3d9h
kube-system kubernetes-dashboard-6f697bd9f5-4dpm9 100m (5%) 100m (5%) 50Mi (2%) 500Mi (23%) 8d
kube-system metrics-server-58699455bc-szmhm 0 (0%) 0 (0%) 0 (0%) 0 (0%) 8d
kube-system tiller-deploy-57f498469-4gk9m 0 (0%) 0 (0%) 0 (0%) 0 (0%) 8d
kube-system tunnelfront-d754df55-ngwqs 10m (0%) 0 (0%) 64Mi (2%) 0 (0%) 8d
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 630m (32%) 10100m (523%)
memory 752Mi (34%) 4296Mi (199%)
ephemeral-storage 0 (0%) 0 (0%)
attachable-volumes-azure-disk 0 0
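The listing above comes from describing the node (illustrative command, using the node name that appears in the events below):

kubectl describe node aks-nodepool1-20283962-1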
Error:
istio-system istio-policy-8fbf8b499-nvwsr 1/2 CrashLoopBackOff 6 6m5s
Warning Unhealthy 9m38s (x9 over 10m) kubelet, aks-nodepool1-20283962-1 Liveness probe failed: Get http://10.244.1.237:15014/version: dial tcp 10.244.1.237:15014: connect: connection refused
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 10m default-scheduler Successfully assigned istio-system/istio-policy-8fbf8b499-nvwsr to aks-nodepool1-20283962-1
Warning FailedMount 10m kubelet, aks-nodepool1-20283962-1 MountVolume.SetUp failed for volume "istio-mixer-service-account-token-sdhhh" : couldn't propagate object cache: timed out waiting for the condition
Warning FailedMount 10m kubelet, aks-nodepool1-20283962-1 MountVolume.SetUp failed for volume "policy-adapter-secret" : couldn't propagate object cache: timed out waiting for the condition
Normal Pulled 10m kubelet, aks-nodepool1-20283962-1 Container image "docker.io/istio/proxyv2:1.2.6" already present on machine
Normal Created 10m kubelet, aks-nodepool1-20283962-1 Created container istio-proxy
Normal Started 10m kubelet, aks-nodepool1-20283962-1 Started container istio-proxy
Normal Killing 9m58s (x2 over 10m) kubelet, aks-nodepool1-20283962-1 Container mixer failed liveness probe, will be restarted
Normal Pulled 9m57s (x3 over 10m) kubelet, aks-nodepool1-20283962-1 Container image "docker.io/istio/mixer:1.2.6" already present on machine
Normal Started 9m57s (x3 over 10m) kubelet, aks-nodepool1-20283962-1 Started container mixer
Normal Created 9m57s (x3 over 10m) kubelet, aks-nodepool1-20283962-1 Created container mixer
Warning Unhealthy 9m38s (x9 over 10m) kubelet, aks-nodepool1-20283962-1 Liveness probe failed: Get http://10.244.1.237:15014/version: dial tcp 10.244.1.237:15014: connect: connection refused
Warning BackOff 5m36s (x17 over 8m57s) kubelet, aks-nodepool1-20283962-1 Back-off restarting failed container
Edit:
Adding the container logs from the failing pod. The commands used to pull them are shown below for reference (pod name as in the events above):
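kubectl -n istio-system logs istio-policy-8fbf8b499-nvwsr -c mixer
kubectl -n istio-system logs istio-policy-8fbf8b499-nvwsr -c istio-proxy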
Mixer:
2019-09-30T07:11:07.621303Z info mcp Requesting following collections:
2019-09-30T07:11:07.621341Z info mcp [0] istio/policy/v1beta1/rules
2019-09-30T07:11:07.621352Z info mcp [1] istio/config/v1alpha2/templates
2019-09-30T07:11:07.621355Z info mcp [2] istio/config/v1alpha2/adapters
2019-09-30T07:11:07.621358Z info mcp [3] istio/policy/v1beta1/attributemanifests
2019-09-30T07:11:07.621366Z info mcp [4] istio/policy/v1beta1/handlers
2019-09-30T07:11:07.621369Z info mcp [5] istio/policy/v1beta1/instances
2019-09-30T07:11:07.621382Z info parsed scheme: ""
2019-09-30T07:11:07.621401Z info scheme "" not registered, fallback to default scheme
2019-09-30T07:11:07.621415Z info ccResolverWrapper: sending update to cc: {[{istio-galley.istio-system.svc:9901 0 <nil>}] <nil>}
2019-09-30T07:11:07.621420Z info ClientConn switching balancer to "pick_first"
2019-09-30T07:11:07.621491Z info Awaiting for config store sync...
2019-09-30T07:11:07.621555Z info pickfirstBalancer: HandleSubConnStateChange: 0xc000bac360, CONNECTING
2019-09-30T07:11:07.621703Z info mcp (re)trying to establish new MCP sink stream
gc 3 @0.041s 8%: 0.16+9.6+0.004 ms clock, 0.33+8.0/9.2/12+0.008 ms cpu, 6->6->4 MB, 7 MB goal, 6 P
2019-09-30T07:11:07.623952Z info pickfirstBalancer: HandleSubConnStateChange: 0xc000bac360, READY
2019-09-30T07:11:07.632586Z info mcp New MCP sink stream created
2019-09-30T07:11:07.641645Z info Starting runtime config watch...
2019-09-30T07:11:07.641726Z info Built new config.Snapshot: id='0'
2019-09-30T07:11:07.641765Z info Cleaning up handler table, with config ID:-1
2019-09-30T07:11:07.644321Z info Built new config.Snapshot: id='1'
2019-09-30T07:11:07.644378Z info adapters getting kubeconfig from: "" {"adapter": "kubernetesenv.istio-system"}
2019-09-30T07:11:07.644411Z warn Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
2019-09-30T07:11:07.645661Z info adapters Waiting for kubernetes cache sync... {"adapter": "kubernetesenv.istio-system"}
gc 4 @0.110s 4%: 0.073+14+0.004 ms clock, 0.14+1.2/7.9/30+0.008 ms cpu, 8->9->6 MB, 9 MB goal, 6 P
2019-09-30T07:11:07.745792Z info adapters Cache sync successful. {"adapter": "kubernetesenv.istio-system"}
2019-09-30T07:11:07.745868Z info adapters getting kubeconfig from: "" {"adapter": "kubernetesenv.istio-system"}
2019-09-30T07:11:07.745913Z warn Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
2019-09-30T07:11:07.747326Z info Setting up event handlers
2019-09-30T07:11:07.747502Z info Starting Secrets controller
2019-09-30T07:11:07.747720Z info Waiting for informer caches to sync
2019-09-30T07:11:07.748131Z info Cleaning up handler table, with config ID:0
2019-09-30T07:11:07.748700Z info Starting monitor server...
Istio Mixer: root@0798c310-d8da-11e9-89ce-86619cd7d75e-docker.io/istio-1.2.6-b26019f05195188378be9118b0f0fa7613f7ebb4-dirty-Modified
Starting gRPC server on port 9091
2019-09-30T07:11:07.759675Z info ControlZ available at 127.0.0.1:9876
gc 5 @0.676s 1%: 0.045+8.6+0.006 ms clock, 0.090+6.7/8.4/10+0.013 ms cpu, 12->12->6 MB, 13 MB goal, 6 P
GC forced
gc 6 @120.763s 0%: 0.015+8.3+0.004 ms clock, 0.030+0/8.2/6.3+0.009 ms cpu, 12->12->6 MB, 13 MB goal, 6 P
scvg0: inuse: 12, idle: 46, sys: 59, released: 0, consumed: 59 (MB)
GC forced
gc 7 @240.863s 0%: 0.033+8.3+0.005 ms clock, 0.067+0/7.3/12+0.011 ms cpu, 12->12->6 MB, 13 MB goal, 6 P
scvg1: inuse: 13, idle: 45, sys: 59, released: 0, consumed: 59 (MB)
GC forced
gc 8 @360.963s 0%: 0.012+8.7+0.004 ms clock, 0.025+0/8.8/8.7+0.009 ms cpu, 12->12->6 MB, 13 MB goal, 6 P
scvg2: 0 MB released
scvg2: inuse: 14, idle: 44, sys: 59, released: 0, consumed: 58 (MB)
GC forced
gc 9 @481.039s 0%: 0.013+27+0.005 ms clock, 0.027+0/13/57+0.010 ms cpu, 12->12->6 MB, 13 MB goal, 6 P
scvg3: inuse: 15, idle: 43, sys: 59, released: 0, consumed: 58 (MB)
GC forced
gc 10 @601.163s 0%: 0.13+7.7+0.004 ms clock, 0.27+0/7.9/8.3+0.009 ms cpu, 11->11->6 MB, 13 MB goal, 6 P
GC forced
gc 11 @721.263s 0%: 0.016+11+0.004 ms clock, 0.032+0/11/28+0.008 ms cpu, 12->12->6 MB, 13 MB goal, 6 P
scvg4: inuse: 12, idle: 46, sys: 59, released: 0, consumed: 58 (MB)
GC forced
gc 12 @841.363s 0%: 0.048+8.6+0.004 ms clock, 0.097+0/8.7/19+0.009 ms cpu, 10->10->6 MB, 13 MB goal, 6 P
scvg5: inuse: 13, idle: 45, sys: 59, released: 0, consumed: 58 (MB)
GC forced
gc 13 @961.463s 0%: 0.014+8.2+0.004 ms clock, 0.028+0/8.5/6.3+0.009 ms cpu, 12->12->6 MB, 13 MB goal, 6 P
scvg6: 0 MB released
scvg6: inuse: 14, idle: 44, sys: 59, released: 0, consumed: 58 (MB)
GC forced
gc 14 @1081.563s 0%: 0.041+12+0.004 ms clock, 0.083+0/13/10+0.009 ms cpu, 12->12->6 MB, 13 MB goal, 6 P
Istio-Proxy:
2019-09-30T07:03:44.608327Z info PilotSAN []string(nil)
2019-09-30T07:03:44.608429Z info Starting proxy agent
2019-09-30T07:03:44.608561Z info watching /etc/certs for changes
2019-09-30T07:03:44.608572Z info Received new config, resetting budget
2019-09-30T07:03:44.608576Z info Reconciling retry (budget 10)
2019-09-30T07:03:44.608600Z info Epoch 0 starting
2019-09-30T07:03:44.608633Z info Envoy command: [-c /etc/istio/proxy/envoy.yaml --restart-epoch 0 --drain-time-s 45 --parent-shutdown-time-s 60 --service-cluster istio-policy --service-node sidecar~10.244.1.237~istio-policy-8fbf8b499-nvwsr.istio-system~istio-system.svc.cluster.local --max-obj-name-len 189 --local-address-ip-version v4 --allow-unknown-fields -l warning --component-log-level misc:error]
2019-09-30T07:04:01.697784Z info watchFileEvents: "/etc/certs/..2019_09_30_07_04_01.483358293": CREATE
2019-09-30T07:04:01.697824Z info watchFileEvents: "/etc/certs/..2019_09_30_07_04_01.483358293": MODIFY|ATTRIB
2019-09-30T07:04:01.697832Z info watchFileEvents: "/etc/certs/key.pem": CREATE
2019-09-30T07:04:01.697897Z info watchFileEvents: "/etc/certs/root-cert.pem": CREATE
2019-09-30T07:04:01.697947Z info watchFileEvents: "/etc/certs/cert-chain.pem": CREATE
2019-09-30T07:04:01.698048Z info watchFileEvents: "/etc/certs/..data_tmp": RENAME
2019-09-30T07:04:01.698181Z info watchFileEvents: "/etc/certs/..data": CREATE
2019-09-30T07:04:01.698200Z info watchFileEvents: "/etc/certs/..2019_09_30_07_03_38.857856170": DELETE
2019-09-30T07:04:11.698075Z info watchFileEvents: notifying
2019-09-30T07:04:11.698499Z info Received new config, resetting budget
2019-09-30T07:04:11.698603Z info Reconciling retry (budget 10)
2019-09-30T07:04:11.698683Z info Epoch 1 starting
2019-09-30T07:04:11.698791Z info Envoy command: [-c /etc/istio/proxy/envoy.yaml --restart-epoch 1 --drain-time-s 45 --parent-shutdown-time-s 60 --service-cluster istio-policy --service-node sidecar~10.244.1.237~istio-policy-8fbf8b499-nvwsr.istio-system~istio-system.svc.cluster.local --max-obj-name-len 189 --local-address-ip-version v4 --allow-unknown-fields -l warning --component-log-level misc:error]
[2019-09-30 07:04:11.717][12][warning][main] [external/envoy/source/server/server.cc:537] shutting down admin due to child startup
[2019-09-30 07:04:11.718][12][warning][main] [external/envoy/source/server/server.cc:546] terminating parent process
[2019-09-30 07:05:11.733][12][warning][main] [external/envoy/source/server/server.cc:426] caught SIGTERM
2019-09-30T07:05:11.736753Z info Epoch 0 exited normally
2019-09-30T07:05:11.737082Z warn Failed to delete config file /etc/istio/proxy/envoy-rev0.json for 0, remove /etc/istio/proxy/envoy-rev0.json: no such file or directory
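For reference, the failing liveness probe appears to be an HTTP GET against the mixer monitoring port (15014, path /version). A way to reproduce that check manually from a workstation would be something like the following (illustrative only; I have not captured its output here):

kubectl -n istio-system port-forward pod/istio-policy-8fbf8b499-nvwsr 15014:15014
curl http://localhost:15014/version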