I have several .NET Core applications that shut down for no apparent reason. It looks like this started happening when the health checks were introduced, but I cannot see any kill command in Kubernetes.
cmd
kubectl describe pod mypod
output (the restart count is so high because of a daily shutdown; this is a staging environment)
Name:           mypod
...
Status:         Running
...
Controlled By:  ReplicaSet/mypod-deployment-6dbb6bcb65
Containers:
  myservice:
    State:          Running
      Started:      Fri, 01 Nov 2019 09:59:40 +0100
    Last State:     Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Fri, 01 Nov 2019 07:19:07 +0100
      Finished:     Fri, 01 Nov 2019 09:59:37 +0100
    Ready:          True
    Restart Count:  19
    Liveness:       http-get http://:80/liveness delay=10s timeout=1s period=5s #success=1 #failure=10
    Readiness:      http-get http://:80/hc delay=10s timeout=1s period=5s #success=1 #failure=10
...
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
...
Events:
  Type     Reason     Age                    From                               Message
  ----     ------     ----                   ----                               -------
  Warning  Unhealthy  18m (x103 over 3h29m)  kubelet, aks-agentpool-40946522-0  Readiness probe failed: Get http://10.244.0.146:80/hc: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
  Warning  Unhealthy  18m (x29 over 122m)    kubelet, aks-agentpool-40946522-0  Liveness probe failed: Get http://10.244.0.146:80/liveness: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
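For reference, the termination details of the previous container can also be read directly from the pod status; a minimal sketch using the standard Pod status fields and the same pod name as above:
cmd
kubectl get pod mypod -o jsonpath='{.status.containerStatuses[0].lastState.terminated}'
This prints the same Reason (Completed) and Exit Code (0) that describe reports above.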
These are the pod logs
cmd
kubectl logs mypod --previous
output
Hosting environment: Production
Content root path: /app
Now listening on: http://[::]:80
Application started. Press Ctrl+C to shut down.
Application is shutting down...
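To line the shutdown message up with the probe failures, the same previous-container logs can be re-read with timestamps; a minimal sketch using stock kubectl flags:
cmd
kubectl logs mypod --previous --timestamps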
The corresponding events from the Azure (AKS) cluster
cmd
kubectl get events
output (I am missing the kill event here. I assume the pod was not restarted because of the several failed health checks; a filtered query for the kill event is sketched after this list)
LAST SEEN TYPE REASON OBJECT MESSAGE
39m Normal NodeHasSufficientDisk node/aks-agentpool-40946522-0 Node aks-agentpool-40946522-0 status is now: NodeHasSufficientDisk
39m Normal NodeHasSufficientMemory node/aks-agentpool-40946522-0 Node aks-agentpool-40946522-0 status is now: NodeHasSufficientMemory
39m Normal NodeHasNoDiskPressure node/aks-agentpool-40946522-0 Node aks-agentpool-40946522-0 status is now: NodeHasNoDiskPressure
39m Normal NodeReady node/aks-agentpool-40946522-0 Node aks-agentpool-40946522-0 status is now: NodeReady
39m Normal CREATE ingress/my-ingress Ingress default/ebizsuite-ingress
39m Normal CREATE ingress/my-ingress Ingress default/ebizsuite-ingress
7m2s Warning Unhealthy pod/otherpod2 Readiness probe failed: Get http://10.244.0.158:80/hc: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
7m1s Warning Unhealthy pod/otherpod2 Liveness probe failed: Get http://10.244.0.158:80/liveness: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
40m Warning Unhealthy pod/otherpod2 Liveness probe failed: Get http://10.244.0.158:80/liveness: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
44m Warning Unhealthy pod/otherpod1 Liveness probe failed: Get http://10.244.0.151:80/liveness: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
5m35s Warning Unhealthy pod/otherpod1 Readiness probe failed: Get http://10.244.0.151:80/hc: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
40m Warning Unhealthy pod/otherpod1 Readiness probe failed: Get http://10.244.0.151:80/hc: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
8m8s Warning Unhealthy pod/mypod Readiness probe failed: Get http://10.244.0.146:80/hc: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
8m7s Warning Unhealthy pod/mypod Liveness probe failed: Get http://10.244.0.146:80/liveness: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
0s Warning Unhealthy pod/otherpod1 Readiness probe failed: Get http://10.244.0.151:80/hc: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
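Since the expected kill event does not show up in the unfiltered list, it can also be queried for explicitly; a minimal sketch using the standard event field selectors and the pod name from above:
cmd
kubectl get events --field-selector involvedObject.name=mypod,reason=Killing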
curl from another pod (I ran this in a very long loop, once per second, and never got anything other than 200 OK; roughly the loop sketched at the end of this post)
kubectl exec -t otherpod1 -- curl --fail http://10.244.0.146:80/hc
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
{"status":"Healthy","totalDuration":"00:00:00.0647250","entries":{"self":{"data":{},"duration":"00:00:00.0000012","status":"Healthy"},"warmup":{"data":{},"duration":"00:00:00.0000007","status":"Healthy"},"TimeDB-check":{"data":{},"duration":"00:00:00.0341533","status":"Healthy"},"time-blob-storage-check":{"data":{},"duration":"00:00:00.0108192","status":"Healthy"},"time-rabbitmqbus-check":{"data":{},"duration":"00:00:00.0646841","status":"Healthy"}}}100 454 0 454 0 0 6579 0 --:--:-- --:--:-- --:--:-- 6579
curl
kubectl exec -t otherpod1 -- curl --fail http://10.244.0.146:80/liveness
Healthy % Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 7 0 7 0 0 7000 0 --:--:-- --:--:-- --:--:-- 7000
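The loop mentioned above was essentially the following; a rough sketch rather than the exact script, reusing the pod name and endpoint shown above:
cmd
while true; do kubectl exec -t otherpod1 -- curl --fail http://10.244.0.146:80/hc; sleep 1; done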