I am trying to deploy an application from my personal Docker registry to Azure AKS pods. I have a Python application that just logs some data:
import sys
import time
import logging

logger = logging.getLogger('main')
logger.setLevel(logging.INFO)
handler = logging.StreamHandler(sys.stdout)
handler.setLevel(logging.DEBUG)
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
handler.setFormatter(formatter)
logger.addHandler(handler)


def main():
    logger.info('This is test')
    time.sleep(5)


while True:
    try:
        main()
    except Exception:
        logger.critical('Something critical.', exc_info=1)

    logger.info('Sleep for 5 seconds')
    time.sleep(5)
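A side note on the handler setup: Python's logging writes to stderr by default, so the explicit StreamHandler(sys.stdout) is what makes these lines appear on the container's stdout stream, which is what docker logs and kubectl logs capture. A minimal sketch of the same setup, using an in-memory buffer in place of sys.stdout to show the formatted output (logger name and format are illustrative only):

```python
import io
import logging

# Capture the handler's stream in memory instead of sys.stdout,
# just to demonstrate what the formatter produces.
buf = io.StringIO()
logger = logging.getLogger('demo')
logger.setLevel(logging.INFO)
handler = logging.StreamHandler(buf)
handler.setFormatter(logging.Formatter('%(name)s - %(levelname)s - %(message)s'))
logger.addHandler(handler)

logger.info('This is test')
print(buf.getvalue().strip())  # → demo - INFO - This is test
```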
And this is my Dockerfile:
FROM python:3.7-alpine
RUN apk update && apk upgrade
ARG APP_DIR=/app
RUN mkdir -p ${APP_DIR}
WORKDIR ${APP_DIR}
COPY requirements.txt .
RUN \
apk add --no-cache --virtual .build-deps gcc python3-dev musl-dev linux-headers && \
python3 -m pip install -r requirements.txt --no-cache-dir && \
apk --purge del .build-deps
COPY app .
ENTRYPOINT [ "python", "-u", "run.py" ]
I can run the container on my local machine; here are some of the logs:
docker logs -tf my-container
2020-02-07T10:26:57.939062754Z 2020-02-07 10:26:57,938 - main - INFO - This is test
2020-02-07T10:27:02.944500969Z 2020-02-07 10:27:02,943 - main - INFO - Sleep for 5 seconds
2020-02-07T10:27:07.948643749Z 2020-02-07 10:27:07,948 - main - INFO - This is test
2020-02-07T10:27:12.953683767Z 2020-02-07 10:27:12,953 - main - INFO - Sleep for 5 seconds
2020-02-07T10:27:17.955954057Z 2020-02-07 10:27:17,955 - main - INFO - This is test
2020-02-07T10:27:22.960453835Z 2020-02-07 10:27:22,959 - main - INFO - Sleep for 5 seconds
2020-02-07T10:27:27.964402790Z 2020-02-07 10:27:27,963 - main - INFO - This is test
2020-02-07T10:27:32.968647112Z 2020-02-07 10:27:32,967 - main - INFO - Sleep for 5 seconds
I try to deploy the pod with this yaml file via kubectl apply -f onepod.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: my-container
  labels:
    platform: xxx
    event: yyy
    protocol: zzz
spec:
  imagePullSecrets:
    - name: myregistry
  containers:
    - name: my-container
      image: mypersonalregistry/my-container:test
The pod is created but keeps the CrashLoopBackOff status, with no log output at all from the kubectl logs command. I tried kubectl describe pod, but nothing in the events helped:
Name:         my-container
Namespace:    default
Priority:     0
Node:         aks-agentpool-56095163-vmss000000/10.240.0.4
Start Time:   Fri, 07 Feb 2020 11:41:48 +0100
Labels:       event=yyy
              platform=xxx
              protocol=zzz
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"labels":{"event":"yyy","platform":"xxx","protocol":"zzz"},"name":"my-container...
Status:       Running
IP:           10.244.1.33
IPs:          <none>
Containers:
  my-container:
    Container ID:   docker://c497674f86deadca2ef874f8a94361e26c770314e9cff1729bf20b5943d1a700
    Image:          mypersonalregistry/my-container:test
    Image ID:       docker-pullable://mypersonalregistry/my-container@sha256:c4208f42fea9a99dcb3b5ad8b53bac5e39bc54b8d89a577f85fec1a94535bc39
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Fri, 07 Feb 2020 12:28:10 +0100
      Finished:     Fri, 07 Feb 2020 12:28:10 +0100
    Ready:          False
    Restart Count:  14
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-lv75n (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  default-token-lv75n:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-lv75n
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason     Age                    From                                        Message
  ----     ------     ----                   ----                                        -------
  Normal   Scheduled  49m                    default-scheduler                           Successfully assigned default/my-container to aks-agentpool-56095163-vmss000000
  Normal   Pulled     48m (x5 over 49m)      kubelet, aks-agentpool-56095163-vmss000000  Container image "mypersonalregistry/my-container:test" already present on machine
  Normal   Created    48m (x5 over 49m)      kubelet, aks-agentpool-56095163-vmss000000  Created container my-container
  Normal   Started    48m (x5 over 49m)      kubelet, aks-agentpool-56095163-vmss000000  Started container my-container
  Warning  BackOff    4m55s (x210 over 49m)  kubelet, aks-agentpool-56095163-vmss000000  Back-off restarting failed container
How can I find out why it runs on my computer but not in the Kubernetes cluster?
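For reference, these are the standard diagnostic commands I am aware of for a pod in this state (these need a live cluster, so they are only a sketch; the pod name matches the manifest above):

```shell
# Logs from the previous (crashed) attempt rather than the current one
kubectl logs my-container --previous

# Full pod object as the API server sees it, including container state
kubectl get pod my-container -o yaml

# All events in the namespace, in case describe missed something
kubectl get events --sort-by=.metadata.creationTimestamp
```

Note that since the Last State above shows Reason: Completed with Exit Code 0, the container seems to exit cleanly almost immediately rather than crash, which is why I expected at least some output from kubectl logs.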