How to implement load balancing for kube-apiserver with an nginx L7 proxy and SSL authentication?
0 votes
04 March 2020

I am new to Kubernetes and recently tried to set up load balancing for kube-apiserver using nginx as an L7 proxy with SSL authentication. Nginx works fine in L4 proxy mode, but I ran into problems in L7 mode. I chose L7 mode because I want to test nginx's retry mechanism for specific status codes (for example, 429). I ran the command "kubectl apply -f nginx-deployment.yaml", and the YAML file looks like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80

But I found that although the Deployment object was created, no Pods exist at all:

➜  ~ kubectl get deploy --all-namespaces
NAMESPACE     NAME               READY   UP-TO-DATE   AVAILABLE   AGE
kube-system   nginx-deployment   2/2     2            2           50m
➜  ~ kubectl get pod --all-namespaces
No resources found
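
A Deployment does not create Pods directly: the Deployment controller inside kube-controller-manager first has to create a ReplicaSet, and the ReplicaSet controller then creates the Pods. As a first check (just a diagnostic sketch, output not shown here), one could verify whether any ReplicaSet exists at all and whether any events were recorded in the namespace:

kubectl get rs -n kube-system -o wide
kubectl get events -n kube-system --sort-by=.metadata.creationTimestamp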

The details of the Deployment object are shown below:

➜  ~ kubectl describe deploy nginx-deployment -n kube-system
Name:                   nginx-deployment
Namespace:              kube-system
CreationTimestamp:      Wed, 04 Mar 2020 12:42:34 +0000
Labels:                 <none>
Annotations:            kubectl.kubernetes.io/last-applied-configuration:
                          {"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"name":"nginx-deployment","namespace":"kube-system"},"spec":{"rep...
Selector:               app=nginx
Replicas:               2 desired | 0 updated | 0 total | 0 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=nginx
  Containers:
   nginx:
    Image:        nginx:1.7.9
    Port:         80/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
OldReplicaSets:   <none>
NewReplicaSet:    <none>
Events:           <none>
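
Here the Replicas line reads "0 updated | 0 total | 0 available" and NewReplicaSet is <none>, which looks as if the Deployment controller never reconciled the object. A rough way to check whether kube-controller-manager is healthy and holds leadership (a sketch; the secure port 10257 is taken from my configuration further below, and querying it on localhost is an assumption):

systemctl status kube-controller-manager
curl -k https://127.0.0.1:10257/healthz
kubectl get endpoints kube-controller-manager -n kube-system -o yaml   # shows the current leader annotation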

When I looked at the events, I found no Pod-related events at all:

➜  ~ kubectl get event --all-namespaces       
NAMESPACE     LAST SEEN   TYPE     REASON                    OBJECT                              MESSAGE
default       6m12s       Normal   Starting                  node/node1                          Starting kubelet.
default       6m12s       Normal   NodeHasSufficientMemory   node/node1                          Node node1 status is now: NodeHasSufficientMemory
default       6m12s       Normal   NodeHasNoDiskPressure     node/node1                          Node node1 status is now: NodeHasNoDiskPressure
default       6m12s       Normal   NodeHasSufficientPID      node/node1                          Node node1 status is now: NodeHasSufficientPID
default       6m12s       Normal   NodeAllocatableEnforced   node/node1                          Updated Node Allocatable limit across pods
default       6m2s        Normal   NodeReady                 node/node1                          Node node1 status is now: NodeReady
default       5m1s        Normal   Starting                  node/node1                          Starting kube-proxy.
default       6m5s        Normal   Starting                  node/node2                          Starting kubelet.
default       6m5s        Normal   NodeHasSufficientMemory   node/node2                          Node node2 status is now: NodeHasSufficientMemory
default       6m5s        Normal   NodeHasNoDiskPressure     node/node2                          Node node2 status is now: NodeHasNoDiskPressure
default       6m5s        Normal   NodeHasSufficientPID      node/node2                          Node node2 status is now: NodeHasSufficientPID
default       6m5s        Normal   NodeAllocatableEnforced   node/node2                          Updated Node Allocatable limit across pods
default       5m55s       Normal   NodeReady                 node/node2                          Node node2 status is now: NodeReady
default       4m58s       Normal   Starting                  node/node2                          Starting kube-proxy.
default       6m30s       Normal   Starting                  node/node3                          Starting kubelet.
default       6m30s       Normal   NodeHasSufficientMemory   node/node3                          Node node3 status is now: NodeHasSufficientMemory
default       6m30s       Normal   NodeHasNoDiskPressure     node/node3                          Node node3 status is now: NodeHasNoDiskPressure
default       6m30s       Normal   NodeHasSufficientPID      node/node3                          Node node3 status is now: NodeHasSufficientPID
default       6m30s       Normal   NodeAllocatableEnforced   node/node3                          Updated Node Allocatable limit across pods
default       6m25s       Normal   NodeReady                 node/node3                          Node node3 status is now: NodeReady
default       6m5s        Normal   Starting                  node/node3                          Starting kubelet.
default       6m5s        Normal   NodeHasSufficientMemory   node/node3                          Node node3 status is now: NodeHasSufficientMemory
default       6m5s        Normal   NodeHasNoDiskPressure     node/node3                          Node node3 status is now: NodeHasNoDiskPressure
default       6m5s        Normal   NodeHasSufficientPID      node/node3                          Node node3 status is now: NodeHasSufficientPID
default       6m5s        Normal   NodeAllocatableEnforced   node/node3                          Updated Node Allocatable limit across pods
default       5m55s       Normal   NodeReady                 node/node3                          Node node3 status is now: NodeReady
default       4m56s       Normal   Starting                  node/node3                          Starting kube-proxy.
kube-system   6m24s       Normal   LeaderElection            endpoints/kube-controller-manager   master1_ba3e0eac-2c8a-483a-acd6-e467ef013967 became leader
kube-system   6m24s       Normal   LeaderElection            lease/kube-controller-manager       master1_ba3e0eac-2c8a-483a-acd6-e467ef013967 became leader
kube-system   5m15s       Normal   LeaderElection            endpoints/kube-controller-manager   master1_1554b52b-29c2-49ca-9166-172b8e22e639 became leader
kube-system   5m15s       Normal   LeaderElection            lease/kube-controller-manager       master1_1554b52b-29c2-49ca-9166-172b8e22e639 became leader
kube-system   2m58s       Normal   LeaderElection            endpoints/kube-controller-manager   master3_4719e0df-a3e0-400b-8e76-f00c97e4276e became leader
kube-system   2m58s       Normal   LeaderElection            lease/kube-controller-manager       master3_4719e0df-a3e0-400b-8e76-f00c97e4276e became leader
kube-system   101s        Normal   LeaderElection            endpoints/kube-controller-manager   master3_10b341c2-c43f-4a0c-b968-df80bd3c6f5f became leader
kube-system   101s        Normal   LeaderElection            lease/kube-controller-manager       master3_10b341c2-c43f-4a0c-b968-df80bd3c6f5f became leader
kube-system   6m19s       Normal   LeaderElection            endpoints/kube-scheduler            master1_de7c551d-c601-41a0-96e7-e3b6c0c91bbb became leader
kube-system   6m19s       Normal   LeaderElection            lease/kube-scheduler                master1_de7c551d-c601-41a0-96e7-e3b6c0c91bbb became leader
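
All of these are node startup and leader-election events; there is nothing about the Deployment, a ReplicaSet, or Pod scheduling. To make sure nothing was simply missed, the events could also be filtered per object (a sketch using the standard involvedObject field selectors):

kubectl get events -n kube-system --field-selector involvedObject.name=nginx-deployment
kubectl get events -n kube-system --field-selector involvedObject.kind=ReplicaSet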

I enabled the nginx SSL proxy feature, configured as follows:

upstream kubernetes-api-cluster {
    server 192.168.1.67:6443 weight=100 max_fails=0 fail_timeout=3s;
    server 192.168.1.68:6443 weight=100 max_fails=0 fail_timeout=3s; 
    server 192.168.1.69:6443 weight=100 max_fails=0 fail_timeout=3s; 
} 

server { 
    listen 8443 ssl;
    ssl_certificate /etc/nginx/ssl/master/kube-apiserver.pem;           # kube-apiserver cert
    ssl_certificate_key /etc/nginx/ssl/master/kube-apiserver-key.pem;   # kube-apiserver key
    ssl_trusted_certificate /etc/nginx/ssl/ca.pem;                      # ca.pem
    ssl_prefer_server_ciphers on;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS;
    location / {
        proxy_ssl_certificate /etc/nginx/ssl/admin.pem;                 # kubectl cert
        proxy_ssl_certificate_key /etc/nginx/ssl/admin-key.pem;         # kubectl key
        proxy_ssl_trusted_certificate /etc/nginx/ssl/ca.pem;            # ca.pem
        proxy_pass https://kubernetes-api-cluster;
        proxy_next_upstream error timeout http_500 http_502 http_503 http_504 http_403 http_429 non_idempotent;
        proxy_next_upstream_timeout 3s;
        proxy_next_upstream_tries 5;
        proxy_ignore_client_abort on;
        proxy_set_header Host $host;
        proxy_set_header X-Real-Ip $remote_addr;
        proxy_set_header X-Forwarded-For $remote_addr;
    }
    access_log /var/log/nginx/access.log default;
}
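
With this configuration nginx terminates TLS on port 8443 and opens a new TLS connection to the upstream, so every proxied request is authenticated to kube-apiserver with the admin client certificate from proxy_ssl_certificate, regardless of which client originally connected. To verify the L7 path itself, a request through the proxy can be compared with a direct request to one apiserver (a sketch; 127.0.0.1:8443 is an assumed address for this nginx instance, and -k may be needed if the certificate SANs do not cover it):

# Through nginx: the upstream client certificate is supplied by nginx itself
curl --cacert /etc/nginx/ssl/ca.pem https://127.0.0.1:8443/version

# Directly against one apiserver, using the same admin certificate
curl --cacert /etc/nginx/ssl/ca.pem --cert /etc/nginx/ssl/admin.pem --key /etc/nginx/ssl/admin-key.pem https://192.168.1.67:6443/version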

The kube-apiserver configuration is as follows:

KUBE_APISERVER_OPTS="--logtostderr=true \
--v=7 \
--anonymous-auth=false \
--etcd-servers=https://192.168.1.67:2379,https://192.168.1.68:2379,https://192.168.1.69:2379 \
--etcd-cafile=/opt/etcd/ssl/ca.pem \
--etcd-certfile=/opt/etcd/ssl/server.pem \
--etcd-keyfile=/opt/etcd/ssl/server-key.pem \
--service-cluster-ip-range=10.254.0.0/16 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \
--bind-address=192.168.1.67 \
--secure-port=6443 \
--client-ca-file=/opt/kubernetes/ssl/ca.pem \
--token-auth-file=/opt/kubernetes/cfg/token.csv \
--allow-privileged=true \
--tls-cert-file=/opt/kubernetes/ssl/master/kube-apiserver.pem  \
--tls-private-key-file=/opt/kubernetes/ssl/master/kube-apiserver-key.pem \
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
--advertise-address=192.168.1.67 \
--authorization-mode=RBAC,Node \
--kubelet-https=true \
--enable-bootstrap-token-auth \
--kubelet-certificate-authority=/opt/kubernetes/ssl/ca.pem \
--kubelet-client-key=/opt/kubernetes/ssl/master/kube-apiserver-key.pem \
--kubelet-client-certificate=/opt/kubernetes/ssl/master/kube-apiserver.pem \
--service-node-port-range=30000-50000"
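
To rule out the apiservers themselves, each backend from the nginx upstream block can be probed directly on /healthz with a client certificate (a sketch; the /opt/kubernetes/ssl paths are the ones used by the other control-plane components and are assumed to be present on the machine running the check):

for ip in 192.168.1.67 192.168.1.68 192.168.1.69; do
    curl --cacert /opt/kubernetes/ssl/ca.pem --cert /opt/kubernetes/ssl/admin.pem --key /opt/kubernetes/ssl/admin-key.pem "https://${ip}:6443/healthz"
    echo "  <- ${ip}"
done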

The kube-controller-manager configuration is as follows:

KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \
--v=7 \
--bind-address=0.0.0.0 \
--cluster-name=kubernetes \
--kubeconfig=/opt/kubernetes/cfg/kube-controller-manager.kubeconfig \
--requestheader-client-ca-file=/opt/kubernetes/ssl/ca.pem \
--authentication-kubeconfig=/opt/kubernetes/cfg/kube-controller-manager.kubeconfig \
--authorization-kubeconfig=/opt/kubernetes/cfg/kube-controller-manager.kubeconfig \
--leader-elect=true \
--service-cluster-ip-range=10.254.0.0/16 \
--controllers=*,bootstrapsigner,tokencleaner \
--tls-cert-file=/opt/kubernetes/ssl/admin.pem \
--tls-private-key-file=/opt/kubernetes/ssl/admin-key.pem \
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem  \
--root-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \
--secure-port=10257 \
--use-service-account-credentials=true \
--experimental-cluster-signing-duration=87600h0m0s"
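
Since the Deployment and ReplicaSet controllers run inside kube-controller-manager, the kubeconfig referenced here determines whether they talk to kube-apiserver directly or through the nginx proxy, and at --v=7 the component logs the API requests it makes. Two checks that could narrow this down (a sketch; the kubeconfig path is the one from the options above):

# Which server URL do the controllers actually use?
kubectl config view --kubeconfig=/opt/kubernetes/cfg/kube-controller-manager.kubeconfig -o jsonpath='{.clusters[0].cluster.server}'

# Look for rejected or failing ReplicaSet writes in the controller-manager log
journalctl -u kube-controller-manager | grep -iE 'replicaset|forbidden|unauthorized|error' | tail -n 50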

The kube-scheduler configuration is shown below:

KUBE_SCHEDULER_OPTS="--logtostderr=true \
--v=7 \
--bind-address=0.0.0.0 \
--port=10251 \
--secure-port=10259 \
--kubeconfig=/opt/kubernetes/cfg/kube-scheduler.kubeconfig \
--requestheader-client-ca-file=/opt/kubernetes/ssl/ca.pem \
--authentication-kubeconfig=/opt/kubernetes/cfg/kube-scheduler.kubeconfig \
--authorization-kubeconfig=/opt/kubernetes/cfg/kube-scheduler.kubeconfig \
--client-ca-file=/opt/kubernetes/ssl/ca.pem \
--tls-cert-file=/opt/kubernetes/ssl/admin.pem \
--tls-private-key-file=/opt/kubernetes/ssl/admin-key.pem \
--leader-elect=true"
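
The scheduler only becomes relevant once Pods exist, but the apiserver's own view of the control-plane components can still be checked quickly (a sketch; as far as I know this command is still available in my Kubernetes version):

kubectl get componentstatuses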

Part of the kubelet log is shown below:

Mar 04 15:31:02 master2 kubelet[25193]: I0304 15:31:02.790115   25193 factory.go:170] Factory "raw" can handle container "/user.slice/user-1000.slice", but ignoring.
Mar 04 15:31:02 master2 kubelet[25193]: I0304 15:31:02.790429   25193 factory.go:170] Factory "raw" can handle container "/system.slice/systemd-update-utmp.service", but ignoring.
Mar 04 15:31:02 master2 kubelet[25193]: I0304 15:31:02.790489   25193 factory.go:170] Factory "raw" can handle container "/system.slice/systemd-modules-load.service", but ignoring.
Mar 04 15:31:02 master2 kubelet[25193]: I0304 15:31:02.790537   25193 factory.go:170] Factory "raw" can handle container "/system.slice/systemd-resolved.service", but ignoring.
Mar 04 15:31:02 master2 kubelet[25193]: I0304 15:31:02.790585   25193 factory.go:170] Factory "raw" can handle container "/system.slice/systemd-tmpfiles-setup.service", but ignoring.
Mar 04 15:31:02 master2 kubelet[25193]: I0304 15:31:02.790638   25193 factory.go:170] Factory "raw" can handle container "/system.slice/etcd.service", but ignoring.
Mar 04 15:31:02 master2 kubelet[25193]: I0304 15:31:02.790690   25193 factory.go:170] Factory "raw" can handle container "/system.slice/systemd-tmpfiles-setup-dev.service", but ignoring.
Mar 04 15:31:02 master2 kubelet[25193]: I0304 15:31:02.790738   25193 factory.go:170] Factory "raw" can handle container "/system.slice/lvm2-lvmetad.service", but ignoring.
Mar 04 15:31:02 master2 kubelet[25193]: I0304 15:31:02.790786   25193 factory.go:170] Factory "raw" can handle container "/system.slice/keyboard-setup.service", but ignoring.
Mar 04 15:31:02 master2 kubelet[25193]: I0304 15:31:02.790829   25193 factory.go:170] Factory "raw" can handle container "/user.slice/user-0.slice", but ignoring.
Mar 04 15:31:02 master2 kubelet[25193]: I0304 15:31:02.790874   25193 factory.go:170] Factory "raw" can handle container "/user.slice/user-0.slice/user@0.service", but ignoring.
Mar 04 15:31:02 master2 kubelet[25193]: I0304 15:31:02.790924   25193 factory.go:170] Factory "raw" can handle container "/system.slice/kube-scheduler.service", but ignoring.
Mar 04 15:31:02 master2 kubelet[25193]: I0304 15:31:02.790969   25193 factory.go:170] Factory "systemd" can handle container "/system.slice/sys-fs-fuse-connections.mount", but ignoring.
Mar 04 15:31:02 master2 kubelet[25193]: I0304 15:31:02.791017   25193 factory.go:170] Factory "raw" can handle container "/system.slice/systemd-random-seed.service", but ignoring.
Mar 04 15:31:02 master2 kubelet[25193]: I0304 15:31:02.791060   25193 factory.go:170] Factory "raw" can handle container "/system.slice/lxcfs.service", but ignoring.
Mar 04 15:31:02 master2 kubelet[25193]: I0304 15:31:02.791108   25193 factory.go:170] Factory "raw" can handle container "/system.slice/containerd.service", but ignoring.
Mar 04 15:31:02 master2 kubelet[25193]: I0304 15:31:02.791154   25193 factory.go:170] Factory "raw" can handle container "/system.slice/cron.service", but ignoring.
Mar 04 15:31:02 master2 kubelet[25193]: I0304 15:31:02.791200   25193 factory.go:170] Factory "raw" can handle container "/system.slice/flanneld.service", but ignoring.
Mar 04 15:31:02 master2 kubelet[25193]: I0304 15:31:02.791250   25193 factory.go:170] Factory "raw" can handle container "/system.slice/snapd.socket", but ignoring.
Mar 04 15:31:02 master2 kubelet[25193]: I0304 15:31:02.791297   25193 factory.go:170] Factory "raw" can handle container "/system.slice/apport.service", but ignoring.
Mar 04 15:31:02 master2 kubelet[25193]: I0304 15:31:02.791342   25193 factory.go:170] Factory "raw" can handle container "/system.slice/setvtrgb.service", but ignoring.
Mar 04 15:31:02 master2 kubelet[25193]: I0304 15:31:02.791388   25193 factory.go:170] Factory "raw" can handle container "/user.slice", but ignoring.
Mar 04 15:31:02 master2 kubelet[25193]: I0304 15:31:02.791433   25193 factory.go:170] Factory "raw" can handle container "/system.slice/ufw.service", but ignoring.
Mar 04 15:31:02 master2 kubelet[25193]: I0304 15:31:02.791478   25193 factory.go:170] Factory "raw" can handle container "/system.slice/grub-common.service", but ignoring.
Mar 04 15:31:02 master2 kubelet[25193]: I0304 15:31:02.791523   25193 factory.go:170] Factory "raw" can handle container "/system.slice/kube-proxy.service", but ignoring.
Mar 04 15:31:02 master2 kubelet[25193]: I0304 15:31:02.791570   25193 factory.go:170] Factory "systemd" can handle container "/system.slice/snap-core-6350.mount", but ignoring.
Mar 04 15:31:02 master2 kubelet[25193]: I0304 15:31:02.791615   25193 factory.go:170] Factory "raw" can handle container "/system.slice", but ignoring.
Mar 04 15:31:02 master2 kubelet[25193]: I0304 15:31:02.791660   25193 factory.go:170] Factory "raw" can handle container "/system.slice/accounts-daemon.service", but ignoring.
Mar 04 15:31:02 master2 kubelet[25193]: I0304 15:31:02.791705   25193 factory.go:170] Factory "raw" can handle container "/system.slice/lxd.socket", but ignoring.
Mar 04 15:31:02 master2 kubelet[25193]: I0304 15:31:02.791750   25193 factory.go:170] Factory "raw" can handle container "/system.slice/systemd-user-sessions.service", but ignoring.
Mar 04 15:31:02 master2 kubelet[25193]: I0304 15:31:02.791795   25193 factory.go:170] Factory "raw" can handle container "/system.slice/systemd-sysctl.service", but ignoring.
Mar 04 15:31:02 master2 kubelet[25193]: I0304 15:31:02.791837   25193 factory.go:170] Factory "raw" can handle container "/system.slice/lxd-containers.service", but ignoring.
Mar 04 15:31:02 master2 kubelet[25193]: I0304 15:31:02.791889   25193 factory.go:170] Factory "raw" can handle container "/system.slice/polkit.service", but ignoring.
Mar 04 15:31:02 master2 kubelet[25193]: I0304 15:31:02.791938   25193 factory.go:170] Factory "raw" can handle container "/system.slice/systemd-journal-flush.service", but ignoring.
Mar 04 15:31:02 master2 kubelet[25193]: I0304 15:31:02.791987   25193 factory.go:170] Factory "raw" can handle container "/system.slice/systemd-networkd.service", but ignoring.
Mar 04 15:31:02 master2 kubelet[25193]: I0304 15:31:02.792033   25193 factory.go:170] Factory "raw" can handle container "/system.slice/vboxadd.service", but ignoring.
Mar 04 15:31:02 master2 kubelet[25193]: I0304 15:31:02.792078   25193 factory.go:170] Factory "raw" can handle container "/system.slice/console-setup.service", but ignoring.
Mar 04 15:31:02 master2 kubelet[25193]: I0304 15:31:02.792124   25193 factory.go:170] Factory "raw" can handle container "/system.slice/rsyslog.service", but ignoring.
Mar 04 15:31:02 master2 kubelet[25193]: I0304 15:31:02.792173   25193 factory.go:170] Factory "raw" can handle container "/system.slice/kmod-static-nodes.service", but ignoring.
Mar 04 15:31:02 master2 kubelet[25193]: I0304 15:31:02.792217   25193 factory.go:170] Factory "raw" can handle container "/system.slice/apparmor.service", but ignoring.
Mar 04 15:31:02 master2 kubelet[25193]: I0304 15:31:02.792294   25193 factory.go:170] Factory "raw" can handle container "/system.slice/irqbalance.service", but ignoring.
Mar 04 15:31:02 master2 kubelet[25193]: I0304 15:31:02.792374   25193 factory.go:170] Factory "raw" can handle container "/system.slice/systemd-networkd-wait-online.service", but ignoring.
Mar 04 15:31:02 master2 kubelet[25193]: I0304 15:31:02.792422   25193 factory.go:170] Factory "systemd" can handle container "/system.slice/sys-kernel-debug.mount", but ignoring.
Mar 04 15:31:02 master2 kubelet[25193]: I0304 15:31:02.792467   25193 factory.go:170] Factory "systemd" can handle container "/system.slice/dev-hugepages.mount", but ignoring.
Mar 04 15:31:02 master2 kubelet[25193]: I0304 15:31:02.792511   25193 factory.go:170] Factory "raw" can handle container "/system.slice/ssh.service", but ignoring.
Mar 04 15:31:02 master2 kubelet[25193]: I0304 15:31:02.792561   25193 factory.go:170] Factory "systemd" can handle container "/system.slice/dev-mqueue.mount", but ignoring.
Mar 04 15:31:02 master2 kubelet[25193]: I0304 15:31:02.792604   25193 factory.go:170] Factory "raw" can handle container "/system.slice/docker.socket", but ignoring.
Mar 04 15:31:02 master2 kubelet[25193]: I0304 15:31:02.792645   25193 factory.go:170] Factory "raw" can handle container "/system.slice/atd.service", but ignoring.
Mar 04 15:31:02 master2 kubelet[25193]: I0304 15:31:02.792689   25193 factory.go:170] Factory "raw" can handle container "/system.slice/unattended-upgrades.service", but ignoring.
Mar 04 15:31:02 master2 kubelet[25193]: I0304 15:31:02.792734   25193 factory.go:170] Factory "systemd" can handle container "/system.slice/sys-kernel-config.mount", but ignoring.
Mar 04 15:31:02 master2 kubelet[25193]: I0304 15:31:02.792779   25193 factory.go:170] Factory "raw" can handle container "/system.slice/systemd-journald.service", but ignoring.
Mar 04 15:31:02 master2 kubelet[25193]: I0304 15:31:02.792853   25193 factory.go:170] Factory "raw" can handle container "/system.slice/systemd-logind.service", but ignoring.
Mar 04 15:31:02 master2 kubelet[25193]: I0304 15:31:02.792918   25193 factory.go:170] Factory "raw" can handle container "/system.slice/snapd.service", but ignoring.
Mar 04 15:31:02 master2 kubelet[25193]: I0304 15:31:02.792963   25193 factory.go:170] Factory "raw" can handle container "/system.slice/kube-apiserver.service", but ignoring.
Mar 04 15:31:02 master2 kubelet[25193]: I0304 15:31:02.793008   25193 factory.go:170] Factory "raw" can handle container "/system.slice/systemd-remount-fs.service", but ignoring.
Mar 04 15:31:02 master2 kubelet[25193]: I0304 15:31:02.793052   25193 factory.go:170] Factory "raw" can handle container "/system.slice/cloud-init-local.service", but ignoring.
Mar 04 15:31:02 master2 kubelet[25193]: I0304 15:31:02.793097   25193 factory.go:170] Factory "raw" can handle container "/system.slice/systemd-udev-trigger.service", but ignoring.
Mar 04 15:31:02 master2 kubelet[25193]: I0304 15:31:02.793139   25193 factory.go:170] Factory "raw" can handle container "/system.slice/system-getty.slice", but ignoring.
Mar 04 15:31:02 master2 kubelet[25193]: I0304 15:31:02.793188   25193 factory.go:170] Factory "raw" can handle container "/system.slice/vboxadd-service.service", but ignoring.
Mar 04 15:31:02 master2 kubelet[25193]: I0304 15:31:02.793238   25193 factory.go:170] Factory "raw" can handle container "/system.slice/lvm2-monitor.service", but ignoring.
Mar 04 15:31:02 master2 kubelet[25193]: I0304 15:31:02.793283   25193 factory.go:170] Factory "raw" can handle container "/system.slice/systemd-udevd.service", but ignoring.
Mar 04 15:31:02 master2 kubelet[25193]: I0304 15:31:02.793328   25193 factory.go:170] Factory "raw" can handle container "/system.slice/ebtables.service", but ignoring.
Mar 04 15:31:02 master2 kubelet[25193]: I0304 15:31:02.793401   25193 factory.go:170] Factory "raw" can handle container "/system.slice/dbus.service", but ignoring.
Mar 04 15:31:02 master2 kubelet[25193]: I0304 15:31:02.793574   25193 factory.go:170] Factory "raw" can handle container "/user.slice/user-0.slice/session-615.scope", but ignoring.
Mar 04 15:31:02 master2 kubelet[25193]: I0304 15:31:02.793631   25193 factory.go:170] Factory "raw" can handle container "/system.slice/kube-controller-manager.service", but ignoring.
Mar 04 15:31:02 master2 kubelet[25193]: I0304 15:31:02.793685   25193 factory.go:170] Factory "raw" can handle container "/system.slice/cloud-final.service", but ignoring.
Mar 04 15:31:02 master2 kubelet[25193]: I0304 15:31:02.793733   25193 factory.go:170] Factory "raw" can handle container "/system.slice/blk-availability.service", but ignoring.
Mar 04 15:31:02 master2 kubelet[25193]: I0304 15:31:02.793781   25193 factory.go:170] Factory "raw" can handle container "/system.slice/snapd.seeded.service", but ignoring.
Mar 04 15:31:02 master2 kubelet[25193]: I0304 15:31:02.793834   25193 factory.go:170] Factory "systemd" can handle container "/system.slice/snap-core-8689.mount", but ignoring.
Mar 04 15:31:02 master2 kubelet[25193]: I0304 15:31:02.793882   25193 factory.go:170] Factory "raw" can handle container "/system.slice/cloud-config.service", but ignoring.
Mar 04 15:31:02 master2 kubelet[25193]: I0304 15:31:02.793965   25193 factory.go:170] Factory "raw" can handle container "/system.slice/cloud-init.service", but ignoring.
Mar 04 15:31:02 master2 kubelet[25193]: I0304 15:31:02.794032   25193 factory.go:170] Factory "raw" can handle container "/system.slice/networkd-dispatcher.service", but ignoring.
Mar 04 15:31:02 master2 kubelet[25193]: I0304 15:31:02.794085   25193 factory.go:170] Factory "raw" can handle container "/system.slice/swap.img.swap", but ignoring.
Mar 04 15:31:02 master2 kubelet[25193]: I0304 15:31:02.971487   25193 eviction_manager.go:229] eviction manager: synchronize housekeeping
Mar 04 15:31:02 master2 kubelet[25193]: I0304 15:31:02.984592   25193 helpers.go:781] eviction manager: observations: signal=nodefs.inodesFree, available: 2933345, capacity: 3276800, time: 2020-03-04 15:31:02.972779138 +0000 UTC m=+183.120598420
Mar 04 15:31:02 master2 kubelet[25193]: I0304 15:31:02.984788   25193 helpers.go:781] eviction manager: observations: signal=imagefs.available, available: 28144320Ki, capacity: 51340768Ki, time: 2020-03-04 15:31:02.972779138 +0000 UTC m=+183.120598420
Mar 04 15:31:02 master2 kubelet[25193]: I0304 15:31:02.984891   25193 helpers.go:781] eviction manager: observations: signal=imagefs.inodesFree, available: 2933345, capacity: 3276800, time: 2020-03-04 15:31:02.972779138 +0000 UTC m=+183.120598420
Mar 04 15:31:02 master2 kubelet[25193]: I0304 15:31:02.984986   25193 helpers.go:781] eviction manager: observations: signal=pid.available, available: 32388, capacity: 32Ki, time: 2020-03-04 15:31:02.977127121 +0000 UTC m=+183.124946382
Mar 04 15:31:02 master2 kubelet[25193]: I0304 15:31:02.985193   25193 helpers.go:781] eviction manager: observations: signal=memory.available, available: 779176Ki, capacity: 2040816Ki, time: 2020-03-04 15:31:02.972779138 +0000 UTC m=+183.120598420
Mar 04 15:31:02 master2 kubelet[25193]: I0304 15:31:02.985310   25193 helpers.go:781] eviction manager: observations: signal=allocatableMemory.available, available: 2040752Ki, capacity: 2040816Ki, time: 2020-03-04 15:31:02.981299125 +0000 UTC m=+183.129118432
Mar 04 15:31:02 master2 kubelet[25193]: I0304 15:31:02.985451   25193 helpers.go:781] eviction manager: observations: signal=nodefs.available, available: 28144320Ki, capacity: 51340768Ki, time: 2020-03-04 15:31:02.972779138 +0000 UTC m=+183.120598420
Mar 04 15:31:02 master2 kubelet[25193]: I0304 15:31:02.985520   25193 eviction_manager.go:320] eviction manager: no resources are starved
Mar 04 15:31:08 master2 kubelet[25193]: I0304 15:31:08.798836   25193 setters.go:77] Using node IP: "192.168.1.68"
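
The kubelet log above only shows cAdvisor factory and eviction-manager housekeeping; there is no line about an nginx Pod being added or started, which matches the fact that no Pod object exists. To confirm that nothing ever reached a kubelet, its log could be filtered for the deployment name (a sketch):

journalctl -u kubelet | grep -i nginx | tail -n 20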

Here kube-apiserver.pem and kube-apiserver-key.pem are the certificate and private key used for the nginx listening port, admin.pem is the kubectl certificate, and admin-key.pem is the kubectl private key; the same files are used in the proxy_ssl-related part of the nginx configuration. kubectl itself works fine, and creating the Deployment object returns 200 as expected, but the Pods are never actually created. I would greatly appreciate any help in solving this problem.
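
As a final sanity check on the certificates themselves, both leaf certificates can be verified against the same CA, and the subject of admin.pem shows which Kubernetes user/group the proxy (and kubectl) authenticates as, which is what RBAC ultimately evaluates (a sketch; paths as in the nginx configuration above):

openssl verify -CAfile /etc/nginx/ssl/ca.pem /etc/nginx/ssl/master/kube-apiserver.pem /etc/nginx/ssl/admin.pem
openssl x509 -in /etc/nginx/ssl/admin.pem -noout -subject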
