cert-manager controller and ingress-shim errors
15 November 2018

I'm trying to get this working on GKE, but unfortunately without success, and I can't figure out why. I'm using WebSockets and everything works as it should (DNS points to the correct IP), but the problem is that I can't get wss/SSL to work.

My architecture consists of a LoadBalancer Service and a simple Ubuntu pod with a Node.js server. I added an additional Service for each pod. Here are the configs:

LB

```yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp-lb
spec:
  type: LoadBalancer
  loadBalancerIP: XXX.XXX.XXX.XXX
  ports:
  - port: 443
    name: https
  - port: 80
    name: http
  selector:
    app: myapp-svc
```

Service

```yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp-svc
  labels:
    app: myapp
spec:
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443 
  - name: rtsp
    port: 554
    targetPort: 554   
  selector:
    app: myapp
```

App

```yaml
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: myapp
  labels:
    app: myapp
spec:
  template:
    metadata:
      labels:
        name: myapp
        app: myapp
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - image: gcr.io/xx-xx-124315/myapp:210ac195d-dirty
        name: myapp
        ports:
        - containerPort: 443
          hostPort: 443
        - containerPort: 80
          hostPort: 80
        - containerPort: 554
          hostPort: 554
        env:
          - name: POD_NAME
            valueFrom:
              fieldRef:
                fieldPath: metadata.name
          - name: POD_NAMESPACE
            valueFrom:
              fieldRef:
                fieldPath: metadata.namespace
        volumeMounts:
          - name: certdata
            mountPath: "/etc/certdata"
            readOnly: true        
        securityContext:
          capabilities:
              add:
              - ALL
      volumes:
        - name: certdata
          secret:
              secretName: myapp-tls
```

I mounted a volume with the myapp-tls secret, since building the wss server requires the .key & .cert files (`https.createServer({key: fs.readFileSync('keys/server.key'), cert: fs.readFileSync('keys/server.crt')}, this.app)`).
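As far as I understand, once the Certificate further down gets issued, cert-manager should populate the myapp-tls secret with tls.crt / tls.key entries (that key naming is my assumption based on the usual kubernetes.io/tls convention, not actual output), which the volume mount above would then expose as files under /etc/certdata. Roughly:

```yaml
# Sketch of the secret I expect cert-manager to create; the type and the
# tls.crt / tls.key key names are assumptions, and the placeholders stand
# for base64-encoded PEM data.
apiVersion: v1
kind: Secret
metadata:
  name: myapp-tls
  namespace: default
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded certificate>
  tls.key: <base64-encoded private key>
```

If that is the case, the Node.js server would read /etc/certdata/tls.crt and /etc/certdata/tls.key instead of keys/server.crt and keys/server.key.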

I installed cert-manager in the default ns using the static yamls. CRDs:

```yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: certificates.certmanager.k8s.io
  labels:
    app: cert-manager
spec:
  group: certmanager.k8s.io
  version: v1alpha1
  scope: Namespaced
  names:
    kind: Certificate
    plural: certificates
    shortNames:
    - certificates
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: clusterissuers.certmanager.k8s.io
  labels:
    app: cert-manager
spec:
  group: certmanager.k8s.io
  version: v1alpha1
  scope: Cluster
  names:
    kind: ClusterIssuer
    plural: clusterissuers
---

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: issuers.certmanager.k8s.io
  labels:
    app: cert-manager
spec:
  group: certmanager.k8s.io
  version: v1alpha1
  scope: Namespaced
  names:
    kind: Issuer
    plural: issuers
```

Deployment

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cert-manager
  namespace: default
  labels:
    app: cert-manager
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cert-manager
  template:
    metadata:
      labels:
        app: cert-manager
    spec:
      serviceAccountName: default
      containers:
      - name: mgr
        image: quay.io/jetstack/cert-manager-controller:v0.2.3
        imagePullPolicy: IfNotPresent
        args:
          - --cluster-resource-namespace=$(POD_NAMESPACE)
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        resources:
          requests:
            cpu: 10m
            memory: 32Mi
      - name: shim
        image: quay.io/jetstack/cert-manager-ingress-shim:v0.2.3
        imagePullPolicy: IfNotPresent
        args:
          - --default-issuer-name=$(POD_NAMESPACE)/ca-issuer
          - --default-issuer-kind=ClusterIssuer
        resources:
          requests:
            cpu: 10m
            memory: 32Mi
```

I created a secret for letsencrypt:

- `openssl genrsa -out ca.key 2048`
- `openssl req -x509 -new -nodes -key ca.key -subj "/CN=${myapp-tls}" -days 3650 -reqexts v3_req -extensions v3_ca -out ca.crt`
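Roughly, the resulting ca-key-pair secret (referenced by the ClusterIssuer below) would look like this as a manifest; this is only a sketch, and the kubernetes.io/tls type and the key names are assumptions, with the placeholders standing for the base64-encoded file contents:

```yaml
# Sketch of the ca-key-pair secret holding ca.crt / ca.key from above
# (type and key names assumed; placeholders stand for base64 data).
apiVersion: v1
kind: Secret
metadata:
  name: ca-key-pair
  namespace: default
type: kubernetes.io/tls
data:
  tls.crt: <base64 of ca.crt>
  tls.key: <base64 of ca.key>
```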

and added it to my ClusterIssuer:

```yaml
apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
  name: ca-issuer
  namespace: default
spec:
  acme:
    server: https://acme-v01.api.letsencrypt.org/directory
    email: ssl-certificate@mydomain.com
    privateKeySecretRef:
      name:  ca-key-pair
    http01: {}
```

and created a Certificate:

```yaml
apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
  name: myapp
  namespace: default
spec:
  secretName: myapp-tls
  issuerRef:
    name: ca-issuer
    kind: ClusterIssuer
  commonName: sub.myapp.com
  dnsNames:
  - sub.myapp.com
  acme:
    config:
    - http01:
        ingressClass: nginx
      domains:
      - sub.myapp.com
```

Now, if I check the logs of the cert-manager mgr pod, I get this output every 20 minutes:

```
09:46:03.159052       1 sync.go:242] Error preparing issuer for certificate: error waiting for key to be available for domain "sub.myapp.com": context deadline exceeded
E1115 09:46:03.169486       1 sync.go:190] [default/myapp] Error getting certificate 'camera-streaming-service-tls': secret "myapp-tls" not found
E1115 09:46:03.172931       1 controller.go:196] certificates controller: Re-queuing item "default/myapp" due to error processing: error waiting for key to be available for domain "sub.myapp.com": context deadline exceeded
I1115 09:46:03.172995       1 controller.go:187] certificates controller: syncing item 'default/myapp'
I1115 09:46:03.181199       1 sync.go:107] Error checking existing TLS certificate: secret "myapp-tls" not found
I1115 09:46:03.181254       1 sync.go:238] Preparing certificate with issuer
I1115 09:46:03.207899       1 prepare.go:239] Compare "" with "https://acme-v01.api.letsencrypt.org/acme/reg/45778309"
```

The same goes every 20 mins for the **cert-manager ingress-shim** pod:

```
1 controller.go:147] ingress-shim controller: syncing item 'default/cm-myapp-ugvem'
E1115 09:46:03.135397       1 controller.go:177] ingress 'default/cm-myapp-ugvem' in work queue no longer exists
I1115 09:46:03.135410       1 controller.go:161] ingress-shim controller: Finished processing work item "default/cm-myapp-ugvem"
I1115 09:46:04.058887       1 controller.go:147] ingress-shim controller: syncing item 'default/cm-myapp-zoulq'
I1115 09:46:04.059135       1 sync.go:41] Not syncing ingress default/cm-myapp-zoulq as it does not contain necessary annotations
I1115 09:46:04.059248       1 controller.go:161] ingress-shim controller: Finished processing work item "default/cm-myapp-zoulq"
```

I noticed that every 20mins a new ingress, service, and pod get created, namely:

- ingress: `cm-myapp-zoulq   sub.myapp.com    80        11m`
- svc: `cm-myapp-kvzup   NodePort       10.39.242.227   <none>           8089:31006/TCP               11m`
- pod: `cm-myapp-bbeda   1/1       Running   0          10m`

AFAIK I should be getting a myapp-tls secret with the .key and .ca so I can add those to my Node.js server setup, and once I fix those errors on ingress-shim and mgr, everything should work.
I cannot figure out what the problem is. Please help, and thanks.

**Environment details:**
- Kubernetes version (e.g. v1.10.2):
```
kubectl version
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.7", GitCommit:"dd5e1a2978fd0b97d9b78e1564398aeea7e7fe92", GitTreeState:"clean", BuildDate:"2018-04-19T00:05:56Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"9+", GitVersion:"v1.9.7-gke.7", GitCommit:"9b635efce81582e1da13b35a7aa539c0ccb32987", GitTreeState:"clean", BuildDate:"2018-11-02T23:07:38Z", GoVersion:"go1.9.3b4", Compiler:"gc", Platform:"linux/amd64"}
```

- Cloud-provider/provisioner (e.g. GKE, kops AWS, etc): GKE

/kind bug
...