Kubernetes tries to start Rails twice in the same pod - PullRequest
0 votes
/ 03 July 2018

I'm trying to get my dockerized Rails application running on Kubernetes hosted on GCP.

kubectl apply -f k8s/webshop.yml
kubectl get pods
NAME                      READY     STATUS             RESTARTS   AGE
shop-lcvxc   2/3       CrashLoopBackOff   23         1h

So far so good, except that the application won't start. Further investigation shows that it is trying to start Rails (Puma) twice. Any idea why this happens?

Logs

$ kubectl logs shop-lcvxc -c webshop

=> Booting Puma
=> Rails 5.2.0 application starting in development
=> Run `rails server -h` for more startup options
initialize PushNotifications
Puma starting in single mode...
* Version 3.11.4 (ruby 2.4.4-p296), codename: Love Song
* Min threads: 5, max threads: 5
* Environment: development
* Listening on tcp://0.0.0.0:3001
Exiting
/usr/local/bundle/gems/puma-3.11.4/lib/puma/binder.rb:270:in `initialize': 
Address already in use - bind(2) for "0.0.0.0" port 3001 (Errno::EADDRINUSE)

$ kubectl logs shop-lcvxc -c app

=> Booting Puma
=> Rails 5.2.0 application starting in development
=> Run `rails server -h` for more startup options
initialize PushNotifications
Puma starting in single mode...
* Version 3.11.4 (ruby 2.4.4-p296), codename: Love Song
* Min threads: 5, max threads: 5
* Environment: development
* Listening on tcp://0.0.0.0:3001
Use Ctrl-C to stop
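A quick way to see which containers the pod is actually running, and which image each one uses, is a `jsonpath` query against the pod spec (a diagnostic sketch; the pod name is taken from the `kubectl get pods` output above):

```shell
# Print one "name <tab> image" line per container in the crashing pod
kubectl get pod shop-lcvxc \
  -o jsonpath='{range .spec.containers[*]}{.name}{"\t"}{.image}{"\n"}{end}'
```

If two container names map to the same Rails image here, that explains the duplicate Puma boot.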

k8s/webshop.yml

# from https://cloud.google.com/sql/docs/postgres/connect-kubernetes-engine
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: webshop
  labels:
    app: webshop

spec:
  template:
    metadata:
      labels:
        app: webshop
    spec:
      containers:
        - name: app
          image: eu.gcr.io/company/webshop:latest
          ports:
            - containerPort: 3000
          # The following environment variables will contain the database host,
          # user and password to connect to the PostgreSQL instance.
          env:
            - name: POSTGRES_HOST
              value: 127.0.0.1:5432
            - name: POSTGRES_DB
              value: webshop-staging
            # [START cloudsql_secrets]
            - name: POSTGRES_USER
              valueFrom:
                secretKeyRef:
                  name: cloudsql-db-credentials
                  key: username
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: cloudsql-db-credentials
                  key: password
            # [END cloudsql_secrets]

        # [START proxy_container]
        - name: cloudsql-proxy
          image: gcr.io/cloudsql-docker/gce-proxy:1.11
          command: ["/cloud_sql_proxy",
                    "-instances=company:europe-west1:staging=tcp:5432",
                    "-credential_file=/secrets/cloudsql/credentials.json"]
          volumeMounts:
            - name: cloudsql-instance-credentials
              mountPath: /secrets/cloudsql
              readOnly: true
        # [END proxy_container]
      # [START volumes]
      volumes:
        - name: cloudsql-instance-credentials
          secret:
            secretName: cloudsql-instance-credentials
      # [END volumes]
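A side observation on the manifest above, separate from the crash: `containerPort: 3000` does not match the port Puma actually binds (3001, per the image's CMD). `containerPort` is largely informational, but keeping it aligned avoids confusion when wiring up a Service later. A sketch, assuming 3001 is the intended port:

```yaml
containers:
  - name: app
    image: eu.gcr.io/company/webshop:latest
    ports:
      - containerPort: 3001   # matches the "-p", "3001" in the image's CMD
```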

Dockerfile

FROM XX.dkr.ecr.eu-west-1.amazonaws.com/webshop-bundled:1.3

COPY Gemfile* /app/
COPY . /app/

WORKDIR /app

CMD ["/usr/local/bundle/bin/rails", "s", "-b", "0.0.0.0", "-p", "3001"]

^ That image is built FROM this one:

Dockerfile-bundled

FROM ruby:2.4-slim-jessie

RUN apt-get update
RUN apt-get install -y libpq-dev libgmp-dev libxml2-dev libxslt-dev
RUN apt-get install -y build-essential patch ruby-dev zlib1g-dev liblzma-dev

COPY Gemfile* /app/
WORKDIR /app
RUN bundle install

$ kubectl describe pod shop-lcvxc

Name:           webapp-5845b768f7-tflbv
Namespace:      default
Node:           gke-my-fam-default-pool-12d29bdf-c9t0/10.166.0.3
Start Time:     Fri, 06 Jul 2018 08:18:16 +0200
Labels:         pod-template-hash=1401632493
                run=webapp
Annotations:    kubernetes.io/limit-ranger=LimitRanger plugin set: cpu request for container webapp
Status:         Running
IP:             10.4.0.6
Controlled By:  ReplicaSet/webapp-5845b768f7
Containers:
  webapp:
    Container ID:   docker://889fbc56fc28
    Image:          eu.gcr.io/acme-my-fam/webapp:latest
    Image ID:       docker-pullable://eu.gcr.io/acme-my-fam/webapp@sha256:nnnn
    Port:           8080/TCP
    State:          Running
      Started:      Fri, 06 Jul 2018 08:18:43 +0200
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:        100m
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-bs796 (ro)
Conditions:
  Type           Status
  Initialized    True
  Ready          True
  PodScheduled   True
Volumes:
  default-token-bs796:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-bs796
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason                 Age   From                                                 Message
  ----    ------                 ----  ----                                                 -------
  Normal  Scheduled              10m   default-scheduler                                    Successfully assigned webapp-5845b768f7-tflbv to gke-my-fam-default-pool-12d29bdf-c9t0
  Normal  SuccessfulMountVolume  10m   kubelet, gke-my-fam-default-pool-12d29bdf-c9t0  MountVolume.SetUp succeeded for volume "default-token-bs796"
  Normal  Pulling                10m   kubelet, gke-my-fam-default-pool-12d29bdf-c9t0  pulling image "eu.gcr.io/acme-my-fam/webapp:latest"
  Normal  Pulled                 10m   kubelet, gke-my-fam-default-pool-12d29bdf-c9t0  Successfully pulled image "eu.gcr.io/acme-my-fam/webapp:latest"
  Normal  Created                10m   kubelet, gke-my-fam-default-pool-12d29bdf-c9t0  Created container
  Normal  Started                10m   kubelet, gke-my-fam-default-pool-12d29bdf-c9t0  Started container

1 Answer

0 votes
/ 03 July 2018

Containers within a pod share the network namespace, and therefore its ports. So you cannot have two processes listening on the same port.

It is rather odd that your deployment YAML specifies only the app container, yet in your question you also show logs from a webshop container?
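The symptom in the question's logs matches this exactly: within one network namespace, a second `bind(2)` on an already-taken port raises `Errno::EADDRINUSE`. A minimal stand-alone Ruby illustration (using an ephemeral port rather than 3001, so it runs anywhere):

```ruby
require 'socket'

# First listener takes an ephemeral port, standing in for the first Puma.
first = TCPServer.new('127.0.0.1', 0)
port  = first.addr[1]

error = nil
begin
  # A second listener on the same port fails, like the second Puma in the pod.
  TCPServer.new('127.0.0.1', port)
rescue Errno::EADDRINUSE => e
  error = e
  puts "bind(2) failed: #{e.class}"
ensure
  first.close
end
```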

...