I currently have a Deployment and a Service running fine on GKE. My problem is that I would like to "bind" my external IP:port to a domain name (at OVH), for example:
http://www.example.com/api/grpc -> 12.345.67.89:8080
http://www.example.com/api/rest -> 12.345.67.89:8081
After a lot of searching I finally found that an Ingress could be my solution, so I updated my yaml to combine the three objects: Deployment, Service and Ingress.
Here is my yaml:
# Copyright 2016 Google Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License
# Use this file to deploy the container for the grpc-bookstore sample
# and the container for the Extensible Service Proxy (ESP) to
# Google Kubernetes Engine (GKE).
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: myservice
  labels:
    app: myservice
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myservice
  template:
    metadata:
      labels:
        app: myservice
    spec:
      containers:
      - name: myservice
        image: gcr.io/<project_id>/myservice:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
        - containerPort: 8081
---
apiVersion: v1
kind: Service
metadata:
  name: myservice
spec:
  type: NodePort
  selector:
    app: myservice
  ports:
  # Port that accepts gRPC and JSON/HTTP2 requests over HTTP.
  - port: 8080
    targetPort: 8080
    protocol: TCP
    name: grpc
  # Port that accepts gRPC and JSON/HTTP2 requests over HTTP.
  - port: 8081
    targetPort: 8081
    protocol: TCP
    name: rest
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myservice-ingress
spec:
  rules:
  - http:
      paths:
      - path: /grpc
        backend:
          serviceName: myservice
          servicePort: 8080
      - path: /rest
        backend:
          serviceName: myservice
          servicePort: 8081
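I apply the whole file in one go and then look up the external IP the Ingress gets assigned, which is the IP I intend to point the OVH A record for www.example.com at (the file name is just what I call it locally, and the jsonpath query is my own sketch):

kubectl apply -f myservice.yaml
# wait for the GCE load balancer to get an address, then:
kubectl get ingress myservice-ingress \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'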
Then I try to run a simple request against my REST API using http://www.example.com/api/rest/test with a JSON POST body containing my name. The API should return Hello %s, but instead I get either:
- default backend - 404
- 502 Server Error (The server encountered a temporary error and could not complete your request. Please try again in 30 seconds.)
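For completeness, the request I send looks roughly like this (the JSON field and value are just an illustration of my payload):

curl -i -X POST http://www.example.com/api/rest/test \
  -H 'Content-Type: application/json' \
  -d '{"name": "Emixam"}'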
I have no idea what the problem could be, since I followed the Google documentation.
Edit
I wrote http://www.example.com/api/rest in my example, but the following do not work either:
http://www.example.com/rest
http://12.345.67.89/rest
Update (March 19, 2020)
So I was able to move forward: my service (which was UNHEALTHY) is now HEALTHY, I can connect to it, curl my readinessProbe/livenessProbe endpoint and get a 200 OK.
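This is roughly how I check it (port-forwarding to the Deployment is just the way I reproduce it locally):

kubectl port-forward deployment/myservice 8081:8081
# in another terminal:
curl -i http://localhost:8081/health_check
# -> HTTP/1.1 200 OK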
The updated version of my yaml is as follows:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: myservice
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      run: myservice
  template:
    metadata:
      labels:
        run: myservice
    spec:
      containers:
      - name: myservice
        image: gcr.io/<project_id>/myservice:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
        - containerPort: 8081
        readinessProbe:
          httpGet:
            path: /health_check
            port: 8081
        livenessProbe:
          httpGet:
            path: /health_check
            port: 8081
---
apiVersion: v1
kind: Service
metadata:
  name: myservice
  namespace: default
spec:
  type: NodePort
  selector:
    run: myservice
  ports:
  # Port that accepts gRPC and JSON/HTTP2 requests over HTTP.
  - port: 8080
    targetPort: 8080
    protocol: TCP
    name: grpc
  # Port that accepts gRPC and JSON/HTTP2 requests over HTTP.
  - port: 8081
    targetPort: 8081
    protocol: TCP
    name: rest
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myservice-ingress
spec:
  backend:
    serviceName: myservice
    servicePort: 8081
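Since this version only sets spec.backend, my understanding is that every request hitting the Ingress IP should be forwarded to myservice:8081. To rule out the Service itself, one thing I can check is the NodePorts directly (just a sketch; the node IP and port below are placeholders I would read off kubectl get svc):

kubectl get svc myservice -o wide
# from a node or a debug pod inside the cluster:
# curl -i http://<node-internal-ip>:<nodePort-for-8081>/health_check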
kubectl describe pods
MacBook-Pro-de-Emixam23:~ emixam23$ kubectl describe pods
Name:               myservice-c57d64669-phrzr
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               gke-cluster-kuberne-default-pool-8b65afeb-qgcm/10.166.0.31
Start Time:         Thu, 19 Mar 2020 11:36:35 -0400
Labels:             pod-template-hash=c57d64669
                    run=myservice
Annotations:        kubernetes.io/limit-ranger: LimitRanger plugin set: cpu request for container myservice
Status:             Running
IP:                 10.4.2.28
Controlled By:      ReplicaSet/myservice-c57d64669
Containers:
  myservice:
    Container ID:   docker://3f9df91ec4e2631d85e0becdb8d1be64bf97fadb5a5b7049c7391eb8cfdf3eee
    Image:          gcr.io/<project_id>/myservice:latest
    Image ID:       docker-pullable://gcr.io/<project_id>/myservice@sha256:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
    Ports:          8080/TCP, 8081/TCP
    Host Ports:     0/TCP, 0/TCP
    State:          Running
      Started:      Thu, 19 Mar 2020 11:36:40 -0400
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:        100m
    Liveness:     http-get http://:8081/health_check delay=0s timeout=1s period=10s #success=1 #failure=3
    Readiness:    http-get http://:8081/health_check delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-6cppb (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  default-token-6cppb:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-6cppb
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age    From                                                     Message
  ----    ------     ----   ----                                                     -------
  Normal  Scheduled  8m36s  default-scheduler                                        Successfully assigned default/myservice-c57d64669-phrzr to gke-cluster-kuberne-default-pool-8b65afeb-qgcm
  Normal  Pulling    8m35s  kubelet, gke-cluster-kuberne-default-pool-8b65afeb-qgcm  Pulling image "gcr.io/<project_id>/myservice:latest"
  Normal  Pulled     8m32s  kubelet, gke-cluster-kuberne-default-pool-8b65afeb-qgcm  Successfully pulled image "gcr.io/<project_id>/myservice:latest"
  Normal  Created    8m31s  kubelet, gke-cluster-kuberne-default-pool-8b65afeb-qgcm  Created container myservice
  Normal  Started    8m31s  kubelet, gke-cluster-kuberne-default-pool-8b65afeb-qgcm  Started container myservice
kubectl describe ingress myservice-ingress
MacBook-Pro-de-Emixam23:~ emixam23$ kubectl describe ingress myservice-ingress
Name:             myservice-ingress
Namespace:        default
Address:          XX.XXX.XXX.XXX
Default backend:  myservice:8081 (10.4.2.28:8081)
Rules:
  Host  Path  Backends
  ----  ----  --------
  *     *     myservice:8081 (10.4.2.28:8081)
Annotations:
  ingress.kubernetes.io/backends:         {"k8s-be-31336--d1838223483f8e56":"HEALTHY"}
  ingress.kubernetes.io/forwarding-rule:  k8s-fw-default-myservice-ingress--d1838223483f8e0
  ingress.kubernetes.io/target-proxy:     k8s-tp-default-myservice-ingress--d1838223483f8e0
  ingress.kubernetes.io/url-map:          k8s-um-default-myservice-ingress--d1838223483f8e0
  kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{},"name":"myservice-ingress","namespace":"default"},"spec":{"backend":{"serviceName":"myservice","servicePort":8081}}}
Events:
  Type    Reason  Age  From                     Message
  ----    ------  ---  ----                     -------
  Normal  ADD     11m  loadbalancer-controller  default/myservice-ingress
  Normal  CREATE  11m  loadbalancer-controller  ip: XX.XXX.XXX.XXX
I don't see any errors, but I still get a 404 when I hit XX.XXX.XXX.XXX/health_check.
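The call is simply (the IP is the Ingress address from the describe output above):

curl -i http://XX.XXX.XXX.XXX/health_check
# -> 404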
Update (March 19, 2020) - 2
My Ingress now looks like this:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myservice-ingress
spec:
  rules:
  - http:
      paths:
      - path: /grpc/*
        backend:
          serviceName: myservice
          servicePort: 8080
      - path: /rest/*
        backend:
          serviceName: myservice
          servicePort: 8081
The /rest/* endpoint returns a 404, and I haven't tested gRPC yet. As for health, I now have 3 backends and one of them is UNHEALTHY, I don't know why (see the describe output below, and the gcloud sketch right after it):
MacBook-Pro-de-Emixam23:~ emixam23$ kubectl describe ingress myservice-ingress
Name:             myservice-ingress
Namespace:        default
Address:          XX.XXX.XXX.XXX
Default backend:  default-http-backend:80 (10.4.2.7:8080)
Rules:
  Host  Path     Backends
  ----  ----     --------
  *
        /grpc/*  myservice:8080 (10.4.1.23:8080)
        /rest/*  myservice:8081 (10.4.1.23:8081)
Annotations:
  kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{},"name":"myservice-ingress","namespace":"default"},"spec":{"rules":[{"http":{"paths":[{"backend":{"serviceName":"myservice","servicePort":8080},"path":"/grpc/*"},{"backend":{"serviceName":"myservice","servicePort":8081},"path":"/rest/*"}]}}]}}
  ingress.kubernetes.io/backends:         {"k8s-be-30181--d1838223483f8e56":"UNHEALTHY","k8s-be-30368--d1838223483f8e56":"HEALTHY","k8s-be-31613--d1838223483f8e56":"HEALTHY"}
  ingress.kubernetes.io/forwarding-rule:  k8s-fw-default-myservice-ingress--d1838223483f8e0
  ingress.kubernetes.io/target-proxy:     k8s-tp-default-myservice-ingress--d1838223483f8e0
  ingress.kubernetes.io/url-map:          k8s-um-default-myservice-ingress--d1838223483f8e0
Events:
  Type    Reason  Age  From                     Message
  ----    ------  ---  ----                     -------
  Normal  ADD     14m  loadbalancer-controller  default/myservice-ingress
  Normal  CREATE  13m  loadbalancer-controller  ip: XX.XXX.XXX.XXX
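To figure out which of the three backends is the UNHEALTHY one, my plan is to look at the load-balancer side with gcloud (a sketch; the backend name is taken from the ingress.kubernetes.io/backends annotation above):

gcloud compute backend-services list
gcloud compute health-checks list
# for the suspect backend:
gcloud compute backend-services get-health k8s-be-30181--d1838223483f8e56 --global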
Also see: https://cloud.google.com/kubernetes-engine/docs/tutorials/http-balancer#step_6_optional_serve_multiple_applications_on_a_load_balancer