Background: 2 x AWS EKS clusters, Kubernetes version 1.14, platform version eks.9.
I'm following this guide to set up the "Shared control plane (multi-network)" deployment, and I got the errors below while running the "Setup cluster 2" step.
Any tips? Thanks!
$ istioctl manifest apply --context=$CTX_CLUSTER2 \
--set profile=remote \
--set values.gateways.enabled=true \
--set values.security.selfSigned=false \
--set values.global.createRemoteSvcEndpoints=true \
--set values.global.remotePilotCreateSvcEndpoint=true \
--set values.global.remotePilotAddress=${LOCAL_GW_ADDR} \
--set values.global.remotePolicyAddress=${LOCAL_GW_ADDR} \
--set values.global.remoteTelemetryAddress=${LOCAL_GW_ADDR} \
--set values.gateways.istio-ingressgateway.env.ISTIO_META_NETWORK="network2" \
--set values.global.network="network2" \
--set values.global.multiCluster.clusterName=${CLUSTER_NAME}
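For reference, CLUSTER_NAME and LOCAL_GW_ADDR were exported beforehand per the guide's earlier steps, roughly as below (a sketch: I read .hostname rather than the guide's .ip from the gateway service, since load balancers on EKS report a DNS name). How the kubectl contexts were set is shown at the end of the post.
$ export CLUSTER_NAME=cluster2
# Public address of cluster1's istio-ingressgateway; on EKS the ELB
# shows up under .hostname in the service's load-balancer status:
$ export LOCAL_GW_ADDR=$(kubectl get --context=$CTX_CLUSTER1 \
    svc --selector=app=istio-ingressgateway -n istio-system \
    -o jsonpath='{.items[0].status.loadBalancer.ingress[0].hostname}')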
- Applying manifest for component Base...
2020-03-13T14:11:19.644688Z error installer error running kubectl: exit status 1
✘ Finished applying manifest for component Base.
- Applying manifest for component Pilot...
✔ Finished applying manifest for component Pilot.
2020-03-13T14:11:29.235035Z error installer Failed to wait for resource: resources not ready after 10m0s: services "istio-pilot" not found
- Applying manifest for component IngressGateways...
✔ Finished applying manifest for component IngressGateways.
Component Base - manifest apply returned the following errors:
Error: error running kubectl: exit status 1
✘ Errors were logged during apply operation. Please check component installation logs above.
Error: failed to apply manifests: errors were logged during apply operation
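In case it helps with debugging: I assume the underlying kubectl failure from the Base component can be surfaced by rendering the manifest to a file and applying it by hand, sketched below (cluster2-remote.yaml is just an illustrative filename; the generate call should repeat all the --set flags from the apply above, abbreviated here).
$ istioctl manifest generate \
    --set profile=remote \
    --set values.global.remotePilotAddress=${LOCAL_GW_ADDR} \
    > cluster2-remote.yaml
# Applying manually makes kubectl print the actual error:
$ kubectl apply --context=$CTX_CLUSTER2 -f cluster2-remote.yaml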
PS: here is how I created the 2 x EKS clusters, i.e. cluster1 and cluster2:
$ eksctl create cluster \
--name cluster1 \
--region us-east-1 \
--nodegroup-name standard-workers \
--node-type t3.medium \
--nodes 2 \
--nodes-min 1 \
--nodes-max 3 \
--ssh-access \
--ssh-public-key eks \
--managed
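The kubeconfig contexts used as $CTX_CLUSTER1 and $CTX_CLUSTER2 are the ones eksctl wrote when creating the clusters; I listed and exported them roughly like this (placeholder values, the actual context names will differ):
$ kubectl config get-contexts -o name
$ export CTX_CLUSTER1=<context name of cluster1>
$ export CTX_CLUSTER2=<context name of cluster2>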