I set up my cluster with kubeadm. At the last step I ran kubeadm init --config kubeadm.conf --v=5 and got an error about the clusterIP value. Here is part of the output:
I0220 00:16:27.625920 31630 clusterinfo.go:79] creating the RBAC rules for exposing the cluster-info ConfigMap in the kube-public namespace
I0220 00:16:27.947941 31630 kubeletfinalize.go:88] [kubelet-finalize] Assuming that kubelet client certificate rotation is enabled: found "/var/lib/kubelet/pki/kubelet-client-current.pem"
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0220 00:16:27.949398 31630 kubeletfinalize.go:132] [kubelet-finalize] Restarting the kubelet to enable client certificate rotation
[addons]: Migrating CoreDNS Corefile
I0220 00:16:28.447420 31630 dns.go:381] the CoreDNS configuration has been migrated and applied: .:53 {
    errors
    health {
        lameduck 5s
    }
    ready
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        fallthrough in-addr.arpa ip6.arpa
        ttl 30
    }
    prometheus :9153
    forward . /etc/resolv.conf
    cache 30
    loop
    reload
    loadbalance
}
.
I0220 00:16:28.447465 31630 dns.go:382] the old migration has been saved in the CoreDNS ConfigMap under the name [Corefile-backup]
I0220 00:16:28.447486 31630 dns.go:383] The changes in the new CoreDNS Configuration are as follows:
Service "kube-dns" is invalid: spec.clusterIP: Invalid value: "10.10.0.10": field is immutable
unable to create/update the DNS service
k8s.io/kubernetes/cmd/kubeadm/app/phases/addons/dns.createDNSService
/workspace/anago-v1.17.0-rc.2.10+70132b0f130acc/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/phases/addons/dns/dns.go:323
k8s.io/kubernetes/cmd/kubeadm/app/phases/addons/dns.createCoreDNSAddon
/workspace/anago-v1.17.0-rc.2.10+70132b0f130acc/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/phases/addons/dns/dns.go:305
k8s.io/kubernetes/cmd/kubeadm/app/phases/addons/dns.coreDNSAddon
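My reading of the "field is immutable" message is that a kube-dns Service already exists in this cluster with a different clusterIP, so kubeadm cannot change it to 10.10.0.10 (this is my assumption, the log does not say so directly). To see what is already stored, I inspect the existing Service like this:
# print only the clusterIP that kube-dns currently has
kubectl -n kube-system get svc kube-dns -o jsonpath='{.spec.clusterIP}{"\n"}'
# or dump the whole object to compare it with what kubeadm tries to apply
kubectl -n kube-system get svc kube-dns -o yaml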
And my configuration file looks like this:
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.16.5.151
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: master02
  # taints:
  # - effect: NoSchedule
  #   key: node-role.kubernetes.io/master
---
apiServer:
  certSANs:
  - "172.16.5.150"
  - "172.16.5.151"
  - "172.16.5.152"
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  external:
    endpoints:
    - "https://172.16.5.150:2379"
    - "https://172.16.5.151:2379"
    - "https://172.16.5.152:2379"
    caFile: /etc/k8s/pki/ca.pem
    certFile: /etc/k8s/pki/etcd.pem
    keyFile: /etc/k8s/pki/etcd.key
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.17.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.10.0.0/16
  podSubnet: 192.168.0.0/16
scheduler: {}
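Before touching the cluster again, I validate this file with a dry run, which parses the config and prints everything kubeadm would apply without actually applying it (kubeadm.conf is the same file passed to the failing init above):
kubeadm init --config kubeadm.conf --dry-run --v=5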
I checked the kube-apiserver.yaml generated by kubeadm. Its --service-cluster-ip-range=10.10.0.0/16 setting does contain 10.10.0.10, as you can see below:
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    component: kube-apiserver
    tier: control-plane
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-apiserver
    - --advertise-address=172.16.5.151
    - --allow-privileged=true
    - --authorization-mode=Node,RBAC
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
    - --enable-admission-plugins=NodeRestriction
    - --enable-bootstrap-token-auth=true
    - --etcd-cafile=/etc/k8s/pki/ca.pem
    - --etcd-certfile=/etc/k8s/pki/etcd.pem
    - --etcd-keyfile=/etc/k8s/pki/etcd.key
    - --etcd-servers=https://172.16.5.150:2379,https://172.16.5.151:2379,https://172.16.5.152:2379
    - --insecure-port=0
    - --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
    - --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
    - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
    - --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
    - --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
    - --requestheader-allowed-names=front-proxy-client
    - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
    - --requestheader-extra-headers-prefix=X-Remote-Extra-
    - --requestheader-group-headers=X-Remote-Group
    - --requestheader-username-headers=X-Remote-User
    - --secure-port=6443
    - --service-account-key-file=/etc/kubernetes/pki/sa.pub
    - --service-cluster-ip-range=10.10.0.0/16
    - --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
    - --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
    image: registry.aliyuncs.com/google_containers/kube-apiserver:v1.17.0
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 172.16.5.151
        path: /healthz
        port: 6443
        scheme: HTTPS
      initialDelaySeconds: 15
      timeoutSeconds: 15
    name: kube-apiserver
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - mountPath: /etc/ssl/certs
      name: ca-certs
      readOnly: true
    - mountPath: /etc/pki
      name: etc-pki
      readOnly: true
    - mountPath: /etc/k8s/pki
      name: etcd-certs-0
      readOnly: true
    - mountPath: /etc/kubernetes/pki
      name: k8s-certs
      readOnly: true
  hostNetwork: true
  priorityClassName: system-cluster-critical
  volumes:
  - hostPath:
      path: /etc/ssl/certs
      type: DirectoryOrCreate
    name: ca-certs
  - hostPath:
      path: /etc/pki
      type: DirectoryOrCreate
    name: etc-pki
  - hostPath:
      path: /etc/k8s/pki
      type: DirectoryOrCreate
    name: etcd-certs-0
  - hostPath:
      path: /etc/kubernetes/pki
      type: DirectoryOrCreate
    name: k8s-certs
status: {}
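Since the flag looks correct in the static pod manifest, I also wanted to confirm which range the running API server actually enforces. A simple test (the Service name cidr-test and the address 10.10.0.20 are just my own picks) is to request an explicit clusterIP from 10.10.0.0/16 and see whether the API server accepts it:
# the API server rejects this if its effective --service-cluster-ip-range is a different subnet
kubectl create service clusterip cidr-test --tcp=80:80 --clusterip=10.10.0.20
kubectl get svc cidr-test -o wide
kubectl delete svc cidr-test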
So the whole service IP range is indeed set to 10.10.0.0/16 in the generated manifest. Strangely, when I run "kubectl get svc", the kubernetes Service shows a clusterIP that is not from this /16, and the change I made does not take effect. Does anyone know how to configure the service-cluster-ip-range, and how can I solve my problem?
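For completeness, this is how I compared the clusterIPs that are already allocated with the range from my config; as far as I understand, a Service keeps the clusterIP it was created with, which would explain why changing serviceSubnet alone does not move kube-dns to 10.10.0.10:
# list every Service together with the clusterIP it currently holds
kubectl get svc --all-namespaces -o custom-columns=NAMESPACE:.metadata.namespace,NAME:.metadata.name,CLUSTER-IP:.spec.clusterIP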