We deployed a cluster with kubeadm (1 master, 4 worker nodes).
$ kubectl describe node worker1
Name: worker1
Roles: <none>
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/hostname=worker1
role=slave1
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Tue, 24 Sep 2019 14:15:42 +0330
Taints: node.kubernetes.io/disk-pressure:NoSchedule
Unschedulable: false
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
NetworkUnavailable False Tue, 24 Sep 2019 14:16:19 +0330 Tue, 24 Sep 2019 14:16:19 +0330 WeaveIsUp Weave pod has set this
OutOfDisk False Mon, 07 Oct 2019 15:35:53 +0330 Sun, 06 Oct 2019 02:21:55 +0330 KubeletHasSufficientDisk kubelet has sufficient disk space available
MemoryPressure False Mon, 07 Oct 2019 15:35:53 +0330 Sun, 06 Oct 2019 02:21:55 +0330 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure True Mon, 07 Oct 2019 15:35:53 +0330 Mon, 07 Oct 2019 13:58:23 +0330 KubeletHasDiskPressure kubelet has disk pressure
PIDPressure False Mon, 07 Oct 2019 15:35:53 +0330 Tue, 24 Sep 2019 14:15:42 +0330 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Mon, 07 Oct 2019 15:35:53 +0330 Sun, 06 Oct 2019 02:21:55 +0330 KubeletReady kubelet is posting ready status. AppArmor enabled
Addresses:
InternalIP: 192.168.88.206
Hostname: worker1
Capacity:
attachable-volumes-azure-disk: 16
cpu: 4
ephemeral-storage: 19525500Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 16432464Ki
pods: 110
Allocatable:
attachable-volumes-azure-disk: 16
cpu: 4
ephemeral-storage: 17994700771
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 16330064Ki
pods: 110
System Info:
Machine ID: 2fc8f9eejgh5274kg1ab3f5b6570a8
System UUID: 52454D5843-391B-5454-BC35-E0EC5454D19A
Boot ID: 5454514e-4e5f-4e46-af9b-2809f394e06f
Kernel Version: 4.4.0-116-generic
OS Image: Ubuntu 16.04.4 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://17.3.2
Kubelet Version: v1.12.1
Kube-Proxy Version: v1.12.1
Non-terminated Pods: (0 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits
--------- ---- ------------ ---------- --------------- -------------
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 0 (0%) 0 (0%)
memory 0 (0%) 0 (0%)
attachable-volumes-azure-disk 0 0
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 45m kube-proxy, worker1 Starting kube-proxy.
Normal Starting 23m kube-proxy, worker1 Starting kube-proxy.
Warning EvictionThresholdMet 2m29s (x502 over 5d5h) kubelet, worker1 Attempting to reclaim ephemeral-storage
Normal Starting 75s kube-proxy, worker1 Starting kube-proxy.
As the description of worker1 shows, the node is under disk pressure (ephemeral-storage capacity: 19525500Ki, i.e. only the root partition). We mounted an additional hard disk at /dev/sdb1.
On worker1:
$ df -h
Filesystem Size Used Avail Use% Mounted on
udev 7.9G 0 7.9G 0% /dev
tmpfs 1.6G 163M 1.5G 11% /run
/dev/sda1 19G 16G 2.4G 87% /
tmpfs 7.9G 5.1M 7.9G 1% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 7.9G 0 7.9G 0% /sys/fs/cgroup
/dev/sdb1 99G 61M 94G 1% /data
tmpfs 1.6G 0 1.6G 0% /run/user/1003
But the problem still persists. How can I tell the kubelet to include this mount point in worker1's ephemeral storage? More generally, how can we increase a node's ephemeral storage in a Kubernetes cluster?
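One approach we are considering (a sketch only, assuming the kubelet keeps its ephemeral data under the default /var/lib/kubelet and Docker under the default /var/lib/docker; neither path has been confirmed on this node) is to drain the node, move those directories onto the new disk, and bind-mount them back:

```shell
# From the master: drain the node so running pods are rescheduled.
kubectl drain worker1 --ignore-daemonsets --delete-local-data

# On worker1: stop the services that write to the old partition.
sudo systemctl stop kubelet docker

# Copy the existing data onto the new disk (/data is on /dev/sdb1).
sudo mkdir -p /data/kubelet /data/docker
sudo rsync -a /var/lib/kubelet/ /data/kubelet/
sudo rsync -a /var/lib/docker/ /data/docker/

# Bind-mount the new locations over the old paths; the same entries
# would need to go into /etc/fstab to survive a reboot.
sudo mount --bind /data/kubelet /var/lib/kubelet
sudo mount --bind /data/docker /var/lib/docker

sudo systemctl start docker kubelet

# From the master: make the node schedulable again.
kubectl uncordon worker1
```

The idea is that the kubelet reports ephemeral-storage capacity from the filesystem backing its root directory, so after the bind mount it should see /dev/sdb1 instead of the nearly full /dev/sda1. An alternative might be pointing the kubelet at the new disk directly via its `--root-dir` flag, but would either of these actually update the node's ephemeral-storage capacity?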