I am using Kubernetes on CentOS 7 and want to create a persistent volume backed by GlusterFS, but I am running into an error that I cannot resolve. Here is everything I did to set it up.
1) All GlusterFS instances run on centos:7 via LXD:
$ lxc list
| gluster1 | RUNNING | 10.202.96.182 (eth0)
| gluster2 | RUNNING | 10.202.96.69 (eth0)
2) My GlusterFS volume looks like this:
$ gluster volume info
Volume Name: volk8s
Type: Replicate
Volume ID: c59fad43-04c6-4326-a22b-95a07f0ee493
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 10.202.96.182:/gluster/brick1
Brick2: 10.202.96.69:/gluster/brick1
Options Reconfigured:
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: on
performance.client-io-threads: off
3) Kubernetes configuration:
-->endPoints.yaml
apiVersion: v1
kind: Endpoints
metadata:
  name: gluster-cluster
subsets:
  - addresses:
      - ip: 10.202.96.182
    ports:
      - port: 1
        protocol: TCP
  - addresses:
      - ip: 10.202.96.69
    ports:
      - port: 1
        protocol: TCP
-->pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: gluster-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  glusterfs:
    endpoints: gluster-cluster
    path: /volk8s
    readOnly: false
  persistentVolumeReclaimPolicy: Delete
-->pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gluster-claim
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
--> pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: gluster-pod
  labels:
    name: gluster
spec:
  containers:
    - image: busybox
      name: gluster-pod
      command: ["sleep", "60000"]
      volumeMounts:
        - name: gluster-vol1
          mountPath: /usr/share/busybox
          readOnly: false
  volumes:
    - name: gluster-vol1
      persistentVolumeClaim:
        claimName: gluster-claim
$ kubectl get po,ep,pv,pvc
NAME READY STATUS RESTARTS AGE
pod/gluster-pod 0/1 ContainerCreating 0 81m
NAME ENDPOINTS AGE
endpoints/gluster-cluster 10.202.96.182:1,10.202.96.69:1 87m
endpoints/kubernetes 172.42.42.100:6443 7d3h
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/gluster-pv 1Gi RWX Delete Bound default/gluster-claim 85m
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/gluster-claim Bound gluster-pv 1Gi RWX 83m
glusterfs-fuse is installed on the server where Kubernetes is running:
$ glusterfs --version
glusterfs 7.4
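Since the mount in the events below is performed by the kubelet on the worker node, one way I verified the client on a given machine is a check like this (a minimal sketch; `mount -t glusterfs` dispatches to the `mount.glusterfs` helper shipped by glusterfs-fuse):

```shell
# Is the GlusterFS FUSE mount helper available on this node?
if command -v mount.glusterfs >/dev/null 2>&1; then
    echo "glusterfs client present"
else
    echo "glusterfs client missing"
fi
```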
**The error I am getting is:**
$ kubectl describe po gluster-pod
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedMount 16m (x16 over 75m) kubelet, kworker1.example.com Unable to attach or mount volumes: unmounted volumes=[gluster-vol1], unattached volumes=[gluster-vol1 default-token-5nntr]: timed out waiting for the condition
Warning FailedMount 100s (x45 over 74m) kubelet, kworker1.example.com (combined from similar events): MountVolume.SetUp failed for volume "gluster-pv" : mount failed: mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/548b47de-d657-4a43-9cc9-458dcb1cede1/volumes/kubernetes.io~glusterfs/gluster-pv --scope -- mount -t glusterfs -o auto_unmount,backup-volfile-servers=10.202.96.182:10.202.96.69,log-file=/var/lib/kubelet/plugins/kubernetes.io/glusterfs/gluster-pv/gluster-pod-glusterfs.log,log-level=ERROR 10.202.96.182:/volk8s /var/lib/kubelet/pods/548b47de-d657-4a43-9cc9-458dcb1cede1/volumes/kubernetes.io~glusterfs/gluster-pv
Output: Running scope as unit run-12661.scope.
mount: unknown filesystem type 'glusterfs'
, the following error information was pulled from the glusterfs log to help diagnose this issue: could not open log file for pod gluster-pod
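To narrow this down, the mount that kubelet attempts can be replayed by hand on the worker node (a sketch run as root; the mount point /tmp/volk8s-test is an arbitrary choice of mine, and the IPs and volume name come from the setup above):

```shell
# Replay kubelet's mount manually. If this also prints
# "unknown filesystem type 'glusterfs'", the glusterfs-fuse
# client is missing on this particular node.
mkdir -p /tmp/volk8s-test
mount -t glusterfs -o backup-volfile-servers=10.202.96.69 \
    10.202.96.182:/volk8s /tmp/volk8s-test 2>&1 || true
```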
If anyone spots something that might help, please let me know. Thanks.