Pacemaker + DRBD problem (slave -> master automatically) - PullRequest
0 votes
/ 24 September 2019

I am new to Pacemaker and DRBD.

My configuration


nodes: vm-test06, vm-test07
OS: CentOS 7.6.1810
pcs: pcs-0.9.165-6.el7.centos.x86_64
drbd84: drbd84-utils-9.6.0-1.el7.elrepo.x86_64, kmod-drbd84-8.4.11-1.1.el7_6.elrepo.x86_64

[pcs status]

[root@vm-test07 ~]# pcs status
Cluster name: mycluster
Stack: corosync
Current DC: vm-test07 (version 1.1.19-8.el7_6.4-c3c624ea3d) - partition with quorum
Last updated: Tue Sep 24 15:41:50 2019
Last change: Tue Sep 24 15:25:21 2019 by root via cibadmin on vm-test07

2 nodes configured
5 resources configured

Online: [ vm-test06 vm-test07 ]

Full list of resources:

 VirtIP (ocf::heartbeat:IPaddr2):       Started vm-test07
 Httpd  (ocf::heartbeat:apache):        Started vm-test07
 Master/Slave Set: DrbdDataClone [DrbdData]
     Masters: [ vm-test07 ]
     Slaves: [ vm-test06 ]
 DrbdFS (ocf::heartbeat:Filesystem):    Started vm-test07

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled

[pcs constraints]

[root@vm-test07 ~]# pcs constraint --full
Location Constraints:
Ordering Constraints:
  start VirtIP then start Httpd (kind:Mandatory) (id:order-VirtIP-Httpd-mandatory)
  promote DrbdDataClone then start DrbdFS (kind:Mandatory) (id:order-DrbdDataClone-DrbdFS-mandatory)
  start DrbdFS then start Httpd (kind:Mandatory) (id:order-DrbdFS-Httpd-mandatory)
  start VirtIP then start DrbdDataClone (kind:Mandatory) (id:order-VirtIP-DrbdDataClone-mandatory)
Colocation Constraints:
  Httpd with VirtIP (score:INFINITY) (id:colocation-Httpd-VirtIP-INFINITY)
  Httpd with DrbdFS (score:INFINITY) (id:colocation-Httpd-DrbdFS-INFINITY)
  DrbdFS with DrbdDataClone (score:INFINITY) (with-rsc-role:Master) (id:colocation-DrbdFS-DrbdDataClone-INFINITY)
  DrbdFS with VirtIP (score:INFINITY) (id:colocation-DrbdFS-VirtIP-INFINITY)
Ticket Constraints:
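For reference, constraints like the ones listed above are normally created with `pcs constraint` commands. The following is a hedged reconstruction (resource names are taken from the output above; the exact commands used originally are not shown in the question):

```shell
# Ordering constraints (reconstructed from the "pcs constraint --full" output):
pcs constraint order start VirtIP then start Httpd
pcs constraint order promote DrbdDataClone then start DrbdFS
pcs constraint order start DrbdFS then start Httpd
pcs constraint order start VirtIP then start DrbdDataClone

# Colocation constraints:
pcs constraint colocation add Httpd with VirtIP INFINITY
pcs constraint colocation add Httpd with DrbdFS INFINITY
pcs constraint colocation add DrbdFS with master DrbdDataClone INFINITY
pcs constraint colocation add DrbdFS with VirtIP INFINITY
```

These commands require a running cluster and are shown only to make the constraint set above reproducible.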

1. When I stop the master node (vm-test07)

[root@vm-test07 ~]# pcs cluster stop vm-test07

[root@vm-test06 ~]# pcs status
Cluster name: mycluster
Stack: corosync
Current DC: vm-test06 (version 1.1.19-8.el7_6.4-c3c624ea3d) - partition with quorum
Last updated: Tue Sep 24 15:42:26 2019
Last change: Tue Sep 24 15:25:21 2019 by root via cibadmin on vm-test07

2 nodes configured
5 resources configured

Online: [ vm-test06 ]
OFFLINE: [ vm-test07 ]

Full list of resources:

 VirtIP (ocf::heartbeat:IPaddr2):       Started vm-test06
 Httpd  (ocf::heartbeat:apache):        Stopped
 Master/Slave Set: DrbdDataClone [DrbdData]
     Slaves: [ vm-test06 ]
     Stopped: [ vm-test07 ]
 DrbdFS (ocf::heartbeat:Filesystem):    Stopped

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled


[root@vm-test06 ~]# drbdadm status
test role:Secondary
  disk:UpToDate
  peer connection:Connecting

[root@vm-test07 ~]# drbdadm status
test: No such resource
Command 'drbdsetup-84 status test' terminated with exit code 10
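The "No such resource" message on vm-test07 is expected here: `pcs cluster stop` demotes and stops the DRBD resource on that node, so the `test` resource is no longer configured in the kernel. If one wanted to inspect DRBD on the stopped node outside Pacemaker's control, a diagnostic sketch (using the resource name `test` from the output above) might be:

```shell
# On vm-test07, after the cluster has been stopped there.
# Bring the DRBD resource up manually, outside Pacemaker (diagnostic only):
drbdadm up test
drbdadm status test

# Take it down again before restarting the cluster,
# so Pacemaker regains exclusive control of the resource:
drbdadm down test
```

These commands require the DRBD kernel module and the `test` resource definition to be present on the node.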

What I want to know

  1. When I stop the master node, how can I make the slave node become the master node automatically?
    e.g.)
Master/Slave Set: DrbdDataClone [DrbdData]
Masters: [ vm-test06 ] (not Slaves: [ vm-test06 ])

When I run pcs resource debug-start DrbdData, I can make the slave a master. But that is not an automatic method, so I want to know how to make the promotion happen automatically.
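One way to investigate why the surviving secondary is not promoted automatically is to look at the cluster's placement and promotion scores, and to rule out stale failures on the DRBD resource. This is a diagnostic sketch, not a confirmed fix; it assumes the resource names from the status output above:

```shell
# On the surviving node (vm-test06):
# Show how Pacemaker would allocate and promote resources, with scores:
crm_simulate -sL | grep -i drbd

# Check whether a recorded failure is blocking the DRBD resource:
pcs resource failcount show DrbdData

# Clear any stale failcounts / operation history for the resource:
pcs resource cleanup DrbdData
```

A promotion score of -INFINITY for the surviving node in the `crm_simulate` output would point at a constraint or failure preventing the slave-to-master transition.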

Please help me with this problem. Thanks.
