Configuration:
pcs cluster auth a1 a2 s1 g1
pcs cluster setup --start --enable --name pcs_cluster a1 a2 s1 g1
pcs stonith create pn7320 fence_altusen_snmp ipaddr=10.25.255.XXX community=XXX pcmk_host_list="s1,s2,s3,s4,s5,s6,s7,s8,s9,s10,a1,a2,g1,g2" \
    pcmk_host_map="s1:S1P1;s2:S1P2;s3:S1P3;s4:S1P4;s5:S1P5;s6:S1P6;s7:S1P7;s8:S1P8;s9:S1P13;s10:S1P14;a1:S1P15;a2:S1P16;g1:S1P17;g2:S1P18" \
    op monitor interval=60s
pcs stonith create ipmi_s1 fence_ipmilan ipaddr=10.25.255.1 login=ADMIN passwd=XXX pcmk_host_list=s1 op monitor interval=60s
pcs stonith level add 1 s1 ipmi_s1
pcs stonith level add 2 s1 pn7320
pcs stonith level add 1 a1 pn7320
pcs stonith level add 1 a2 pn7320
pcs stonith level add 1 g1 pn7320
pcs constraint location ipmi_s1 avoids s1=INFINITY
pcs property set no-quorum-policy=freeze
pcs resource create dlm ocf:pacemaker:controld op monitor interval=30s on-fail=fence clone interleave=true ordered=true
pcs resource create clvmd ocf:heartbeat:clvm op monitor interval=30s on-fail=fence clone interleave=true ordered=true
pcs constraint order start dlm-clone then clvmd-clone
pcs constraint colocation add clvmd-clone with dlm-clone
pcs resource create gfs2home Filesystem device=/dev/GFS2HOME/HOME directory=/home fstype=gfs2 "options=relatime" op monitor interval=10s on-fail=fence clone interleave=true
pcs constraint order start clvmd-clone then gfs2home-clone
pcs constraint colocation add gfs2home-clone with clvmd-clone
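The gfs2home resource assumes the filesystem already exists on a clustered LVM volume. A minimal preparation sketch, assuming a shared block device and four mounting nodes (the device path /dev/sdX, the journal count and the lock-table name "home" are placeholders; the cluster name pcs_cluster and the GFS2HOME/HOME names come from the commands above):
# run on one node only; /dev/sdX is a placeholder for the shared block device
vgcreate -cy GFS2HOME /dev/sdX
lvcreate -n HOME -l 100%FREE GFS2HOME
# lock table is <cluster_name>:<fsname>; one journal per node that mounts /home (4 assumed here)
mkfs.gfs2 -p lock_dlm -t pcs_cluster:home -j 4 /dev/GFS2HOME/HOME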
Clustered IP for a-nodes for load balancing and failover:
pcs resource create a-nodes-IP ocf:heartbeat:IPaddr2 ip=10.25.240.250 cidr_netmask=32 op monitor interval=10s
pcs resource clone a-nodes-IP clone-max=2 clone-node-max=2 globally-unique=true
pcs resource update a-nodes-IP clusterip_hash=sourceip
pcs constraint location a-nodes-IP-clone prefers a1=200 a2=200
pcs constraint location a-nodes-IP-clone avoids s1 s2 g1
pcs resource update a-nodes-IP-clone resource-stickiness=-1
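With globally-unique=true and clusterip_hash, IPaddr2 typically shares the address via the iptables CLUSTERIP target, so both a-nodes answer for 10.25.240.250 and incoming sources are hashed between them. A quick sanity check, assuming that implementation (exact output varies by version):
# run on a1 and a2 once the clone is started
pcs status resources | grep a-nodes-IP      # two clone instances, one per a-node
ip addr show | grep 10.25.240.250           # the shared address is configured locally
iptables -n -L INPUT | grep -i clusterip    # hash-based CLUSTERIP rule for source-IP balancing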
DRBD device for /opt:
yum -y install drbd-pacemaker drbd-udev
cat <<END > /etc/drbd.d/a-nodes-opt.conf
resource drbdopt {
  protocol C;
  meta-disk internal;
  device /dev/drbd1;
  syncer {
    verify-alg sha1;
  }
  net {
    allow-two-primaries;
  }
  on a1.cluster.univ.kiev.ua {
    disk "/dev/disk/by-path/pci-0000:04:02.0-scsi-0:0:3:0";
    address 10.25.240.251:7789;
  }
  on a2.cluster.univ.kiev.ua {
    disk "/dev/disk/by-path/pci-0000:04:02.0-scsi-0:0:3:0";
    address 10.25.240.252:7789;
  }
}
END
drbdadm create-md drbdopt
modprobe drbd
drbdadm up drbdopt
drbdadm primary --force drbdopt
cat /proc/drbd
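The commands above bring the device up on a1 only; a2 needs the same config file plus create-md/up (primary --force is run once, on the node whose data should be kept). After that the device can be handed to Pacemaker; the sketch below is one possible dual-primary integration, and the resource names drbdopt_res/drbdopt-master are assumptions, not part of the configuration above:
# on a2, with the same /etc/drbd.d/a-nodes-opt.conf in place
drbdadm create-md drbdopt
drbdadm up drbdopt
# possible Pacemaker resource for the dual-primary device (allow-two-primaries is already set)
pcs resource create drbdopt_res ocf:linbit:drbd drbd_resource=drbdopt op monitor interval=30s
pcs resource master drbdopt-master drbdopt_res master-max=2 master-node-max=1 clone-max=2 clone-node-max=1 notify=true
pcs constraint location drbdopt-master avoids s1 g1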
Status check commands in addition to pcs status:
corosync-cmapctl | grep members
corosync-cfgtool -s
crm_verify -L -V
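A few more checks that may be handy on this particular stack (availability depends on the installed packages):
corosync-quorumtool -s   # quorum state; relevant with no-quorum-policy=freeze
pcs status --full        # detailed resource, clone and fencing state
dlm_tool ls              # DLM lockspaces; clvmd and the gfs2 mount should be listed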
Useful for debugging:
crm_simulate -sL (resource location weights)
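For example, to look at the placement scores of a single resource (output format differs between Pacemaker versions):
crm_simulate -sL | grep a-nodes-IP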
Logs:
- check /var/log/messages on every node
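A hedged example of pulling the relevant entries (the grep pattern is an assumption; journalctl applies only if the nodes run systemd):
grep -iE 'stonith|fence|error' /var/log/messages
journalctl -u corosync -u pacemaker --since today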