I am trying to implement an HA NFS cluster architecture with PCS on Rocky Linux 8.5. The current kernel- and NFS-related package versions and the PCS configuration are shown in detail below.
I cannot get the NFS daemons (rpc.statd, rpc.mountd, etc.) to bind to a specific IP address. Whatever I do, the services remain bound to 0.0.0.0:$default-ports.
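What I am effectively after is the equivalent of starting the kernel nfsd with an explicit bind address and then confirming the listener, roughly like this (a sketch only; 10.1.31.100 is the virtual IP of the existing group):

# Sketch: bind nfsd to the group's virtual IP only, then check which addresses are listening
rpc.nfsd -H 10.1.31.100 8
ss -tlnp | grep -E ':2049|:20048'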
I would like to start a separate "ocf:heartbeat:nfsserver" with a dedicated VirtualIP resource for each NFS resource group (block). When I declare a second NFS share resource group on the same cluster node (I plan to run more NFS shares than the cluster has nodes), the "ocf:heartbeat:nfsserver" resources block each other: one wins and the other ends up in the "blocked" state.
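For clarity, this is roughly the second group I am trying to add (a sketch; the SHARE2 names, the 10.1.31.101 address and the <second-lun> device are placeholders):

# Sketch of the second, independent NFS resource group (placeholder names/IP/device)
pcs resource create ROOT-FS_SHARE2 ocf:heartbeat:Filesystem \
    device=/dev/disk/by-id/<second-lun> directory=/srv/block/SHARE2 fstype=xfs --group Group_SHARE2
pcs resource create NFSD_SHARE2 ocf:heartbeat:nfsserver \
    nfs_ip=10.1.31.101 nfs_no_notify=true nfs_shared_infodir=/srv/block/SHARE2/nfsinfo/ --group Group_SHARE2
pcs resource create NFS_SHARE2 ocf:heartbeat:exportfs \
    clientspec=10.1.31.0/255.255.255.0 directory=/srv/block/SHARE2/SHARE2 fsid=1 options=rw,sync,no_root_squash --group Group_SHARE2
pcs resource create NFS-IP_SHARE2 ocf:heartbeat:IPaddr2 \
    ip=10.1.31.101 cidr_netmask=24 nic=team31 --group Group_SHARE2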
[root@node1 ~]# uname -a
Linux node1.local 4.18.0-348.12.2.el8_5.x86_64 #1 SMP Wed Jan 19 17:53:40 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
[root@node1 ~]# rpm -qa nfs* rpc*
nfs-utils-2.3.3-46.el8.x86_64
rpcbind-1.2.5-8.el8.x86_64
[root@node1 ~]#
PCS cluster status
[root@node1 ~]# pcs status
Cluster name: cluster01
Cluster Summary:
* Stack: corosync
* Current DC: node5 (version 2.1.0-8.el8-7c3f660707) - partition with quorum
* Last updated: Thu Mar 24 13:10:09 2022
* Last change: Thu Mar 24 13:03:48 2022 by root via crm_resource on node3
* 5 nodes configured
* 5 resource instances configured
Node List:
* Online: [ node1 node2 node3 node4 node5 ]
Full List of Resources:
* Resource Group: Group_SHARE:
* ROOT-FS_SHARE (ocf::heartbeat:Filesystem): Started node2
* NFSD_SHARE (ocf::heartbeat:nfsserver): Started node2
* NFS_SHARE (ocf::heartbeat:exportfs): Started node2
* NFS-IP_SHARE (ocf::heartbeat:IPaddr2): Started node2
* NFS-NOTIFY_SHARE (ocf::heartbeat:nfsnotify): Started node2
Daemon Status:
corosync: active/enabled
pacemaker: active/enabled
pcsd: active/enabled
[root@node1 ~]#
PCS resource config output
[root@node2 ~]# pcs resource config
Group: Group_SHARE
Resource: ROOT-FS_SHARE (class=ocf provider=heartbeat type=Filesystem)
Attributes: device=/dev/disk/by-id/wwn-0x6001405ce6b7033688d497a91aa23547 directory=/srv/block/SHARE fstype=xfs
Operations: monitor interval=20s timeout=40s (ROOT-FS_SHARE-monitor-interval-20s)
start interval=0s timeout=60s (ROOT-FS_SHARE-start-interval-0s)
stop interval=0s timeout=60s (ROOT-FS_SHARE-stop-interval-0s)
Resource: NFSD_SHARE (class=ocf provider=heartbeat type=nfsserver)
Attributes: nfs_ip=10.1.31.100 nfs_no_notify=true nfs_shared_infodir=/srv/block/SHARE/nfsinfo/
Operations: monitor interval=10s timeout=20s (NFSD_SHARE-monitor-interval-10s)
start interval=0s timeout=40s (NFSD_SHARE-start-interval-0s)
stop interval=0s timeout=20s (NFSD_SHARE-stop-interval-0s)
Resource: NFS_SHARE (class=ocf provider=heartbeat type=exportfs)
Attributes: clientspec=10.1.31.0/255.255.255.0 directory=/srv/block/SHARE/SHARE fsid=0 options=rw,sync,no_root_squash
Operations: monitor interval=10s timeout=20s (NFS_SHARE-monitor-interval-10s)
start interval=0s timeout=40s (NFS_SHARE-start-interval-0s)
stop interval=0s timeout=120s (NFS_SHARE-stop-interval-0s)
Resource: NFS-IP_SHARE (class=ocf provider=heartbeat type=IPaddr2)
Attributes: cidr_netmask=24 ip=10.1.31.100 nic=team31
Operations: monitor interval=30s (NFS-IP_SHARE-monitor-interval-30s)
start interval=0s timeout=20s (NFS-IP_SHARE-start-interval-0s)
stop interval=0s timeout=20s (NFS-IP_SHARE-stop-interval-0s)
Resource: NFS-NOTIFY_SHARE (class=ocf provider=heartbeat type=nfsnotify)
Attributes: source_host=SHARE.local
Operations: monitor interval=30s timeout=90s (NFS-NOTIFY_SHARE-monitor-interval-30s)
reload interval=0s timeout=90s (NFS-NOTIFY_SHARE-reload-interval-0s)
start interval=0s timeout=90s (NFS-NOTIFY_SHARE-start-interval-0s)
stop interval=0s timeout=90s (NFS-NOTIFY_SHARE-stop-interval-0s)
[root@node2 ~]#
Virtual IP successfully bound on node2
[root@node2 ~]# ip -4 addr show team31
6: team31: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default qlen 1000
inet 10.1.31.2/24 brd 10.1.31.255 scope global noprefixroute team31
valid_lft forever preferred_lft forever
inet 10.1.31.100/24 brd 10.1.31.255 scope global secondary team31
valid_lft forever preferred_lft forever
[root@node2 ~]#
TCP LISTEN binds
[root@node2 ~]# netstat -punta | grep LISTEN
tcp 0 0 0.0.0.0:2049 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.1:44321 0.0.0.0:* LISTEN 1803/pmcd
tcp 0 0 0.0.0.0:34661 0.0.0.0:* LISTEN 630273/rpc.statd
tcp 0 0 127.0.0.1:199 0.0.0.0:* LISTEN 1257/snmpd
tcp 0 0 127.0.0.1:4330 0.0.0.0:* LISTEN 2834/pmlogger
tcp 0 0 10.20.101.136:2379 0.0.0.0:* LISTEN 3285/etcd
tcp 0 0 10.20.101.136:2380 0.0.0.0:* LISTEN 3285/etcd
tcp 0 0 0.0.0.0:111 0.0.0.0:* LISTEN 1/systemd
tcp 0 0 0.0.0.0:20048 0.0.0.0:* LISTEN 630282/rpc.mountd
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 317707/nginx: maste
tcp 0 0 0.0.0.0:2224 0.0.0.0:* LISTEN 3725/platform-pytho
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1170/sshd
tcp 0 0 0.0.0.0:41017 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:443 0.0.0.0:* LISTEN 317707/nginx: maste
tcp 0 0 0.0.0.0:35261 0.0.0.0:* LISTEN -
tcp6 0 0 :::2049 :::* LISTEN -
tcp6 0 0 ::1:44321 :::* LISTEN 1803/pmcd
tcp6 0 0 ::1:4330 :::* LISTEN 2834/pmlogger
tcp6 0 0 :::111 :::* LISTEN 1/systemd
tcp6 0 0 :::20048 :::* LISTEN 630282/rpc.mountd
tcp6 0 0 :::2224 :::* LISTEN 3725/platform-pytho
tcp6 0 0 :::37329 :::* LISTEN -
tcp6 0 0 :::22 :::* LISTEN 1170/sshd
tcp6 0 0 :::41179 :::* LISTEN 630273/rpc.statd
tcp6 0 0 :::43487 :::* LISTEN -
[root@node2 ~]#
PCS cluster resources (cib.xml format, in case you need a deep dive)
<resources>
<group id="Group_SHARE">
<primitive class="ocf" id="ROOT-FS_SHARE" provider="heartbeat" type="Filesystem">
<instance_attributes id="ROOT-FS_SHARE-instance_attributes">
<nvpair id="ROOT-FS_SHARE-instance_attributes-device" name="device" value="/dev/disk/by-id/wwn-0x6001405ce6b7033688d497a91aa23547"/>
<nvpair id="ROOT-FS_SHARE-instance_attributes-directory" name="directory" value="/srv/block/SHARE"/>
<nvpair id="ROOT-FS_SHARE-instance_attributes-fstype" name="fstype" value="xfs"/>
</instance_attributes>
<operations>
<op id="ROOT-FS_SHARE-monitor-interval-20s" interval="20s" name="monitor" timeout="40s"/>
<op id="ROOT-FS_SHARE-start-interval-0s" interval="0s" name="start" timeout="60s"/>
<op id="ROOT-FS_SHARE-stop-interval-0s" interval="0s" name="stop" timeout="60s"/>
</operations>
</primitive>
<primitive class="ocf" id="NFSD_SHARE" provider="heartbeat" type="nfsserver">
<instance_attributes id="NFSD_SHARE-instance_attributes">
<nvpair id="NFSD_SHARE-instance_attributes-nfs_ip" name="nfs_ip" value="10.1.31.100"/>
<nvpair id="NFSD_SHARE-instance_attributes-nfs_no_notify" name="nfs_no_notify" value="true"/>
<nvpair id="NFSD_SHARE-instance_attributes-nfs_shared_infodir" name="nfs_shared_infodir" value="/srv/block/SHARE/nfsinfo/"/>
</instance_attributes>
<operations>
<op id="NFSD_SHARE-monitor-interval-10s" interval="10s" name="monitor" timeout="20s"/>
<op id="NFSD_SHARE-start-interval-0s" interval="0s" name="start" timeout="40s"/>
<op id="NFSD_SHARE-stop-interval-0s" interval="0s" name="stop" timeout="20s"/>
</operations>
</primitive>
<primitive class="ocf" id="NFS_SHARE" provider="heartbeat" type="exportfs">
<instance_attributes id="NFS_SHARE-instance_attributes">
<nvpair id="NFS_SHARE-instance_attributes-clientspec" name="clientspec" value="10.1.31.0/255.255.255.0"/>
<nvpair id="NFS_SHARE-instance_attributes-directory" name="directory" value="/srv/block/SHARE/SHARE"/>
<nvpair id="NFS_SHARE-instance_attributes-fsid" name="fsid" value="0"/>
<nvpair id="NFS_SHARE-instance_attributes-options" name="options" value="rw,sync,no_root_squash"/>
</instance_attributes>
<operations>
<op id="NFS_SHARE-monitor-interval-10s" interval="10s" name="monitor" timeout="20s"/>
<op id="NFS_SHARE-start-interval-0s" interval="0s" name="start" timeout="40s"/>
<op id="NFS_SHARE-stop-interval-0s" interval="0s" name="stop" timeout="120s"/>
</operations>
</primitive>
<primitive class="ocf" id="NFS-IP_SHARE" provider="heartbeat" type="IPaddr2">
<instance_attributes id="NFS-IP_SHARE-instance_attributes">
<nvpair id="NFS-IP_SHARE-instance_attributes-cidr_netmask" name="cidr_netmask" value="24"/>
<nvpair id="NFS-IP_SHARE-instance_attributes-ip" name="ip" value="10.1.31.100"/>
<nvpair id="NFS-IP_SHARE-instance_attributes-nic" name="nic" value="team31"/>
</instance_attributes>
<operations>
<op id="NFS-IP_SHARE-monitor-interval-30s" interval="30s" name="monitor"/>
<op id="NFS-IP_SHARE-start-interval-0s" interval="0s" name="start" timeout="20s"/>
<op id="NFS-IP_SHARE-stop-interval-0s" interval="0s" name="stop" timeout="20s"/>
</operations>
</primitive>
<primitive class="ocf" id="NFS-NOTIFY_SHARE" provider="heartbeat" type="nfsnotify">
<instance_attributes id="NFS-NOTIFY_SHARE-instance_attributes">
<nvpair id="NFS-NOTIFY_SHARE-instance_attributes-source_host" name="source_host" value="SHARE.local"/>
</instance_attributes>
<operations>
<op id="NFS-NOTIFY_SHARE-monitor-interval-30s" interval="30s" name="monitor" timeout="90s"/>
<op id="NFS-NOTIFY_SHARE-reload-interval-0s" interval="0s" name="reload" timeout="90s"/>
<op id="NFS-NOTIFY_SHARE-start-interval-0s" interval="0s" name="start" timeout="90s"/>
<op id="NFS-NOTIFY_SHARE-stop-interval-0s" interval="0s" name="stop" timeout="90s"/>
</operations>
</primitive>
</group>
</resources>
EDIT-1
It looks like the OpenClusterFramework nfsserver resource agent does not use the nfs_ip field for "rpc.nfsd -H $nfs_ip" at all. On Rocky Linux 8.5 this resource also does not let us override the default behaviour of accepting every NFS version. Rocky Linux 8.5 ships the following package: resource-agents-4.1.1-98.el8_5.2.x86_64.
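For anyone who wants to check this on their own install, the agent script can be inspected directly at the standard OCF path (just a quick grep, not output from my nodes):

grep -n 'nfs_ip' /usr/lib/ocf/resource.d/heartbeat/nfsserver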
I will try to work around my problem by defining custom systemd-based pcs resources for [email protected].
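Roughly, the idea is to wrap the per-share daemon in a templated systemd unit and have Pacemaker manage it as a systemd-class resource (sketch only; "nfsd-share@" and SHARE2 are placeholder names, and the unit template itself still has to be written):

# Sketch: manage a placeholder templated unit nfsd-share@<share>.service from the cluster
pcs resource create NFSD_SHARE2-svc systemd:nfsd-share@SHARE2 --group Group_SHARE2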