Run "/opt/VRTSvxfs/cfs/bin/cfscluster config" to configure the CFS cluster services (CVM, the Veritas Cluster Volume Manager)
- Let the configuration complete
Check the CFS cluster services status with "/opt/VRTSvxfs/cfs/bin/cfscluster status"
[root@guest3 bin]# /opt/VRTSvxfs/cfs/bin/cfscluster status
Node : guest1
Cluster Manager : running
CVM state : running
No mount point registered with cluster configuration
Node : guest3
Cluster Manager : running
CVM state : running
No mount point registered with cluster configuration
Check which node is the master with "vxdctl -c mode"
[root@guest3 bin]# vxdctl -c mode
mode: enabled: cluster active - SLAVE
master: guest1
In my case I am shifting my console to guest1 and will be adding new disks and mount points from there.
[root@guest1 bin]# vxdctl -c mode
mode: enabled: cluster active - MASTER
master: guest1
- Check the available disks with "vxdisk list"
- Initialize the disk with "/opt/VRTS/bin/vxdisksetup -i disk_2" # in my case it is disk_2 that gets initialized; the device name may differ from system to system depending on the SAN.
- Verify that the disk is initialized with "vxdisk list" - here is the expected outcome: "disk_2 auto:cdsdisk - - online"
- Add a new disk group with "vxdg -s init cfsdg disk01=disk_2". Here cfsdg is the name of the new disk group, with disk_2 added as disk01, and -s makes it a shared DG.
- Verify the disk group with "vxdg list cfsdg"
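The disk preparation steps above can be collected into one short script. This is a sketch only: the device name disk_2 and the disk group name cfsdg come from this example, and the commands must run on the CVM master node.

```shell
# Run on the CVM master node only.
DISK=disk_2     # your SAN device name may differ -- check "vxdisk list"
DG=cfsdg

/opt/VRTS/bin/vxdisksetup -i "$DISK"    # initialize the disk for VxVM
vxdg -s init "$DG" disk01="$DISK"       # -s creates a shared disk group
vxdg list "$DG"                         # confirm the DG exists and is shared
```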
Verify the same from the "vxprint -hrt" output.
[root@guest1 bin]# vxprint -hrt
Disk group: cfsdg
DG NAME NCONFIG NLOG MINORS GROUP-ID
ST NAME STATE DM_CNT SPARE_CNT APPVOL_CNT
DM NAME DEVICE TYPE PRIVLEN PUBLEN STATE
RV NAME RLINK_CNT KSTATE STATE PRIMARY DATAVOLS SRL
RL NAME RVG KSTATE STATE REM_HOST REM_DG REM_RLNK
CO NAME CACHEVOL KSTATE STATE
VT NAME RVG KSTATE STATE NVOLUME
V NAME RVG/VSET/CO KSTATE STATE LENGTH READPOL PREFPLEX UTYPE
PL NAME VOLUME KSTATE STATE LENGTH LAYOUT NCOL/WID MODE
SD NAME PLEX DISK DISKOFFS LENGTH [COL/]OFF DEVICE MODE
SV NAME PLEX VOLNAME NVOLLAYR LENGTH [COL/]OFF AM/NM MODE
SC NAME PLEX CACHE DISKOFFS LENGTH [COL/]OFF DEVICE MODE
DC NAME PARENTVOL LOGVOL
SP NAME SNAPVOL DCO
EX NAME ASSOC VC PERMS MODE STATE
SR NAME KSTATE
dg cfsdg default default 9000 1376650536.16.guest1
dm disk01 disk_2 auto 65536 2017024 -
Note:
Veritas Volume Manager reports available disk space in 512-byte blocks. Here we have 2017024 blocks, which works out to about 985MB of space.
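The conversion is easy to sanity-check yourself; nothing VxVM-specific here, just blocks times 512 bytes converted to MB:

```shell
# 2017024 blocks x 512 bytes, converted to MB (1 MB = 1024*1024 bytes)
awk 'BEGIN { printf "%.0f MB\n", 2017024 * 512 / 1024 / 1024 }'
# prints "985 MB"
```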
Create the volume using "vxassist -g cfsdg make cfsvol 984M" and verify the creation with "vxprint -hrt". Check for the line
"sd disk01-01 cfsvol-01 disk01 0 2015232 0 disk_2 ENA"
Create file system on new volume using
"mkfs -t vxfs /dev/vx/rdsk/cfsdg/cfsvol". Make sure you use the right FStype switch; it varies across UNIX platforms.
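To the best of my knowledge the usual per-platform forms are the following; verify against your platform's mkfs_vxfs man page before running:

```shell
mkfs -t vxfs /dev/vx/rdsk/cfsdg/cfsvol   # Linux
mkfs -F vxfs /dev/vx/rdsk/cfsdg/cfsvol   # Solaris, HP-UX
mkfs -V vxfs /dev/vx/rdsk/cfsdg/cfsvol   # AIX
```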
Make the new shared DG part of CFS using
"/opt/VRTS/bin/cfsdgadm add cfsdg all=sw"
Disk Group is being added to cluster configuration...
Verify the above with
[root@guest1 bin]# /opt/VRTS/bin/cfsdgadm display cfsdg
NODE NAME ACTIVATION MODE
guest1 sw
guest3 sw
Next is to add this volume/filesystem to the cluster configuration so it can be mounted on any or all nodes. The mount point will be created automatically.
[root@guest1 bin]# /opt/VRTS/bin/cfsmntadm add cfsdg cfsvol /cfs all=cluster
Mount Point is being added...
/cfs added to the cluster-configuration
Verify the newly added mount point with
[root@guest1 bin]# /opt/VRTS/bin/cfscluster status
Node : guest1
Cluster Manager : running
CVM state : running
MOUNT POINT SHARED VOLUME DISK GROUP STATUS
/cfs cfsvol cfsdg NOT MOUNTED
Node : guest3
Cluster Manager : running
CVM state : running
MOUNT POINT SHARED VOLUME DISK GROUP STATUS
/cfs cfsvol cfsdg NOT MOUNTED
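If you have many mount points, you can scan the cfscluster status output for anything registered but not yet mounted. A small sketch, assuming the column layout shown above; not_mounted is just a helper name I made up:

```shell
# Print mount points whose STATUS column reads NOT MOUNTED.
# Usage: /opt/VRTS/bin/cfscluster status | not_mounted
not_mounted() {
    awk '/NOT MOUNTED/ { print $1 }' | sort -u
}
```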
Mount the newly added mount point on both cluster nodes using
[root@guest1 bin]# /opt/VRTS/bin/cfsmount all
Mounting...
[/dev/vx/dsk/cfsdg/cfsvol] mounted successfully at /cfs on guest1
Mounting...
[/dev/vx/dsk/cfsdg/cfsvol] mounted successfully at /cfs on guest3
Check cfscluster status
[root@guest1 bin]# /opt/VRTS/bin/cfscluster status
Node : guest1
Cluster Manager : running
CVM state : running
MOUNT POINT SHARED VOLUME DISK GROUP STATUS
/cfs cfsvol cfsdg MOUNTED
Node : guest3
Cluster Manager : running
CVM state : running
MOUNT POINT SHARED VOLUME DISK GROUP STATUS
/cfs cfsvol cfsdg MOUNTED
Verify local mounts as
# df -h
/dev/vx/dsk/cfsdg/cfsvol
984M 36M 890M 4% /cfs
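As a final Active-Active smoke test, you can write a file from one node and read it from the other. A sketch using the hostnames from this example; it assumes passwordless ssh between the nodes:

```shell
# From guest1: write a file into the shared file system...
echo "hello from guest1" > /cfs/aa-test
# ...then read it back from guest3 and clean up
ssh guest3 cat /cfs/aa-test
rm /cfs/aa-test
```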
Congratulations! The CFS cluster and CFS file system are configured and ready for Active-Active usage (on both nodes in my case).