Friday, August 16, 2013

Configure CFS cluster and CFS mount point


Summary:

This LAB is a hands-on for CFS (Cluster File System) implementation/configuration. A CFS cluster will be configured and one mount point will be added.

Prerequisites:

Before you start configuring the Veritas CFS cluster and adding a new CFS mount point for active-active disk usage, make sure (a quick check sketch follows this list):
  1. SF is installed with the right set of license keys
  2. VCS is configured on all participating servers (in my case I have two servers, guest1 and guest3, with RHEL 5)
  3. You have checked the cluster status with "/opt/VRTSvcs/bin/hastatus -summary"
  4. Your virtual IP is accessible
  5. Disks are presented on all cluster participants/nodes and the storage is shared. Just for reference, I used iSCSI storage.
  6. Take note of the disks that you plan to configure in the CFS cluster.
  7. Take note of the mount point and volume names.
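
A quick sanity check of most of these prerequisites can be run from any node; the following is a minimal sketch using standard SF/VCS utilities (paths may differ slightly in your installation):

  # Review the installed license keys (Storage Foundation / CFS features)
  /opt/VRTS/bin/vxlicrep

  # Check LLT/GAB membership - both nodes should appear
  /sbin/gabconfig -a
  /sbin/lltstat -nvv | head

  # Check VCS cluster status
  /opt/VRTSvcs/bin/hastatus -summary

  # Check that the shared disks are visible on this node
  vxdisk -o alldgs list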

Cluster Configuration

Following are the steps to configure the CFS cluster on both LAB servers (guest1 and guest3).
  1. Run "/opt/VRTSvxfs/cfs/bin/cfscluster status" to check that CFS is not already configured
    [root@guest3 bin]# /opt/VRTSvxfs/cfs/bin/cfscluster status
    NODE     CLUSTER MANAGER STATE   CVM STATE
    guest1   running                 not-running
    guest3   running                 not-running

    Error: V-35-41: Cluster not configured for data sharing application
  2. Check the cluster status by running "/opt/VRTSvcs/bin/hastatus -summary" and make sure the cluster nodes are online.
    [root@guest3 bin]# /opt/VRTSvcs/bin/hastatus -summary

    -- SYSTEM STATE
    -- System          State          Frozen

    A  guest1          RUNNING        0
    A  guest3          RUNNING        0

    -- GROUP STATE
    -- Group           System         Probed   AutoDisabled   State

    B  ClusterService  guest1         Y        N              ONLINE
    B  ClusterService  guest3         Y        N              OFFLINE

 

  3. Run "/opt/VRTSvxfs/cfs/bin/cfscluster config" to configure the CFS cluster services (Cluster Volume Manager, CVM)
    Let the configuration complete.
  4. Check the CFS cluster services status with "/opt/VRTSvxfs/cfs/bin/cfscluster status"

    [root@guest3 bin]# /opt/VRTSvxfs/cfs/bin/cfscluster status

    Node                : guest1
    Cluster Manager     : running
    CVM state           : running
    No mount point registered with cluster configuration

    Node                : guest3
    Cluster Manager     : running
    CVM state           : running
    No mount point registered with cluster configuration
  5. Check which node is the master with "vxdctl -c mode"
    [root@guest3 bin]# vxdctl -c mode
    mode: enabled: cluster active - SLAVE
    master: guest1

    In my case I am shifting my console to guest1 and will be adding the new disks and mount points from there (a master-check sketch follows this step).
    [root@guest1 bin]# vxdctl -c mode
    mode: enabled: cluster active - MASTER
    master: guest1
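
    Because the remaining disk and disk group operations are run from the CVM master, a small check like the one below (a sketch, assuming the node's hostname matches its cluster node name) confirms you are on the right node before proceeding:

    # Print the current CVM master and compare it with this node's hostname
    MASTER=$(vxdctl -c mode | awk '/^master:/ {print $2}')
    if [ "$MASTER" = "$(hostname)" ]; then
        echo "This node ($MASTER) is the CVM master - OK to proceed"
    else
        echo "CVM master is $MASTER - switch your console there"
    fi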
  6. Check the available disks with "vxdisk list"
  7. Initialize the disk with "/opt/VRTS/bin/vxdisksetup -i disk_2" # in my case it is disk_2 that I initialize; the disk name can differ from system to system depending on the SAN.
  8. Verify that the disk is initialized with "vxdisk list" - the expected outcome is "disk_2 auto:cdsdisk - - online"
  9. Add a new disk group with "vxdg -s init cfsdg disk01=disk_2". Here cfsdg is the name of the new disk group with disk_2 added as disk01, and -s makes it a shared DG.
  10. Verify the disk group with "vxdg list cfsdg"
  11. Verify the same from the "vxprint -hrt" output.
    [root@guest1 bin]# vxprint -hrt

    Disk group: cfsdg

     

    DG NAME NCONFIG NLOG MINORS GROUP-ID
    ST NAME STATE DM_CNT SPARE_CNT APPVOL_CNT
    DM NAME DEVICE TYPE PRIVLEN PUBLEN STATE
    RV NAME RLINK_CNT KSTATE STATE PRIMARY DATAVOLS SRL
    RL NAME RVG KSTATE STATE REM_HOST REM_DG REM_RLNK
    CO NAME CACHEVOL KSTATE STATE
    VT NAME RVG KSTATE STATE NVOLUME
    V NAME RVG/VSET/CO KSTATE STATE LENGTH READPOL PREFPLEX UTYPE
    PL NAME VOLUME KSTATE STATE LENGTH LAYOUT NCOL/WID MODE
    SD NAME PLEX DISK DISKOFFS LENGTH [COL/]OFF DEVICE MODE
    SV NAME PLEX VOLNAME NVOLLAYR LENGTH [COL/]OFF AM/NM MODE
    SC NAME PLEX CACHE DISKOFFS LENGTH [COL/]OFF DEVICE MODE
    DC NAME PARENTVOL LOGVOL
    SP NAME SNAPVOL DCO
    EX NAME ASSOC VC PERMS MODE STATE
    SR NAME KSTATE

     

    dg cfsdg default default 9000 1376650536.16.guest1

     

    dm disk01 disk_2 auto 65536 2017024 -

    Note:
    Veritas Volume Manager reports available disk space in 512-byte blocks. Here we have 2017024 blocks, which works out to about 985 MB of space (see the quick calculation below). # Hope the math is easy :)
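
    The conversion is plain shell arithmetic; integer division gives 984, i.e. roughly 985 MB:

    # 2017024 blocks x 512 bytes, converted to MB
    echo $(( 2017024 * 512 / 1024 / 1024 ))
    # 984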
  12. Create the volume using "vxassist -g cfsdg make cfsvol 984M" and verify the volume creation from "vxprint -hrt". Check for the line
    "sd disk01-01 cfsvol-01 disk01 0 2015232 0 disk_2 ENA"
  13. Create a file system on the new volume using
    "mkfs -t vxfs /dev/vx/rdsk/cfsdg/cfsvol" - make sure you use the right FS-type switch; it varies across UNIX platforms (see the example after this step)
  14. Make the new shared DG part of CFS using
    "/opt/VRTS/bin/cfsdgadm add cfsdg all=sw"  # all=sw sets the shared-write activation mode on all nodes
    Disk Group is being added to cluster configuration...
  15. Verify the above with
    [root@guest1 bin]# /opt/VRTS/bin/cfsdgadm display cfsdg

    NODE NAME    ACTIVATION MODE
    guest1       sw
    guest3       sw
  16. Next, add the volume/file system to the cluster configuration so it can be mounted on any or all nodes. The mount point will be created automatically (a sketch for inspecting the generated VCS configuration follows this step).
    [root@guest1 bin]# /opt/VRTS/bin/cfsmntadm add cfsdg cfsvol /cfs all=cluster

    Mount Point is being added...
    /cfs added to the cluster-configuration
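
    Behind the scenes cfsmntadm adds CFSMount and CVMVolDg resources to the VCS configuration; a quick way to inspect what was generated (resource and group names in your output will differ) is:

    # Show the generated entries in the VCS configuration file
    grep -A5 -i cfsmount /etc/VRTSvcs/conf/config/main.cf

    # List the CFS-related resources and group states
    /opt/VRTSvcs/bin/hares -list Type=CFSMount
    /opt/VRTSvcs/bin/hares -list Type=CVMVolDg
    /opt/VRTSvcs/bin/hagrp -state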
  17. Verify the newly added mount point with
    [root@guest1 bin]# /opt/VRTS/bin/cfscluster status

    Node                : guest1
    Cluster Manager     : running
    CVM state           : running
    MOUNT POINT    SHARED VOLUME    DISK GROUP    STATUS
    /cfs           cfsvol           cfsdg         NOT MOUNTED

    Node                : guest3
    Cluster Manager     : running
    CVM state           : running
    MOUNT POINT    SHARED VOLUME    DISK GROUP    STATUS
    /cfs           cfsvol           cfsdg         NOT MOUNTED
  18. Mount the newly added mount point on both cluster nodes using (a per-node mount example follows this step)
    [root@guest1 bin]# /opt/VRTS/bin/cfsmount all

    Mounting...
    [/dev/vx/dsk/cfsdg/cfsvol] mounted successfully at /cfs on guest1
    Mounting...
    [/dev/vx/dsk/cfsdg/cfsvol] mounted successfully at /cfs on guest3
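
    cfsmount can also target a subset of nodes instead of all of them; a sketch of that usage (mount point followed by node names) is:

    # Mount /cfs only on guest1
    /opt/VRTS/bin/cfsmount /cfs guest1

    # Unmount it from a single node if needed
    /opt/VRTS/bin/cfsumount /cfs guest3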
  19. Check the cfscluster status
    [root@guest1 bin]# /opt/VRTS/bin/cfscluster status

    Node                : guest1
    Cluster Manager     : running
    CVM state           : running
    MOUNT POINT    SHARED VOLUME    DISK GROUP    STATUS
    /cfs           cfsvol           cfsdg         MOUNTED

    Node                : guest3
    Cluster Manager     : running
    CVM state           : running
    MOUNT POINT    SHARED VOLUME    DISK GROUP    STATUS
    /cfs           cfsvol           cfsdg         MOUNTED
  20. Verify the local mounts on each node (an active-active read/write check follows below)
    # df -h
    /dev/vx/dsk/cfsdg/cfsvol
                          984M   36M  890M   4% /cfs
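
    To confirm the file system really behaves active-active, a quick read/write test across both nodes (the test file name is just an example) is:

    # On guest1: write a test file
    [root@guest1 ~]# echo "written from guest1" > /cfs/aa_test.txt

    # On guest3: the same file should be readable right away
    [root@guest3 ~]# cat /cfs/aa_test.txt
    written from guest1

    # Clean up from either node
    [root@guest3 ~]# rm /cfs/aa_test.txt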

     


     

    Congratulations! The CFS cluster and CFS file system are configured and ready for active-active usage (on both nodes in my case).
