Friday, August 16, 2013

Thinking rationally and requesting pardon


This Friday at Jumma prayer, it happened – I concentrated on the "dua" that Imam sb was making to Allah. It was all about the sins we may have committed, because of which we are flooded every year. I really didn't find any objection within myself to this philosophy, but I thought: our sins might not be the only reason for these earthly disasters. On a closer look, the EU and the US have these problems too. For instance, I thought of Japan and its recent disaster.
I think it is our negligence and our lack of interest in fixing the issue that is at play. On a stronger note, it is our struggle not to change, and to resist every factor that tends to change us. Even this attitude carries Godly hints that encourage us to look at ourselves and make us understand the need to adapt.
The birth of every single child is a clear sign that Allah has accounted for the recent technological change and social evolution we are passing through. Children were not so "fast and furious" even a decade back. I am 36, and I see that people above 50 are quite afraid of the latest technology and gadgets, even when they want to learn them.

May Allah bless us with the right words while praying!

Configure CFS cluster and CFS mount point


Summary:

This lab was a hands-on exercise in CFS (Veritas Cluster File System) implementation and configuration. A CFS cluster will be configured and one shared mount point will be added.

Prerequisites:

Before you start configuring the Veritas CFS cluster and adding a new CFS mount point for active-active disk usage, make sure you have (a quick pre-flight sketch follows this list):
  1. SF installed with the right set of license keys
  2. VCS configured on all participating servers (in my case I have two servers, guest1 and guest3, with RHEL 5)
  3. Cluster status checked with "/opt/VRTSvcs/bin/hastatus -summary"
  4. Your virtual IP accessible
  5. Disks presented on the cluster participants/nodes. Storage is shared; just for reference, I used iSCSI storage.
  6. A note of the disks you plan to configure in the CFS cluster.
  7. A note of the mount point and volume names.

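For convenience, here are those checks collapsed into a shell sketch. This is a minimal, hypothetical helper, not a supported tool: the hostnames (guest1/guest3) and command paths are the ones used in this lab, and the VIP value is a placeholder you must replace with your own cluster virtual IP.

  #!/bin/sh
  # Pre-flight sketch for this lab (adjust names/paths for your environment).

  VIP=192.0.2.10   # placeholder -- set to your cluster virtual IP

  # Checks 2 & 3: VCS is configured and the cluster nodes are online
  /opt/VRTSvcs/bin/hastatus -summary || { echo "VCS not healthy"; exit 1; }

  # Check 4: the virtual IP answers
  ping -c 2 "$VIP" >/dev/null 2>&1 || echo "WARNING: VIP $VIP not reachable"

  # Checks 5 & 6: shared disks are presented -- note the ones destined for CFS
  vxdisk list

  # Check 1: verify license keys with vxlicrep (installed with SF; its path
  # varies by SF version, so it is left as a manual step here)
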
Cluster Configuration

Following are the steps to configure the CFS cluster on both lab servers (guest1 and guest3). A consolidated sketch of the disk/volume/mount-point flow follows the walkthrough.
  1. Run "/opt/VRTSvxfs/cfs/bin/cfscluster status" to check that CFS is not already configured
    1. "[root@guest3 bin]# /opt/VRTSvxfs/cfs/bin/cfscluster status
    NODE CLUSTER MANAGER STATE CVM STATE
    guest1 running         not-running
    guest3 running         not-running

     
    Error: V-35-41: Cluster not configured for data sharing application
  2. Check cluster status by running "/opt/VRTSvcs/bin/hastatus -summary" and make sure cluster nodes are online.
    1. /opt/VRTSvcs/bin/hastatus -summary

      -- SYSTEM STATE
      -- System               State                Frozen

      A  guest1               RUNNING              0
      A  guest3               RUNNING              0

      -- GROUP STATE
      -- Group           System               Probed     AutoDisabled    State

      B  ClusterService  guest1               Y          N               ONLINE
      B  ClusterService  guest3               Y          N               OFFLINE

  3. Run "/opt/VRTSvxfs/cfs/bin/cfscluster config" to configure the CFS cluster services (the clustered Veritas Volume Manager, CVM)
    1. Let the configuration complete
  4. Check the CFS cluster services status with "/opt/VRTSvxfs/cfs/bin/cfscluster status"

    [root@guest3 bin]# /opt/VRTSvxfs/cfs/bin/cfscluster status

    Node                : guest1
    Cluster Manager     : running
    CVM state           : running
    No mount point registered with cluster configuration

    Node                : guest3
    Cluster Manager     : running
    CVM state           : running
    No mount point registered with cluster configuration
  5. Check which node is the master with "vxdctl -c mode"
    [root@guest3 bin]# vxdctl -c mode
    mode: enabled: cluster active - SLAVE
    master: guest1

    In my case I am shifting my console to guest1 and will be adding the new disks and mount points from there.
    [root@guest1 bin]# vxdctl -c mode
    mode: enabled: cluster active - MASTER
    master: guest1
  6. Check the available disks with "vxdisk list"
  7. Initialize the disk with "/opt/VRTS/bin/vxdisksetup -i disk_2" # in my case it is disk_2 that gets initialized; the device name can differ from system to system depending on the SAN
  8. Verify that the disk is initialized with "vxdisk list" – the expected outcome is "disk_2 auto:cdsdisk - - online"
  9. Add a new disk group with "vxdg -s init cfsdg disk01=disk_2". Here cfsdg is the name of the new disk group, with disk_2 added as disk01; -s makes it a shared DG.
  10. Verify the disk group with "vxdg list cfsdg"
  11. Verify the same from the "vxprint -hrt" output.
    [root@guest1 bin]# vxprint -hrt

    Disk group: cfsdg

    DG NAME NCONFIG NLOG MINORS GROUP-ID
    ST NAME STATE DM_CNT SPARE_CNT APPVOL_CNT
    DM NAME DEVICE TYPE PRIVLEN PUBLEN STATE
    RV NAME RLINK_CNT KSTATE STATE PRIMARY DATAVOLS SRL
    RL NAME RVG KSTATE STATE REM_HOST REM_DG REM_RLNK
    CO NAME CACHEVOL KSTATE STATE
    VT NAME RVG KSTATE STATE NVOLUME
    V NAME RVG/VSET/CO KSTATE STATE LENGTH READPOL PREFPLEX UTYPE
    PL NAME VOLUME KSTATE STATE LENGTH LAYOUT NCOL/WID MODE
    SD NAME PLEX DISK DISKOFFS LENGTH [COL/]OFF DEVICE MODE
    SV NAME PLEX VOLNAME NVOLLAYR LENGTH [COL/]OFF AM/NM MODE
    SC NAME PLEX CACHE DISKOFFS LENGTH [COL/]OFF DEVICE MODE
    DC NAME PARENTVOL LOGVOL
    SP NAME SNAPVOL DCO
    EX NAME ASSOC VC PERMS MODE STATE
    SR NAME KSTATE

    dg cfsdg default default 9000 1376650536.16.guest1

    dm disk01 disk_2 auto 65536 2017024 -

    Note:
    Veritas Volume Manager reports available disk space in 512-byte blocks. Here we have 2017024 blocks, which is 2017024 × 512 = 1,032,716,288 bytes – about 985 MB of space. # Hope the math is easy :)
  12. Create the volume using "vxassist -g cfsdg make cfsvol 984M" and verify volume creation from "vxprint -hrt". Check for the line
    "sd disk01-01 cfsvol-01 disk01 0 2015232 0 disk_2 ENA"
  13. Create a file system on the new volume using
    "mkfs -t vxfs /dev/vx/rdsk/cfsdg/cfsvol" – make sure you use the right FStype switch; it varies across UNIX platforms
  14. Make the new shared DG part of CFS using
    "/opt/VRTS/bin/cfsdgadm add cfsdg all=sw"
    Disk Group is being added to cluster configuration...
  15. Verify the above with
    [root@guest1 bin]# /opt/VRTS/bin/cfsdgadm display cfsdg

    NODE NAME   ACTIVATION MODE
    guest1      sw
    guest3      sw
  16. Next, add this volume/file system to the cluster configuration so it can be mounted on any or all nodes. The mount point will be created automatically.
    [root@guest1 bin]# /opt/VRTS/bin/cfsmntadm add cfsdg cfsvol /cfs all=cluster

    Mount Point is being added...
    /cfs added to the cluster-configuration
  17. Verify the newly added mount point with
    [root@guest1 bin]# /opt/VRTS/bin/cfscluster status

    Node                : guest1
    Cluster Manager     : running
    CVM state           : running
    MOUNT POINT   SHARED VOLUME   DISK GROUP   STATUS
    /cfs          cfsvol          cfsdg        NOT MOUNTED

    Node                : guest3
    Cluster Manager     : running
    CVM state           : running
    MOUNT POINT   SHARED VOLUME   DISK GROUP   STATUS
    /cfs          cfsvol          cfsdg        NOT MOUNTED
  18. Mount the newly added mount point on both cluster nodes using
    [root@guest1 bin]# /opt/VRTS/bin/cfsmount all

    Mounting...
    [/dev/vx/dsk/cfsdg/cfsvol] mounted successfully at /cfs on guest1
    Mounting...
    [/dev/vx/dsk/cfsdg/cfsvol] mounted successfully at /cfs on guest3
  19. Check cfscluster status
    [root@guest1 bin]# /opt/VRTS/bin/cfscluster status

    Node                : guest1
    Cluster Manager     : running
    CVM state           : running
    MOUNT POINT   SHARED VOLUME   DISK GROUP   STATUS
    /cfs          cfsvol          cfsdg        MOUNTED

    Node                : guest3
    Cluster Manager     : running
    CVM state           : running
    MOUNT POINT   SHARED VOLUME   DISK GROUP   STATUS
    /cfs          cfsvol          cfsdg        MOUNTED
  20. Verify the local mounts with
    # df -h
    /dev/vx/dsk/cfsdg/cfsvol
                  984M   36M  890M   4% /cfs

    Congratulations! The CFS cluster and CFS file system are configured and ready for active-active usage (on both nodes in my case).
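For reference, here are steps 7 through 20 collapsed into a single script. This is a minimal sketch under this lab's assumptions: disk_2 as the shared disk, the names cfsdg/cfsvol//cfs from above, and execution on the CVM master (guest1). The disk name will almost certainly differ on your SAN.

  #!/bin/sh
  # Sketch of the disk/volume/mount-point flow from this lab. Run on the CVM
  # master. disk_2 is this lab's device name; yours will likely differ.

  DISK=disk_2
  DG=cfsdg
  VOL=cfsvol
  MNT=/cfs

  # Initialize the disk and build a shared disk group on it (steps 7-9)
  /opt/VRTS/bin/vxdisksetup -i "$DISK"
  vxdg -s init "$DG" disk01="$DISK"

  # Create the volume and put a VxFS file system on it (steps 12-13);
  # the FStype switch of mkfs varies across UNIX platforms
  vxassist -g "$DG" make "$VOL" 984M
  mkfs -t vxfs /dev/vx/rdsk/"$DG"/"$VOL"

  # Register the DG and the mount point with the CFS cluster, then mount
  # on all nodes (steps 14, 16, 18)
  /opt/VRTS/bin/cfsdgadm add "$DG" all=sw
  /opt/VRTS/bin/cfsmntadm add "$DG" "$VOL" "$MNT" all=cluster
  /opt/VRTS/bin/cfsmount all

  # Verify (steps 19-20)
  /opt/VRTS/bin/cfscluster status
  df -h "$MNT"
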

Un-configure CFS cluster and mount point

Summary

I have two hosts:
  1. guest1
  2. guest3
configured in VCS and CFS. There is one mount point, /cfs, configured in the CFS cluster "cfs_cluster". My ./cfscluster status output goes here:
[root@guest1 ~]# /opt/VRTS/bin/cfscluster status
  Node                : guest1
  Cluster Manager     : running
  CVM state           : running
  MOUNT POINT   SHARED VOLUME   DISK GROUP   STATUS
  /cfs          cfsvol          cfsdg        MOUNTED

  Node                : guest3
  Cluster Manager     : running
  CVM state           : running
  MOUNT POINT   SHARED VOLUME   DISK GROUP   STATUS
  /cfs          cfsvol          cfsdg        MOUNTED

Steps

Following are the tear-down steps; a consolidated sketch of the whole tear-down follows step 10.

1-     Find the master node in the cluster – run the following command
a.     vxdctl -c mode
2-     Check whether the CFS mount point is visible among the currently mounted volumes
a.     # df -h
3-     Check cluster status by running
a.     /opt/VRTSvcs/bin/hastatus -sum

-- SYSTEM STATE
-- System               State                Frozen             

A  guest1               RUNNING              0                   
A  guest3               RUNNING              0                   

-- GROUP STATE
-- Group           System               Probed     AutoDisabled    State         

B  ClusterService  guest1               Y          N               ONLINE        
B  ClusterService  guest3               Y          N               OFFLINE       
4-     Check CFS cluster status.
a.     /opt/VRTS/bin/cfscluster status # output is already shared in the summary
5-     Check the current disk and DG configuration by running
a.     # vxdisk list
b.     # vxprint -hrt
6-     Unmount the CFS volume
a.     /opt/VRTSvxfs/cfs/bin/cfsumount /cfs
Unmounting...
  /cfs got successfully unmounted from guest1
  /cfs got successfully unmounted from guest3       
7-     Delete the mount point from the CFS cluster configuration
a.     /opt/VRTS/bin/cfsmntadm delete /cfs
   Mount Point is being removed...
  /cfs deleted successfully from cluster-configuration
8-     Un-configure the CFS cluster
a.     ./cfscluster unconfig
  Unconfiguring cluster setup between nodes: [guest1guest3]
VCS WARNING V-16-1-50035 No Group dependencies are configured
  Vxfsckd removed from cluster configuration
  Cluster Mount resource definition removed from cluster configuration
  Vxfsckd resource definition removed from cluster configuration

 
Deleting the cvm configuration.

  CVM configuration deleted.
  cfscluster: Cluster unconfigured successfully
9-     Check CFS cluster status
a.     [root@guest1 bin]# ./cfscluster status
  NODE     CLUSTER MANAGER STATE    CVM STATE
  guest1   running                  not-running
  guest3   running                  not-running

  Error: V-35-41: Cluster not configured for data sharing application
10-   Check VCS status as:
a.     /opt/VRTSvcs/bin/hastatus -sum

-- SYSTEM STATE
-- System               State                Frozen             
-------------              -------------        -----------
A  guest1               RUNNING              0                   
A  guest3               RUNNING              0                   

-- GROUP STATE
-- Group           System               Probed     AutoDisabled    State         

B  ClusterService  guest1               Y          N               ONLINE        
B  ClusterService  guest3               Y          N               OFFLINE
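
And here is the tear-down collapsed into one sketch, a minimal script using the same names and paths as the steps above (I ran the individual commands from the master, guest1):

  #!/bin/sh
  # Tear-down sketch for this lab: unmount /cfs everywhere, remove it from
  # the cluster configuration, then unconfigure CFS/CVM.

  MNT=/cfs

  # Step 6: unmount the CFS file system on all nodes
  /opt/VRTSvxfs/cfs/bin/cfsumount "$MNT"

  # Step 7: delete the mount point from the cluster configuration
  /opt/VRTS/bin/cfsmntadm delete "$MNT"

  # Step 8: unconfigure the CFS cluster (CVM)
  /opt/VRTSvxfs/cfs/bin/cfscluster unconfig

  # Steps 9-10: CVM should now be not-running while VCS itself stays up
  /opt/VRTSvxfs/cfs/bin/cfscluster status
  /opt/VRTSvcs/bin/hastatus -sum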