Setting up server-based I/O fencing using installsfcfsha

You can configure server-based I/O fencing for the Storage Foundation Cluster File System High Availability cluster using the installsfcfsha installer.

With server-based fencing, the coordination points in your configuration can be a combination of CP servers and SCSI-3 PR compliant coordinator disks, or CP servers only.

See About planning to configure I/O fencing.

See Recommended CP server configurations.

This section covers the following example procedures:

• Mix of CP servers and coordinator disks

  See “To configure server-based fencing for the Storage Foundation Cluster File System High Availability cluster (one CP server and two coordinator disks)”.

• Single CP server

  See “To configure server-based fencing for the Storage Foundation Cluster File System High Availability cluster (single CP server)”.

To configure server-based fencing for the Storage Foundation Cluster File System High Availability cluster (one CP server and two coordinator disks)

  1. Depending on the server-based configuration model in your setup, make sure of the following:

    • CP servers are configured and are reachable from the Storage Foundation Cluster File System High Availability cluster. The Storage Foundation Cluster File System High Availability cluster is also referred to as the application cluster or the client cluster.

      See Setting up the CP server.

    • The coordinator disks are verified for SCSI-3 PR compliance. (An example check is shown after this list.)

      See Checking shared disks for I/O fencing.
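
    For example, you can confirm both prerequisites from one of the application cluster nodes before you start the installer. The following commands are an illustration only; they assume the CP server virtual IP that is used later in this procedure (10.209.80.197) and default installation paths for the VRTScps and VRTSvxfen packages:
    # /opt/VRTScps/bin/cpsadm -s 10.209.80.197 -a ping_cps
    # /opt/VRTSvcs/vxfen/bin/vxfentsthdw

    The cpsadm command reports whether the node can reach the CP server. The vxfentsthdw utility tests disks for SCSI-3 PR compliance; it overwrites data on the disks it tests, so run it only against unused LUNs.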

  2. Start the installsfcfsha with the -fencing option.
    # /opt/VRTS/install/installsfcfsha -fencing

    The installsfcfsha starts with a copyright message and verifies the cluster information.

    Note the location of the log files, which you can access if any problem occurs during the configuration process.

  3. Confirm that you want to proceed with the I/O fencing configuration at the prompt.

    The program checks that the local node running the script can communicate with remote nodes and checks whether Storage Foundation Cluster File System High Availability 6.0 is configured properly.

  4. Review the I/O fencing configuration options that the program presents. Type 1 to configure server-based I/O fencing.
    Select the fencing mechanism to be configured in this
    Application Cluster [1-4,b,q] 1
  5. Make sure that the storage supports SCSI-3 PR, and answer y at the following prompt.
    Does your storage environment support SCSI3 PR? [y,n,q] (y)
  6. Provide the following details about the coordination points at the installer prompt:

    • Enter the total number of coordination points including both servers and disks. This number should be at least 3.

      Enter the total number of co-ordination points including both 
      Coordination Point servers and disks: [b] (3)
    • Enter the total number of coordinator disks among the coordination points.

      Enter the total number of disks among these: 
      [b] (0) 2
  7. Provide the following CP server details at the installer prompt:

    • Enter the total number of virtual IP addresses or the total number of fully qualified host names for each of the CP servers.

      Enter the total number of Virtual IP addresses or fully 
      qualified host name for the 
      Coordination Point Server #1: [b,q,?] (1) 2
    • Enter the virtual IP addresses or the fully qualified host name for each of the CP servers. The installer assumes these values to be identical as viewed from all the application cluster nodes.

      Enter the Virtual IP address or fully qualified host name
      #1 for the Coordination Point Server #1: 
      [b] 10.209.80.197

      The installer prompts for this information for the number of virtual IP addresses you want to configure for each CP server.

    • Enter the port that the CP server would be listening on.

      Enter the port in the range [49152, 65535] which the 
      Coordination Point Server 10.209.80.197 
      would be listening on or simply accept the default port suggested: 
      [b] (14250)
  8. Provide the following coordinator disks-related details at the installer prompt:

    • Enter the I/O fencing disk policy for the coordinator disks.

      Enter disk policy for the disk(s) (raw/dmp): 
      [b,q,?] raw
    • Choose the coordinator disks from the list of available disks that the installer displays. Ensure that the disks you choose are available from all the Storage Foundation Cluster File System High Availability (application cluster) nodes. (An optional visibility check is shown after this step.)

      The number of times that the installer asks you to choose the disks depends on the information that you provided in step 6. For example, if you had chosen to configure two coordinator disks, the installer asks you to choose the first disk and then the second disk:

      Select disk number 1 for co-ordination point
      
      1) rhdisk75
      2) rhdisk76
      3) rhdisk77
      
      Please enter a valid disk which is available from all the 
      cluster nodes for co-ordination point [1-3,q] 1
    • If you have not already checked the disks for SCSI-3 PR compliance in step 1, check the disks now.

      The installer displays a message that recommends that you verify the disks in another window and then return to this configuration procedure.

      Press Enter to continue, and confirm your disk selection at the installer prompt.

    • Enter a disk group name for the coordinator disks or accept the default.

      Enter the disk group name for coordinating disk(s): 
      [b] (vxfencoorddg) 
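
    If you want to confirm that the candidate coordinator disks are visible from every node before you make your selection, one optional check is to run the following Veritas Volume Manager command on each application cluster node and compare the output:
    # vxdisk -o alldgs list

    Each coordinator disk should appear, with the same attributes, in the output on every node.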
  9. Verify and confirm the coordination points information for the fencing configuration.

    For example:

    Total number of coordination points being used: 3
    Coordination Point Server ([VIP or FQHN]:Port): 
        1. 10.209.80.197 ([10.209.80.197]:14250)
    SCSI-3 disks:
        1. rhdisk75
        2. rhdisk76
    Disk Group name for the disks in customized fencing: vxfencoorddg
    Disk policy used for customized fencing: raw

    The installer initializes the disks and the disk group and deports the disk group on the Storage Foundation Cluster File System High Availability (application cluster) node.

  10. If the CP server is configured for security, the installer sets up secure communication between the CP server and the Storage Foundation Cluster File System High Availability (application cluster).

    After the installer establishes trust between the authentication brokers of the CP servers and the application cluster nodes, press Enter to continue.

  11. Verify and confirm the I/O fencing configuration information.
    CPS Admin utility location: /opt/VRTScps/bin/cpsadm     
    Cluster ID: 2122
    Cluster Name: clus1
    UUID for the above cluster: {ae5e589a-1dd1-11b2-dd44-00144f79240c}
  12. Review the output as the installer updates the application cluster information on each of the CP servers to ensure connectivity between them. The installer then populates the /etc/vxfenmode file with the appropriate details on each of the application cluster nodes. (A representative excerpt of this file is shown after the example output.)
    Updating client cluster information on Coordination Point Server 10.209.80.197
    
    Adding the client cluster to the Coordination Point Server 10.209.80.197 .......... Done
    
    Registering client node galaxy with Coordination Point Server 10.209.80.197...... Done
    Adding CPClient user for communicating to Coordination Point Server 10.209.80.197 .... Done
    Adding cluster clus1 to the CPClient user on Coordination Point Server 10.209.80.197 .. Done
    
    Registering client node nebula with Coordination Point Server 10.209.80.197 ..... Done
    Adding CPClient user for communicating to Coordination Point Server 10.209.80.197 .... Done
    Adding cluster clus1 to the CPClient user on Coordination Point Server 10.209.80.197 .. Done
    
    Updating /etc/vxfenmode file on galaxy .................................. Done
    Updating /etc/vxfenmode file on nebula .................................. Done

    See About I/O fencing configuration files.
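
    For reference, the file that the installer generates for the example configuration in this procedure typically contains entries of the following form; the exact contents depend on your configuration and product version:
    vxfen_mode=customized
    vxfen_mechanism=cps
    cps1=[10.209.80.197]:14250
    vxfendg=vxfencoorddg
    scsi3_disk_policy=raw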

  13. Review the output as the installer stops and restarts the VCS and the fencing processes on each application cluster node, and completes the I/O fencing configuration.
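
    Optionally, after the fencing processes restart, you can confirm from any application cluster node that fencing is running in customized mode:
    # vxfenadm -d

    The output shows the fencing mode and mechanism that are in use, as well as the current cluster membership.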
  14. Configure the CP agent on the Storage Foundation Cluster File System High Availability (application cluster). An optional verification is shown after the installer output.
    Do you want to configure Coordination Point Agent on 
    the client cluster? [y,n,q] (y) 
    
    Enter a non-existing name for the service group for 
    Coordination Point Agent: [b] (vxfen) 
        
    Adding Coordination Point Agent via galaxy .... Done
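
    Optionally, you can confirm that the Coordination Point agent service group is online from any application cluster node. The group name below assumes that you accepted the default name (vxfen):
    # hagrp -state vxfen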
  15. Note the location of the configuration log files, summary files, and response files that the installer displays for later use.

To configure server-based fencing for the Storage Foundation Cluster File System High Availability cluster (single CP server)

  1. Make sure that the CP server is configured and is reachable from the Storage Foundation Cluster File System High Availability cluster. The Storage Foundation Cluster File System High Availability cluster is also referred to as the application cluster or the client cluster.

    See Setting up the CP server.
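
    One way to confirm that the CP server is reachable is to run the cpsadm command from one of the application cluster nodes. The following command is an illustration only; it assumes the CP server virtual IP that is used later in this procedure (10.209.80.197) and the default installation path for the VRTScps package:
    # /opt/VRTScps/bin/cpsadm -s 10.209.80.197 -a ping_cps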

  2. Start the installsfcfsha with the -fencing option.
    # /opt/VRTS/install/installsfcfsha -fencing

    The installsfcfsha starts with a copyright message and verifies the cluster information.

    Note the location of the log files, which you can access if any problem occurs during the configuration process.

  3. Confirm that you want to proceed with the I/O fencing configuration at the prompt.

    The program checks that the local node running the script can communicate with remote nodes and checks whether Storage Foundation Cluster File System High Availability 6.0 is configured properly.

  4. Review the I/O fencing configuration options that the program presents. Type 1 to configure server-based I/O fencing.
    Select the fencing mechanism to be configured in this
    Application Cluster [1-4,b,q] 1
  5. Make sure that the storage supports SCSI-3 PR, and answer y at the following prompt.
    Does your storage environment support SCSI3 PR? [y,n,q] (y)
  6. Enter the total number of coordination points as 1.
    Enter the total number of co-ordination points including both 
    Coordination Point servers and disks: [b] (3) 1

    Read the installer warning carefully before you proceed with the configuration.

  7. Provide the following CP server details at the installer prompt:

    • Enter the total number of virtual IP addresses or the total number of fully qualified host names for each of the CP servers.

      Enter the total number of Virtual IP addresses or fully 
      qualified host name for the 
      Coordination Point Server #1: [b,q,?] (1) 2
    • Enter the virtual IP address or the fully qualified host name for the CP server. The installer assumes these values to be identical as viewed from all the application cluster nodes.

      Enter the Virtual IP address or fully qualified host name
      #1 for the Coordination Point Server #1: 
      [b] 10.209.80.197

      The installer prompts for this information for the number of virtual IP addresses you want to configure for each CP server.

    • Enter the port that the CP server would be listening on.

      Enter the port in the range [49152, 65535] which the 
      Coordination Point Server 10.209.80.197 
      would be listening on or simply accept the default 
      port suggested: [b] (14250)
  8. Verify and confirm the coordination points information for the fencing configuration.

    For example:

    Total number of coordination points being used: 1
    Coordination Point Server ([VIP or FQHN]:Port): 
        1. 10.209.80.197 ([10.209.80.197]:14250)
  9. If the CP server is configured for security, the installer sets up secure communication between the CP server and the Storage Foundation Cluster File System High Availability (application cluster).

    After the installer establishes trust between the authentication brokers of the CP servers and the application cluster nodes, press Enter to continue.

  10. Verify and confirm the I/O fencing configuration information.
    CPS Admin utility location: /opt/VRTScps/bin/cpsadm     
    Cluster ID: 2122
    Cluster Name: clus1
    UUID for the above cluster: {ae5e589a-1dd1-11b2-dd44-00144f79240c}
  11. Review the output as the installer updates the application cluster information on each of the CP servers to ensure connectivity between them. The installer then populates the /etc/vxfenmode file with the appropriate details on each of the application cluster nodes.

    The installer also populates the /etc/vxfenmode file with the entry single_cp=1 for this single CP server fencing configuration. (A representative excerpt of this file is shown after the example output.)

    Updating client cluster information on Coordination Point Server 10.209.80.197
    
    Adding the client cluster to the Coordination Point Server 10.209.80.197 .......... Done
    
    Registering client node galaxy with Coordination Point Server 10.209.80.197...... Done
    Adding CPClient user for communicating to Coordination Point Server 10.209.80.197 .... Done
    Adding cluster clus1 to the CPClient user on Coordination Point Server 10.209.80.197 .. Done
    
    Registering client node nebula with Coordination Point Server 10.209.80.197 ..... Done
    Adding CPClient user for communicating to Coordination Point Server 10.209.80.197 .... Done
    Adding cluster clus1 to the CPClient user on Coordination Point Server 10.209.80.197 .. Done
    
    Updating /etc/vxfenmode file on galaxy .................................. Done
    Updating /etc/vxfenmode file on nebula .................................. Done

    See About I/O fencing configuration files.
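
    For reference, the file that the installer generates for this single CP server configuration typically contains entries of the following form; the exact contents depend on your configuration and product version:
    vxfen_mode=customized
    vxfen_mechanism=cps
    cps1=[10.209.80.197]:14250
    single_cp=1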

  12. Review the output as the installer stops and restarts the VCS and the fencing processes on each application cluster node, and completes the I/O fencing configuration.
  13. Configure the CP agent on the Storage Foundation Cluster File System High Availability (application cluster).
    Do you want to configure Coordination Point Agent on the 
    client cluster? [y,n,q] (y) 
    
    Enter a non-existing name for the service group for 
    Coordination Point Agent: [b] (vxfen) 
        
    Adding Coordination Point Agent via galaxy ... Done
  14. Note the location of the configuration log files, summary files, and response files that the installer displays for later use.