This page lists publicly released patches for Symantec enterprise products.
For information on private patches, contact Symantec Technical Support.
For NetBackup Enterprise Server and NetBackup Server patches, see the NetBackup Downloads.
Patches for your product can have a variety of names. These names are based on product, component, or package names. For more information on patch naming conventions and the relationship between products, components, and packages, see the SORT online help.
sfw-win-CP16_SFW_51SP2
Obsolete
The latest patch(es): sfw-win-CP20_SFW_51SP2

 Basic information
Patch type: P-patch
Release date: 2012-11-07
Technote: TECH173500 - Storage Foundation for Windows High Availability, Storage Foundation for Windows and Veritas Cluster Server 5.1 Service Pack 2 Cumulative Patches
Documentation: None
Popularity: 879 viewed    322 downloaded
Download size: 231.11 MB
Checksum: 96419269

 Applies to one or more of the following products:
Storage Foundation 5.1SP2 On Windows 32-bit
Storage Foundation 5.1SP2 On Windows IA64
Storage Foundation 5.1SP2 On Windows x64

 Obsolete patches, incompatibilities, superseded patches, or other requirements:

This patch is obsolete. It is superseded by the following patches (release date):
sfw-win-CP20_SFW_51SP2 2014-01-28
sfw-win-CP19_SFW_51SP2A (obsolete) 2013-08-26
sfw-win-CP18_SFW_51SP2 (obsolete) 2013-05-27
sfw-win-CP17_SFW_51SP2 (obsolete) 2013-03-01

This patch supersedes the following patches (release date):
sfw-win-CP15_SFW_51SP2 (obsolete) 2012-10-29
sfw-CP14_SFW_51SP2 (obsolete) 2012-07-30

 Fixes the following incidents:
2087139, 2114928, 2203640, 2207263, 2210586, 2218963, 2225878, 2244093, 2245816, 2258124, 2267265, 2270478, 2290214, 2318276, 2321015, 2327428, 2329130, 2330902, 2364591, 2368399, 2371250, 2372049, 2372164, 2376010, 2397382, 2400260, 2406683, 2415517, 2426197, 2440099, 2477520, 2512482, 2530236, 2535885, 2536009, 2536342, 2554039, 2564914, 2570602, 2587638, 2604814, 2610786, 2614448, 2635097, 2643293, 2670150, 2676164, 2683797, 2711856, 2738430, 2740872, 2766206, 2834385, 2851054, 2860196, 2860593, 2864040, 2894296, 2905123, 2905178, 2911830, 2913240, 2914038, 2928801

 Patch ID:
None.

 Readme file
Date: 2012-11-07
OS: Windows
OS Version: 2003, 2008, 2008 R2
Packages:

==============================================================================
Architecture/OS  Windows Server 2003 		Windows Server 2008 / 2008 R2
==============================================================================
x86 	         CP16_SFW_51SP2_W2k3_x86.exe 	CP16_SFW_51SP2_W2k8_x86.exe
------------------------------------------------------------------------------
x64 	         CP16_SFW_51SP2_W2k3_x64.exe 	CP16_SFW_51SP2_W2k8_x64.exe
------------------------------------------------------------------------------
ia64 	         CP16_SFW_51SP2_W2k3_ia64.exe 	CP16_SFW_51SP2_W2k8_ia64.exe
------------------------------------------------------------------------------

Etrack Incidents: 
2270478, 2371250, 2329130, 2258124, 2244093, 2114928, 2210586, 2203640, 2207263, 2245816, 2267265,
2087139, 2290214, 2321015, 2330902, 2318276, 2364591, 2368399, 2397382, 2406683, 2218963, 2225878,
2426197, 2440099, 2477520, 2376010, 2415517, 2512482, 2372049, 2400260, 2554039, 2536342, 2564914,
2372164, 2536009, 2530236, 2570602, 2604814, 2614448, 2610786, 2587638, 2535885, 2635097, 2643293, 
2670150, 2676164, 2683797, 2711856, 2740872, 2327428, 2738430, 2766206, 2834385, 2851054, 2864040, 
2894296, 2905123, 2914038, 2911830, 2928801, 2860593, 2860196, 2905178, 2913240 



Fixes Applied for Products
==========================|

Storage Foundation (SFW) 5.1 SP2 for Windows

Install instructions
====================|

Download the appropriate cumulative patch (CP) executable file to a temporary location on your system. You can install the CP in a verbose mode or in a non-verbose mode. Instructions for both options are provided below.

Each CP includes the individual hotfixes that contain enhancements and fixes related to reported issues.
See "Errors/Problems Fixed" section for details.

Before you begin
----------------:

[1] On Windows Server 2003, this hotfix requires Microsoft Core XML Services (MSXML) 6.0 to be pre-installed. Download and install MSXML 6.0 before installing the hotfix.
Refer to the following link for more information:
http://www.microsoft.com/downloads/details.aspx?FamilyId=993c0bcf-3bcf-4009-be21-27e85e1857b1&displaylang=en

Microsoft has posted service pack and security updates for Core XML Services 6.0. Contact Microsoft or refer to the Microsoft website to download and install the latest updates for Core XML Services 6.0.

Refer to the following link for more information:
http://www.microsoft.com/downloads/details.aspx?FamilyId=70C92E77-9E5A-41B1-A9D2-64443913C976&displaylang=en

[2] Ensure that the logged-on user has the following privileges to install the CP on the systems:
	- Local administrator privileges
	- Debug privileges

[3] One or more hotfixes that are included with this CP may require a reboot.
Before proceeding with the installation ensure that the system can be rebooted.

[4] Symantec recommends that you close the Cluster Manager (Java Console) and the Veritas Enterprise Administrator (VEA) Console before installing this CP.

[5] One or more hotfixes that are included in this CP may require stopping the Veritas Storage Agent (vxvm) service. This causes the Volume Manager Disk Group (VMDg) resources in a cluster environment to fault.
Before proceeding with the installation, ensure that the cluster disk groups that contain the VMDg resource are taken offline or moved to another node in the cluster.

[6] Ensure that you close the Windows Event Viewer before proceeding with the installation.

[7] Hotfix_5_1_20012_88_2087139a may fail to install due to some stray rhs.exe processes that keep running even after the cluster service has been stopped. In such a case, you should manually terminate all the running rhs.exe processes, confirm that the clussvc service is stopped, and then retry installing the hotfix.

[8] Hotfix_5_1_20048_87_2670150 installation requires stopping the Storage Agent (vxvm) service which will cause the 'Volume Manager Disk Group' (VMDg) resources in a cluster environment (MSCS or VCS) to fault. If this hotfix is being applied to a server in cluster, make sure any cluster groups containing a VMDg resource are taken offline or moved to another node in the cluster before proceeding.

[10] Hotfix_5_1_20058_88_2766206 installation requires stopping the Storage Agent (vxvm) service, which will cause the Volume Manager Disk Group (VMDg) resources in a cluster environment (MSCS or VCS) to fault. If this hotfix is being applied to a server in a cluster, make sure any cluster groups containing a VMDg resource are taken offline or moved to another node in the cluster before proceeding. You should install the latest CP before installing this hotfix.



To install in the verbose mode
------------------------------:

In the verbose mode, the cumulative patch (CP) installer prompts you for inputs and displays the installation progress status in the command window.

Perform the following steps:

[1] Double-click the CP executable file to extract the contents to a default location on the system.
The installer displays a list of hotfixes that are included in the CP.
	- On 32-bit systems, the hotfix executable files are extracted to:
	  "%commonprogramfiles%\Veritas Shared\WxRTPrivates\<CPName>"
	- On 64-bit systems, the hotfix executable files are extracted to:
	  "%commonprogramfiles(x86)%\Veritas Shared\WxRTPrivates\<CPName>"

The installer also lists the hotfixes that require a reboot of the system after the installation. If a system reboot is not an option at this time, you can choose not to install these hotfixes. In that case, exit the installation and then launch the CP installer again from the command line using the /exclude option.
See "To install in a non-verbose (silent) mode" section for the syntax.

[2] When the installer prompts whether you want to continue with the installation, type Y to begin the hotfix installation.
The installer performs the following tasks:
	- Extracts all the individual hotfix executable files
	  On 32-bit systems, the files are extracted to %commonprogramfiles%\Veritas Shared\WxRTPrivates\<HotfixName>
	  On 64-bit systems, the files are extracted to %commonprogramfiles(x86)%\Veritas Shared\WxRTPrivates\<HotfixName>
	- Runs the pre-install tasks
	- Installs all the hotfixes sequentially
	- Runs the post-install tasks
The installation progress status is displayed in the command window.

[3] After all the hotfixes are installed, the installer prompts you to restart the system.
Type Y to restart the system immediately, or type N to restart the system later. You must restart the system for the changes to take effect.

Note that the installer prompts for a system restart only if hotfixes that require a reboot are included in the CP and are installed.

To install in the non-verbose (silent) mode
-------------------------------------------:

In the non-verbose (silent) mode, the cumulative patch (CP) installer does not prompt you for inputs and directly proceeds with the installation tasks. The installer displays the installation progress status in the command window.

Use the VxHFBatchInstaller.exe utility to install a CP from the command line.
The syntax options for this utility are as follows:

vxhfbatchinstaller.exe /CP:<CPName> [/Exclude:<HF1.exe>,<HF2.exe>...] [/PreInstallScript:<PreInstallScript.pl>] [/silent [/forcerestart]]

where,
	- CPName is the cumulative patch executable file name without the platform, architecture, and .exe extension.
For example, if the CP executable name is CP16_SFW_51SP2_W2K8_x64.exe, specify it as CP16_SFW_51SP2.
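As an illustration of this naming rule, the following sketch (Python, purely illustrative; the installer itself does not ship or use this code) derives the /CP: value from a CP executable file name:

```python
import re

def cp_name(executable_name):
    """Derive the /CP: value by stripping the platform token (W2k3/W2k8),
    the architecture token (x86/x64/ia64), and the .exe extension."""
    return re.sub(r"_W2[kK][38]_(x86|x64|ia64)\.exe$", "", executable_name)

cp_name("CP16_SFW_51SP2_W2K8_x64.exe")  # CP16_SFW_51SP2
```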

	- HF1.exe, HF2.exe,... represent the executable file names of the hotfixes that you wish to exclude from the installation. Note that the file names are separated by commas, with no space after a comma. The CP installer skips the mentioned hotfixes during the installation.

	- PreInstallScript.pl is the Perl script that includes the pre-installation steps. These steps forcefully kill the required services and processes in case a graceful stop request does not succeed.
    Symantec recommends that you use this option and script only in case the CP installer fails repeatedly while performing the pre-installation tasks.

	- /silent indicates the installation is run in a non-verbose mode; the installer does not prompt for any inputs during the installation.

	- /forcerestart indicates that the system is automatically restarted, if required, after the installation is complete.


Perform the following steps:

[1] From the command prompt, navigate to the directory where the CP executable file is located and then run the file to extract the contents to a default location on the system. The installer displays a list of hotfixes that are included in the CP.
	- On 32-bit systems, the hotfix executable files are extracted to:
	  "%commonprogramfiles%\Veritas Shared\WxRTPrivates\<CPName>"
	- On 64-bit systems, the hotfix executable files are extracted to:
	  "%commonprogramfiles(x86)%\Veritas Shared\WxRTPrivates\<CPName>"

The installer also lists the hotfixes that require a reboot of the system after the installation. If a system reboot is not an option at this time, you can choose not to install these hotfixes. In that case, launch the CP installer from the command line using the /exclude option.

[2] When the installer prompts whether you want to continue with the installation, type N to exit the installer.

[3] In the same command window, run the following command to begin the CP installation in the non-verbose mode:
vxhfbatchinstaller.exe /CP:<CPName> /silent

For example, to install an SFW 5.1 SP2 x64 CP for Windows Server 2008, the command is:
vxhfbatchinstaller.exe /CP:CP16_SFW_51SP2 /silent

The installer performs the following tasks:

	- Extracts all the individual hotfix executable files
	  On 32-bit systems, the files are extracted to %commonprogramfiles%\Veritas Shared\WxRTPrivates\<HotfixName>
	  On 64-bit systems, the files are extracted to %commonprogramfiles(x86)%\Veritas Shared\WxRTPrivates\<HotfixName>
	- Runs the pre-install tasks
	- Installs all the hotfixes sequentially
	- Runs the post-install tasks
The installation progress status is displayed in the command window.

[4] After all the hotfixes are installed, the installer displays a message for restarting the system.
You must restart the system for the changes to take effect.

Note that the installer prompts for a system restart only if hotfixes that require a reboot are included in the CP and are installed. The installer automatically restarts the system if you had specified the /forcerestart option in step 3 earlier.

VxHFBatchInstaller usage examples
---------------------------------:

[+] Install CP in silent mode, exclude hotfixes Hotfix_5_1_20014_87_2321015_w2k8_x64.exe and Hotfix_5_1_20018_87_2318276_w2k8_x64.exe:

vxhfbatchinstaller.exe /CP:CP16_SFW_51SP2 /Exclude:Hotfix_5_1_20014_87_2321015_w2k8_x64.exe,Hotfix_5_1_20018_87_2318276_w2k8_x64.exe /silent

[+] Install CP in silent mode, restart automatically:

vxhfbatchinstaller.exe /CP:CP16_SFW_51SP2 /silent /forcerestart

What's new in this CP
=====================|


The following hotfixes have been added in this CP:
 - Hotfix_5_1_20068_87_2913240

For more information about these hotfixes, see the "Errors/Problems Fixed" section in this readme.

Errors/Problems Fixed
=====================|

The fixes and enhancements that are included in this cumulative patch (CP) are as follows:

[1] Hotfix name: Hotfix_5_1_20001_2203640

Symptom:
This hotfix addresses a Volume Manager plug-in issue due to which the DR wizard is unable to discover a cluster node during configuration.

Description:
While creating a disaster recovery configuration, the Disaster Recovery Configuration Wizard fails to discover the cluster node at the primary site where the service group is online.

You may see the following error on the System Selection page:
V-52410-49479-116
An unexpected exception: 'Attempted to read or write protected memory. This is often an indication that other memory is corrupt. 'Failed to discover 'Veritas Volume Manager' on node '<primarysitenodename>'.

This issue occurs because the Volume Manager plug-in attempts to free unallocated memory and fails, resulting in a memory crash.

Resolution:
The code fix in the Volume Manager plug-in addresses the memory crash.

Binary / Version:
VMPlugin.dll / 5.1.20001.105

-------------------------------------------------------+  

[2] Hotfix name: Hotfix_5_1_20001_87_2207263

Symptom:
This hotfix addresses a deadlock issue where disk group deport hangs after taking a backup from NBU.

Description:
The disk group deport operation hangs due to a deadlock in the storage agent: the VDS provider makes a PRcall to other providers after acquiring the Veritas Enterprise Administrator (VEA) database lock.

Resolution:
The VEA database lock is now released before making a PRcall to other providers.

Binary / Version:
vdsprov.dll / 5.1.20001.87
 
-------------------------------------------------------+ 

[3] Hotfix name: Hotfix_5_1_20009_87_2245816

Symptom:
A volume turns RAW due to a write failure in the fsys.dll module.

Description:
This hotfix updates the fsys provider to handle an NTFS shrink bug. On Windows Server 2003, the Windows operating system by default fails any sector-level read or write larger than 32 MB. Hence, the WriteSector function in the fsys provider splits I/Os into multiple smaller I/Os.

Resolution:
Large I/Os are now split into multiple smaller I/Os.
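A minimal sketch of this kind of chunking (illustrative Python; the actual fix lives in the native fsys provider, and the 32 MB limit is taken from the description above):

```python
MAX_IO = 32 * 1024 * 1024  # per the description, I/Os above 32 MB fail on Windows Server 2003

def split_io(offset, length, max_io=MAX_IO):
    """Break one large write into contiguous (offset, length) pieces,
    each no larger than the OS limit."""
    chunks = []
    while length > 0:
        n = min(length, max_io)
        chunks.append((offset, n))
        offset += n
        length -= n
    return chunks
```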

Binary / Version:
fsys.dll / 5.1.20009.87

-------------------------------------------------------+  

[4] Hotfix name: Hotfix_5_1_20011_87_2267265

Symptom:
This hotfix addresses the following issues:
Issue 1
Attempting to bring the VMDg resource online during a MoveGroup operation on a Windows Failover Cluster (WFC) results in an RHS deadlock timeout error: [RHS] RhsCall::DeadlockMonitor: Call ONLINERESOURCE timed out for resource.

Issue 2
Request to port implementation of CLUSCTL_RESOURCE_STORAGE_GET_MOUNTPOINTS to Storage Foundation for Windows 5.1 Service Pack 2.

Description:
Issue 1
During a VMDg resource online attempt, it appears that the Resource Monitor (RHS) fails to receive a notification, resulting in an RHS deadlock timeout error: 
[RHS] RhsCall::DeadlockMonitor: Call ONLINERESOURCE timed out for resource

The resources timed out due to delayed offline and online operations. The client access list is created by communicating with the cluster service after the online and offline operations. Microsoft's OpenCluster API is used to open a connection with the cluster. The delay in the cluster offline and online operations occurred because the OpenCluster API was taking too much time.

Issue 2
Distributed File System Replication (DFSR) in a Microsoft cluster is not working properly with the Volume Manager Disk Group (VMDg) resource.

The DFSR resource was modified to identify and pick up third-party disk resources while building the volume ID table; however, the volume path is not validated when processing the volume. 

The path names for the disk resource are fetched from the disk resource handle using the resource control CLUSCTL_RESOURCE_STORAGE_GET_MOUNTPOINTS. This control code should return the list of path names hosted for the specified disk partition.

Resolution:
Issue 1
Earlier, the RPC mechanism was used with the OpenCluster API. The issue is resolved by using the LPC mechanism with the OpenCluster API instead.

Issue 2
Added code to properly handle the resource control CLUSCTL_RESOURCE_STORAGE_GET_MOUNTPOINTS.
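The mount-point list returned by this control is a double-NUL-terminated string list (MULTI_SZ-style). A sketch of unpacking such a buffer (Python for illustration; the real handler is native code in vxres.dll, and the UTF-16 encoding is an assumption based on standard Windows string conventions):

```python
def parse_multi_sz(buf, encoding="utf-16-le"):
    """Split a MULTI_SZ buffer (NUL-separated strings ending with an
    empty terminator string) into a list of path names."""
    return [s for s in buf.decode(encoding).split("\x00") if s]

sample = "F:\\\x00F:\\mount1\x00\x00".encode("utf-16-le")
parse_multi_sz(sample)  # ['F:\\', 'F:\\mount1']
```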

Binary / Version:
vxres.dll / 5.1.20011.87

-------------------------------------------------------+

[5] Hotfix name: Hotfix_5_1_20012_88_2087139a

Symptom:
While trying to stop the vxvm service, the hotfix installer intermittently fails to stop the service.

Description:
The hotfix installer intermittently fails to perform prerequisite operations. The failure occurs when the installer tries to stop the vxvm service and reports an error, even though QUERY VXVM shows the service status as stopped.

While stopping the vxvm service, the iSCSI and Scheduler providers perform certain operations that result in the failure.

Resolution:
The operations in the iSCSI and Scheduler providers are now aborted while the stop operation for the vxvm service is performed.

Binary / Version:
iscsi.dll / 5.1.20012.88
scheduler.dll / 5.1.20012.88

-------------------------------------------------------+

[6] Hotfix name: Hotfix_5_1_20012_88_2087139b

Symptom:
While trying to stop the vxvm service, the hotfix installer intermittently fails to stop the service.

Description:
The hotfix installer intermittently fails to perform prerequisite operations. The failure occurs when the installer tries to stop the vxvm service and reports an error, even though QUERY VXVM shows the service status as stopped.

While stopping the vxvm service, the iSCSI and Scheduler providers perform certain operations that result in the failure.

Resolution:
The operations in the iSCSI and Scheduler providers are now aborted while the stop operation for the vxvm service is performed.

Binary / Version:
cluster.dll / 5.1.20012.88

-------------------------------------------------------+ 

[7] Hotfix name: Hotfix_5_1_20013_87_2290214

Symptom:
This hotfix addresses an issue in the SFW component, Veritas VxBridge Service (VxBridge.exe), that causes a memory corruption or a crash in a VxBridge client process in the clustering environment.

Description:
In a clustering environment, a VxBridge client process may either crash or experience memory corruption. This occurs because VxBridge.exe tries to read beyond the memory allocated to an [in, out, string] parameter by its client.

As a result, the cluster may become unresponsive. Users may not be able to access the clustered applications and cluster administrators may not be able to connect to the cluster using the Cluster Management console.

Resolution:
This issue is fixed in VxBridge.exe process.
Instead of allocating the maximum memory, VxBridge.exe now allocates the minimum required amount of memory to the [in, out, string] parameters.
Because of the minimum memory requirement, any excess memory allocated by the VxBridge clients does not cause any issues.

Binary / Version:
VxBridge.exe / 5.1.20013.87

-------------------------------------------------------+  

[8] Hotfix name: Hotfix_5_1_20014_87_2321015

Symptom:
This hotfix fixes bug check 0x3B in VXIO.

Description:
The bug check 0x3B may happen when removing a disk of a cluster dynamic disk group.

Resolution:
This hotfix fixes bug check 0x3B in VXIO.

Binary / Version:
vxio.sys / 5.1.20014.87

-------------------------------------------------------+  

[9] Hotfix name: Hotfix_5_1_20018_87_2318276

Symptom:
This hotfix replaces a fix for an issue in vxio which breaks Network Address Translation (NAT) support in VVR. It also removes the limitation of using different private IP addresses on the primary and secondary host in a NAT environment.

Description:
VVR sends heartbeats to the remote node only if the local IP address mentioned on the RLINK is online on that node.

In a NAT environment, the primary host communicates with the NAT IP of the secondary, and since the NAT IP is never online on the secondary node, the VVR secondary does not send heartbeats to the primary host. This results in the primary not sending a connection request to the secondary.

It is also observed that, in a NAT environment, when the primary's private IP address is the same as the secondary's private IP, VVR incorrectly concludes that the primary and secondary nodes are one and the same.

Resolution:
Removed the check that made it mandatory for the local IP address to be online on a node. Also fixed the issue that prevented configuring VVR in a NAT environment when the private IP addresses of the primary and secondary hosts are the same.

Binary / Version:
vras.dll / 5.1.20018.87
vxio.sys / 5.1.20018.87

-------------------------------------------------------+

[10] Hotfix name: Hotfix_5_1_20020_87_2364591

Symptom:

This hotfix addresses the following issues:

Issue 1
This hotfix adds Thin Provisioning Reclaim support for EMC VMAX array.

Issue 2
This hotfix addresses an issue where the Storage Agent crashes when SCSI inquiries made to disks fail.

Issue 3
Mirror creation failed with auto selection of disks and track alignment enabled, even though enough space was available.

Description:
Issue 1
Added Thin Provisioning Reclaim support for EMC VMAX array on Storage Foundation and High Availability for Windows (SFW HA) Service Pack 2.

Issue 2
During startup after SFW 5.1 SP2 installation, the Storage Agent uses SCSI inquiries to get information from disks. In some cases, the Storage Agent crashes while releasing memory for the buffer passed to collect this information.

Issue 3
There was a logic error in the way a free region of disks is track-aligned. Small free regions could end up with a negative size. Even though a free region large enough for the desired allocation might exist, the negative sizes reduce the computed total free region size and thus fail the allocation.

Resolution:
Issue 1
Made changes in the DDL provider to support thin provisioning reclamation on the EMC VMAX array as a Thin Reclaim device.

Issue 2
Aligned the buffer records on 16-byte boundaries. This ensures that the data structures passed between native drivers and providers are in sync. Additionally, it avoids relying on the data translation done by WOW64 when code is running in 32-bit emulation mode.

Issue 3
Fixed the logic error in track-aligning the free space so that no region has a negative size.
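The corrected computation can be sketched as follows (illustrative Python, not the shipped provider code; the track size of 64 sectors is an arbitrary example value):

```python
def track_align(start, size, track=64):
    """Round a free region's start up to the next track boundary and clamp
    the remaining size at zero, so small regions never report a negative size."""
    aligned_start = ((start + track - 1) // track) * track
    aligned_size = max(0, size - (aligned_start - start))
    return aligned_start, aligned_size
```

With this clamp, a 5-sector region starting at sector 10 contributes 0 usable sectors instead of a negative count that would shrink the computed total free space.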

Binary / Version:
ddlprov.dll / 5.1.20020.87
pnp5.dll / 5.1.20020.87


-------------------------------------------------------+

[11] Hotfix name: Hotfix_5_1_20024_87_2368399
Symptom:
This hotfix addresses an issue where any failure while the volume shrink operation is in progress may cause file system corruption and data loss.

Description:
The volume shrink operation allows you to decrease the size of dynamic volumes.
When you start the volume shrink operation, it begins to move used blocks so as to accommodate them within the specified target shrink size for the volume.

However, this operation is not transactional. If any issues are encountered during block moves, or if the operation is halted for any reason (for example, the host reboots or shuts down, the operating system goes into a hung state, or a stop error occurs), the result can be file system corruption and data loss.
The state of the volume changes to 'Healthy, RAW'.

Resolution:
With this hotfix, a warning message is displayed each time you initiate a volume shrink operation. The message recommends that you make a backup copy of the data on the target volume (the volume that you wish to shrink) before you perform the volume shrink operation.

Depending on how you initiate the volume shrink operation (either VEA or command line), you have to perform an additional step, as described below:

    If you initiate the volume shrink operation from the VEA console, click OK on the message prompt to proceed with the volume shrink operation.

    If you initiate the volume shrink operation from the command prompt, the command fails with the warning message. Run the command again with the force (-f) option.

Note: 
Hotfix_5_1_20024_87_2368399 has been repackaged to address an installation issue that was present in an older version of this hotfix, which was released in an earlier CP.

Binary / Version:
vxassist.exe / 5.1.20024.87
climessages.dll / 5.1.20024.87
vxvmce.jar / NA
vmresourcebundle.en.jar / NA

-------------------------------------------------------+ 

[12] Hotfix name: Hotfix_5_1_20025_87_2397382

Symptom:
This hotfix addresses the following issues:
Issue 1
VVR primary hangs and a BSOD is seen on the secondary when stop and pause replication operations are performed on the configured RVGs.

Issue 2
Fixed a memory leak issue in the Veritas Volume Replicator (VVR) compression module.

Description:
Issue 1
When stopping or pausing VVR replication in TCP/IP mode, a BSOD is seen with STOP ERROR 0x12E "INVALID_MDL_RANGE".
The BSOD error occurs on the VVR secondary system due to a TCP receive bug caused by a mismatch between the Memory Descriptor List (MDL) and the underlying buffer describing it.

Issue 2
During heavy VVR compression activity, when the memory upper limit is reached, some I/O compression fails, resulting in a memory leak.

Resolution:
Issue 1
The issue related to the BSOD error has been fixed.

Issue 2
Fixed the memory leak issue for I/O compression error scenarios.

Binary / Version:
vxio.sys / 5.1.20025.87

-------------------------------------------------------+


[13] Hotfix name: Hotfix_5_1_20005_87_2218963

Symptom:
This hotfix adds thin provisioning support for HP XP arrays to SFW 5.1 SP2.
This hotfix also fixes a data corruption problem that can happen while moving sub disks of a volume when the Smart Move mirror resync feature is enabled.

Description:
Hotfix_5_1_20005_87_2218963 adds thin provisioning support for HP XP arrays to SFW 5.1 SP2.

When there are multiple sub disks on a disk and the sub disk that is being moved is not aligned to 8 bytes, there is a possibility of missing some disk blocks while syncing with the new location. This may result in data corruption.

Hotfix_5_1_20005_87_2218963 fixes the starting Logical Cluster Numbers used while syncing sub disk clusters so that no block is left out.
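The nature of such a fix can be pictured with a small sketch (illustrative Python; only the alignment unit of 8 is taken from the description above, everything else is hypothetical): expanding an unaligned range outward to alignment boundaries guarantees the sync covers every block at the edges.

```python
ALIGN = 8  # alignment unit from the description above

def sync_range(start_lcn, cluster_count, align=ALIGN):
    """Round the starting Logical Cluster Number down and the end up to
    alignment boundaries so no block at the edges is left out of the sync."""
    first = (start_lcn // align) * align              # round start down
    last = -(-(start_lcn + cluster_count) // align) * align  # round end up (ceiling)
    return first, last - first
```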

Resolution:
This hotfix adds thin provisioning support for HP XP arrays and also fixes a data corruption issue.

Binary / Version:
vxvm.dll / 5.1.20005.87
ddlprov.dll / 5.1.20005.87
vxconfig.dll / 5.1.20005.87

-------------------------------------------------------+

[14] Hotfix name: Hotfix_5_1_20028_87_2426197

Symptom:
This hotfix addresses an issue where the disks fail to appear after the storage paths are reconnected, until either the vxvm service is restarted or the system is rebooted.

Description:
This issue occurs only when SFW and EMC PowerPath 5.5 are installed on a system.

When you disconnect and reconnect disks managed using the EMC PowerPath multipathing solution, the disks fail to appear on the system and are inaccessible. The VEA GUI shows the disk arrival events, but the disk type is displayed as unknown and the status is offline.

This occurs because the SFW vxpal component creates access handles on the gatekeeper devices that remain open forever. Therefore, when the storage is reconnected, the disks remain unreadable because the previous stale handles are not closed.

Resolution:
The open handles issue is fixed in the PnP handler logic.

Binary / Version:
pnp5.dll / 5.1.20028.87

-------------------------------------------------------+

[15] Hotfix name: Hotfix_5_1_20029_87_2440099

Symptom:
If SFW fails to find the original product license, it now tries to find the license using a different API. 

Description:
This is a licensing issue where SFW fails to perform basic operations because it cannot find the installed license on a system. This occurs even when a valid license key is installed on the system.

Resolution:
SFW now uses the registry and different APIs to find licenses if the older APIs fail.

Binary / Version:
sysprov.dll / 5.1.20029.87

-------------------------------------------------------+

[16] Hotfix name: Hotfix_5_1_20032_87_2477520

Symptom:
After installing cumulative patch 1 (CP1) over SFW 5.1 SP2, users were unable to configure a dynamic cluster quorum in a Microsoft cluster.

Description:
After installing cumulative patch 1 (CP1) over SFW 5.1 SP2, users were unable to configure a dynamic cluster quorum in a Microsoft cluster because an incorrect error code was returned to the cluster service.

Resolution:
The proper error code is now returned to the cluster service when the storage (that is, a drive letter or volume GUID/name) belongs to the Disk Group resource but the path is not valid.

Binary / Version:
vxres.dll / 5.1.20032.87

-------------------------------------------------------+

[17] Hotfix name: Hotfix_5_1_20033_87_2512482

Symptom:
This hotfix addresses an issue where the cluster storage validation check fails with error 87 on a system that has SFW installed. 
This happens due to the system volume being offline.

Description:
SFW disables the automount feature on a system, which leaves the default system volume offline after each reboot. 
Cluster validation checks for access to all volumes and fails for the offline system volume with error 87.

Resolution:
Resolved the issue by bringing the system volume online during system boot so that the cluster validation check succeeds.

Binary / Version:
vxboot.sys / 5.1.20033.87

-------------------------------------------------------+

[18] Hotfix name: Hotfix_5_1_20021_87_2372049

Symptom:
Unable to create enclosures for SUN Storage Tek (STK) 6580/6780 array.

Description:
There was no support for SUN STK 6580/6780 array in the VDID library.

Resolution:
Added code to recognize SUN STK 6580/6780 array in VDID library.

Binary / Version:
sun.dll / 5.1.20021.87


-------------------------------------------------------+

[20] Hotfix name: Hotfix_5_1_20035_87_2554039

Symptom:
This hotfix addresses the issue where an orphan task appears in VEA from MISCOP_RESCAN_TRACK_ALIGNMENT.

Description:
After importing a disk group, a task appears in the VEA task bar and never disappears. This appears to be triggered by a rescan fired from the ddlprov.dll when the VDID for a device changes.  Since it is an internal task, it probably should not appear in the VEA at all.

Resolution:
The orphan task object which was created for Rescan has been removed.


Binary / Version:
vxvm.dll / 5.1.20035.87

-------------------------------------------------------+

[21] Hotfix name: Hotfix_5_1_20037_87_2536342

Symptom:
This hotfix addresses the issue where incorrect WMI information is logged into Cluster class for Dynamic disks.

Description:
When a disk group containing all GPT disks is created, SFW creates and publishes a signature into WMI, affecting the Signature and ID fields of the MSCluster_Disk class. This ID/signature changes on every reboot. If two disk groups are created with all GPT disks, then they have the same signature and ID.
This causes issues with Microsoft's SCVMM product, which uses the signature/ID to identify individual resources.

With this configuration, as soon as a Hyper-V machine is put on an all-GPT disk group, all the other GPT disk groups are marked as "in use" and SCVMM is unable to use those disks for other Hyper-V machines.

Resolution:
A unique signature is now generated and filled into the diskinfo structure of GPT disks. This signature is used as the ID while populating the MSCluster_Disk WMI class.


Binary / Version:
cluscmd.dll / 5.1.20037.87

-------------------------------------------------------+

[22] Hotfix name: Hotfix_5_1_20038_87_2564914

Symptom:
This hotfix addresses the issue where the VxVDS.exe process does not release handles after the CP2 update.

Description:
After the installation of 5.1 SP2 CP2, a high number of open handles is observed on the VxVDS process.

Resolution:
The handle leaks have been fixed.


Binary / Version:
vxvds.exe / 5.1.20038.87

-------------------------------------------------------+

[23] Hotfix name: Hotfix_3_3_1068b_2372164

Symptom:
This hotfix addresses the issue where multiple buffer overflows occur in SFW vxsvc.exe. This results in a vulnerability that allows remote attackers to execute arbitrary code on vulnerable installations of Symantec Veritas Storage Foundation. Authentication is not required to exploit this vulnerability. 

Description:
The specific flaw exists within the vxsvc.exe process. The problem, affecting the part of the server running on TCP port 2148, is an integer overflow in the function vxveautil.value_binary_unpack, where a 32-bit field holds a value that, through some calculation, can be used to create a smaller heap buffer than required to hold user-supplied data. This can be leveraged to overflow the heap buffer, allowing the attacker to execute arbitrary code in the context of SYSTEM.
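The class of bug described, truncated 32-bit size arithmetic producing an undersized heap buffer, can be sketched in Python (hypothetical function and field names; this is not the actual vxveautil code):

```python
def alloc_size(count: int, elem_size: int) -> int:
    """Size calculation truncated to 32 bits, mimicking unsigned C
    arithmetic: a large attacker-supplied count wraps around and
    yields a far smaller allocation than the data that follows."""
    return (count * elem_size) & 0xFFFFFFFF

# A benign count behaves as expected...
assert alloc_size(2, 8) == 16
# ...but a count near 2**32 / elem_size wraps the multiplication:
size = alloc_size(0x40000001, 8)   # 0x200000008, truncated to 32 bits
print(size)                        # -> 8 bytes for a multi-GB payload
```

Copying the unpacked payload into the undersized buffer then overflows the heap, which is what made the flaw exploitable.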

Resolution:
The issue has been addressed in this hotfix.


Binary / Version:
vxveautil.dll / 3.3.1068.0
vxvea3.dll / 3.3.1068.0
vxpal3.dll / 3.3.1068.0
-------------------------------------------------------+

[24] Hotfix name: Hotfix_5_1_20036_87_2536009

Symptom:
This hotfix addresses the following issues:

Issue 1
Mirror creation failed with auto selection of disks and track alignment enabled, even though enough space was available.

Issue 2
When using the GUI or CLI to create a new volume with a mirror and DRL logging included, the DRL log is track-aligned but the data volume is no longer track-aligned.

Issue 3
On a VVR-GCO configuration, when the primary site goes down, the application service group fails over to the secondary site. It is observed that MountV resource probes online on a failed node after a successful auto takeover operation.

Issue 4
The following issues were fixed for Storage Foundation for Windows (SFW) with DMP DSM and the SFW SCSI-3 setting enabled:
1. Reconnecting a previously detached storage array in a campus cluster causes all MSCS Volume Manager Disk Group (VMDg) resources to fail, and the rescan operation hangs at 28%.
2. Deporting a cluster disk group may take a long time.

Issue 5
Data corruption occurs while performing a subdisk move operation.

Issue 6
During Windows Failover Cluster Move Group operation, cluster disk group import fails with Error 3 (DG_FAIL_NO_MAJORITY 0x0003).

Issue 7
This hotfix addresses an issue in the Volume Manager (VM) component to support VCS hotfix Hotfix_5_1_20029_2536009.

Note:
Fixes for issues #1, #2, #3, #4, #5, #6 were released earlier as Hotfix_5_1_20026_87_2406683. It is now a part of this hotfix.

Description:
Issue 1
There was a logic error in the way free regions of disks are track-aligned. Small free regions may end up with a negative size. Even though a free region large enough for the desired allocation might exist, the negative sizes reduce the computed total free space and thus the allocation fails.
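The negative-size arithmetic can be modeled with a short sketch (hypothetical track size and offsets; not the actual SFW allocator):

```python
TRACK = 128  # hypothetical track size, in sectors

def align_up(offset, track=TRACK):
    """Round an offset up to the next track boundary."""
    return ((offset + track - 1) // track) * track

def aligned_size(start, length, track=TRACK):
    """Usable size of a free region after aligning its start up to a
    track boundary. The buggy code did not clamp, so regions smaller
    than the alignment gap came out negative."""
    return (start + length) - align_up(start, track)

# A 50-sector region at offset 100: aligning 100 up to 128 leaves
# 150 - 128 = 22 usable sectors. A 20-sector region at the same
# offset yields 120 - 128 = -8, a negative "free" size.
print(aligned_size(100, 50))   # -> 22
print(aligned_size(100, 20))   # -> -8

# Summing unclamped sizes understates the total free space; clamping
# each region at zero restores the correct total.
regions = [(100, 50), (100, 20)]
buggy_total = sum(aligned_size(s, l) for s, l in regions)          # 14
fixed_total = sum(max(0, aligned_size(s, l)) for s, l in regions)  # 22
```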

Issue 2
During space allocation, the free space was aligned before being allocated to the data plex and the DRL log. When the DRL log is placed first and its size is not a multiple of the track size, the data plex that follows is not track-aligned.

Issue 3
After a node crash in a VVR-GCO configuration, the mount points under VCS control are not deleted, causing them to probe online on the disk group (which is auto-imported after the reboot), leading to a concurrency violation.

For global service groups, concurrency violations are not resolved by the VCS engine automatically. Hence the MountV offline is not initiated.

Issue 4
In an SFW DMP DSM environment with SFW SCSI-3 settings, reconnecting a detached disk of an imported cluster disk group can cause the SCSI-3 release reservation logic to enter a loop, or the operation may take a long time to complete.

Issue 5
When there are multiple subdisks on a disk and the subdisk that is being moved is not aligned to 8 bytes, there is a possibility of missing some disk blocks while syncing with the new location. This may result in data corruption.

Issue 6
During a Windows Failover Cluster Move Group operation, the disk group deports successfully; however, the subsequent import attempt fails with the error "DG_FAIL_NO_MAJORITY 0x0003."

If disks are added to an existing cluster disk group that is online on node A, and the disk group is then moved to another node B, node B is unable to bring the cluster disk group online. This happens because the system view of the disks is not synchronized with the modified disk group information, so node B does not reflect the correct information.

Issue 7
As a result of the modified MountV agent offline function, there is a possibility that the volumes can be mounted externally even after the MountV offline is complete.

Resolution:
Issue 1
Fixed the logic error in track-aligning the free space so that no region has a negative size.

Issue 2
The order of allocation has been reversed. Now, the free space is assigned to the data plex first and then to the DRL log. Therefore, the data plex is always aligned, and the DRL log may or may not be aligned.

Issue 3
Stale mount points are now deleted when a cluster disk group is imported after a node reboot.

Issue 4
Corrected the logic in vxconfig.dll to do SCSI-3 release reservation effectively.

Issue 5
Fixed the starting Logical Cluster Numbers (LCNs) used while syncing subdisk clusters so that no block is left out.

Issue 6
The Windows NT disk cache is now updated for all the disks when an unexpected number of live disks is found.

Issue 7
The disk group deport operation is modified to support the fix provided in VCS hotfix Hotfix_5_1_20029_2536009.

VM now dismounts and flushes the volumes cleanly during disk group deport.


Binary / Version:
vxconfig.dll / 5.1.20036.87
vxconfig.dll / 5.1.20026.87

-------------------------------------------------------+

[25] Hotfix name: Hotfix_5_1_20039_87_2530236

Symptom:
This hotfix addresses an issue where the Resource Hosting Subsystem (RHS) process crashes when the system is restarted.

Description:
This issue occurs when VVR is configured in a Microsoft Failover Clustering environment.
RHS.exe reports a crash for mscsrvgresource.dll while restarting the system.

The System Event log may display the following:
ERROR	    1230(0x000004ce)	Microsoft-Windows-FailoverClustering Cluster resource '<resourcename>' (resource type '', DLL 'mscsrvgresource.dll') either crashed or deadlocked. The Resource Hosting Subsystem (RHS) process will now 
attempt to terminate, and the resource will be marked to run in a separate monitor.

The cluster log may display the following:
ERR   [RHS]: caught exception c0000005 in call OPENRESOURCE for <resourcename>.

This issue occurred because of an incorrect Microsoft API call made by a VVR component.

Resolution:
The VVR component now uses a different API to address the crash.


Binary / Version:
mscsrvgresource.dll / 5.1.20039.87

-------------------------------------------------------+

[26] Hotfix name: Hotfix_5_1_20041_87_2604814

Symptom:
This hotfix addresses an issue where the Volume Manager Diskgroup (VMDg) resource in a Microsoft Cluster Server (MSCS) cluster may fault when you add or remove multiple empty disks (typically 3 or more) from a dynamic disk group.

Description:
This issue occurs on 64-bit systems where SFW is used to manage storage in a Microsoft Cluster Server (MSCS) environment.
When adding or removing disks from a dynamic disk group, the disk group resource (VMDg) faults and the cluster group fails over. The VEA console shows the disk group in a deported state.

The cluster log displays the following message:
ERR   [RES] Volume Manager Disk Group <resourcename>: LDM_RESLooksAlive: *** FAILED for <resourcename>, status =
0, res = 0, dg_state = 35

The system event log contains the following message:
ERROR 1069 (0x0000042d)	clussvc	<systemname>	Cluster resource <resourcename> in Resource Group <groupname> failed.

MSCS uses the Is Alive / Looks Alive polling intervals to check the availability of the storage resources. While disks are being added or removed, SFW holds a lock on the disk group until the operation is complete. However, if the Is Alive / Looks Alive query arrives while the disk add/remove operation is in progress, SFW ignores the lock it holds on the disk group and incorrectly reports the loss of the disk group (where the disks are being added/removed) to MSCS. As a result, MSCS faults the storage resource and initiates a failover of the group.


Resolution:
The issue is fixed in the SFW component that holds the lock on the disk group. As a result, SFW now responds to the MSCS Is Alive / Looks Alive query only after the disk add/remove operation is complete.


Binary / Version:
cluscmd64.dll / 5.1.20041.87

-------------------------------------------------------+

[27] Hotfix name: Hotfix_5_1_20045_87_2587638

Symptom:
This hotfix addresses the following issues:

Issue 1
If a disk group contains a large number of disks, the disk reservation is delayed, resulting in the defender node losing the disks to the challenger node.

Issue 2
VVR Primary goes into a hang state while initiating a connection request to the Secondary. This is observed when the Secondary machine returns an error message which the Primary is unable to understand.

Issue 3
Rebooting an EMC CLARiiON storage processor causes Storage Foundation for Windows dynamic disks to be flagged as removed, resulting in I/O failures.

Issue 4
I/O operations hang in the SFW driver, vxio, where VVR is configured for replication.

Issue 5
VVR causes a fatal system error and results in a system crash (bug check).

Note:
Fix for issue #1, #2, and #3 was released earlier as Hotfix_5_1_20040_87_2570602 and fix for issue #5 was released earlier as Hotfix_5_1_20043_87_2535885. They are now a part of this hotfix.

Description:
Issue 1
In a split brain scenario, the active node (defender node) and the passive nodes (challenger nodes) try to gain control over the majority of the disks. 

If the disk group contains a large number of disks, it takes a considerable amount of time for the defender node to reserve the disks.

The delay may result in the defender node losing the reservation to the challenger nodes.

This delay occurs because the SCSI-3 disk reservation algorithm performs the disk reservation operation in a serial order.

Issue 2
VVR Primary initiates a connection request to the Secondary and waits for an acknowledgement. The Secondary may reply with an ENXIO error, signaling that the Secondary Replicated Volume Group's (RVG's) SRL was not found. The Primary is unable to interpret this error message and remains in the same waiting state for an acknowledgement from the Secondary.

Any transaction from VOLD is blocked since the RLINK is still waiting for an acknowledgement from the Secondary. This leads to new I/Os piling up in vxio, as a transaction is already in progress.

Issue 3
In a multipath configuration, each system has one or more storage paths from multiple storage processors (SPs). If a storage processor is rebooted, the Storage Foundation for Windows (SFW) component vxio.sys may sometimes mark the disks as removed and fail the I/O transactions even though the disks are accessible from another SP. Because the I/O failed, the clustering solution fails over the disks to the passive node.

The following errors are reported in the Windows Event logs: 

INFORMATION Systemname vxio: <Hard Disk> read error at block 5539296 due to disk removal  

INFORMATION Systemname vxio: Disk driver returned error c00000a3 when vxio tried to read block 257352 on <Hard Disk>  

WARNING     Systemname vxio: Disk <Hard Disk> block 9359832 (mountpoint X:): Uncorrectable write error 

Issue 4
I/O operations on a system appear to hang due to the SFW driver, vxio.sys, in an environment where VVR is configured.

When an application performs an I/O operation on volumes configured for replication, VVR writes the I/O operation 
to the SRL log volume and completes the I/O request packet (IRP). The I/O is written asynchronously to the data volumes. 

If the application initiates another I/O operation whose extents overlap with any of the I/O operations that are queued 
to be written to the data volumes, the new I/O is kept pending until the queued I/O requests are complete.

Upon completion, the queued I/O signals the waiting I/O to proceed. However, in certain cases, due to a race condition,
the queued I/O fails to send the proceed signal to the waiting I/O. The waiting I/O therefore remains in the waiting state forever.
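The overlap test that decides whether a new I/O must wait can be sketched as follows (hypothetical structures; the actual vxio driver works on kernel IRPs, not Python objects):

```python
from dataclasses import dataclass

@dataclass
class PendingIO:
    start: int   # first block of the queued write
    length: int  # number of blocks

def overlaps(a_start, a_len, b_start, b_len):
    """True if two block extents [start, start + length) intersect."""
    return a_start < b_start + b_len and b_start < a_start + a_len

def must_wait(new_start, new_len, queue):
    """A new I/O must wait if its extent overlaps any write that is
    still queued for the data volumes; otherwise it could touch blocks
    whose SRL copy has not yet reached the data volume."""
    return any(overlaps(new_start, new_len, p.start, p.length)
               for p in queue)

queue = [PendingIO(0, 64), PendingIO(128, 64)]
print(must_wait(32, 16, queue))   # -> True: intersects the 0..63 write
print(must_wait(64, 32, queue))   # -> False: falls in the 64..127 gap
```

The hang described above arises in the signaling step, not this check: the queued I/O completes but, due to the race, never wakes the I/O that this check put to sleep.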

Issue 5
VVR may sometimes cause a bug check on a system.

The crash dump file contains the following information:
ATTEMPTED_SWITCH_FROM_DPC (b8)
A wait operation, attach process, or yield was attempted from a DPC routine.
This is an illegal operation and the stack track will lead to the offending code and original DPC routine.

The SFW component, vxio, processes internally generated I/O operations in the disk I/O completion routine itself.
However, the disk I/O completion routine runs at the DPC/dispatch level. Therefore, any function in vxio that requires a context switch does not get processed.
The broken function calls result in vxio causing a system crash.

Resolution:
Issue 1
The SCSI-3 reservation algorithm has been enhanced to address this issue.

The algorithm now tries to reserve the disks in parallel, thus reducing the total time required for SCSI reservation on the defender node.
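The gain from issuing the reservations concurrently can be illustrated with a toy timing model (a sleep stands in for the per-disk SCSI-3 reservation round trip; this is not the actual SFW algorithm):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def reserve_disk(disk_id, delay=0.05):
    """Stand-in for one per-disk SCSI-3 reservation command,
    simulated as a fixed-latency I/O round trip."""
    time.sleep(delay)
    return disk_id

disks = range(8)

# Serial reservation: total time grows linearly with the disk count.
t0 = time.monotonic()
for d in disks:
    reserve_disk(d)
serial = time.monotonic() - t0

# Parallel reservation: commands are in flight concurrently, so the
# total time stays close to a single round trip regardless of count.
t0 = time.monotonic()
with ThreadPoolExecutor(max_workers=8) as pool:
    list(pool.map(reserve_disk, disks))
parallel = time.monotonic() - t0

assert parallel < serial
```

With many disks, the serial total can exceed the defender's reservation window, which is exactly the split-brain loss described above.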

Issue 2
If the Secondary is unable to find the SRL, it now returns a proper error message that the Primary understands. The Primary sets the SRL header error on the RLINK and pauses it.

Issue 3
The device removal handling logic has been enhanced to address this issue. The updated SFW component vxio.sys includes the fix.

Issue 4
The race condition in the vxio driver has been fixed.

Issue 5
The vxio component is enhanced to ensure that the internally generated I/O operations are not processed as part of the DPC level disk I/O completion routine.


Binary / Version:
vxio.sys / 5.1.20045.87

-------------------------------------------------------+

[28] Hotfix name: Hotfix_5_1_20042_87_2614448

Symptom:
This hotfix addresses an issue where Windows displays the format volume dialog box when you use SFW to create volumes.

Description:
While creating volumes using the New Volume Wizard from the Veritas Enterprise Administrator (VEA), if you choose to assign a drive letter or mount the volume as an NTFS folder, Windows displays the format volume dialog box.

This issue occurs because, as part of volume creation, SFW creates raw volumes, assigns mount points, and then proceeds with formatting the volumes.
Mounting raw volumes explicitly causes Windows to invoke the format dialog box.

Note that the volume creation is successful and you can cancel the Windows dialog box and access the volume.

Resolution:
SFW now assigns drive letters or mount paths only after the volume format task is completed.


Binary / Version:
vxvm.dll / 5.1.20042.87

-------------------------------------------------------+


[29] Hotfix name: Hotfix_5_1_20044_87_2610786

Symptom:
This hotfix addresses an issue where the disk group import operation fails to complete if one or more disks in the disk group are not readable.

Description:
As part of the disk group import operation, the SFW component, vxconfig, performs a disk scan to update the disk group configuration information.


It acquires a lock on the disks in order to read the disk properties. 
In case one or more disks in the disk group are not readable, the scan operation returns without releasing the lock on the disks. 
This lock blocks the disk group import operation.

Resolution:
This issue has been addressed in the vxconfig component. The disk scan operation now releases the lock on the disks even if one or more disks are not readable.


Binary / Version:
vxconfig.dll / 5.1.20044.87

-------------------------------------------------------+

[30] Hotfix name: Hotfix_5_1_20047_87_2635097

Symptom:
This hotfix addresses an issue related to Veritas Volume Replicator (VVR) where replication hangs and the VEA console becomes unresponsive if replication is configured over the TCP protocol and the VVR compression feature is enabled.

Description:
This issue may occur when replication is configured over TCP and VVR compression is enabled.

VVR stores the incoming write requests from the primary system in a dedicated memory pool, NMCOM, on the secondary system.

When the NMCOM memory is exhausted, VVR keeps trying to process an incoming I/O request until it gets the required memory from the NMCOM pool on the secondary.

Sometimes the NMCOM memory pool may get filled up with out-of-sequence I/O packets. As a result, the waiting I/O request fails to acquire the memory it needs and goes into an infinite loop.

VVR cannot process the out-of-sequence packets until the waiting I/O request is executed. The waiting I/O request cannot get the memory because it is occupied by the out-of-sequence I/O packets. This results in a logjam.

The primary may initiate a disconnect if it fails to receive an acknowledgement from the secondary. But the waiting I/O request is in an infinite loop, and hence the disconnect also goes into a waiting state.

In such a case, if a transaction is initiated it will not succeed and will also stall all the new incoming I/O threads, resulting in a server hang.

Resolution:
This issue is fixed in VVR.
The error condition has been resolved by exiting from the loop if an RLINK disconnect is initiated.
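The shape of the fix, exiting the memory-wait loop once an RLINK disconnect is initiated, can be sketched like this (toy classes; not the actual vxio data structures):

```python
class NmcomPool:
    """Toy model of the NMCOM pool: capacity minus the bytes held by
    out-of-sequence packets that cannot be drained."""
    def __init__(self, size, held):
        self.size, self.held = size, held

    def free(self):
        return self.size - self.held

class RLink:
    def __init__(self):
        self.disconnect_initiated = False

def wait_for_nmcom_memory(pool, needed, rlink, max_spins=1000):
    """Fixed wait loop: keep retrying for memory, but bail out when a
    disconnect is initiated. The pre-fix loop had no such exit, so a
    pool full of out-of-sequence packets hung the thread forever."""
    for _ in range(max_spins):
        if pool.free() >= needed:
            return True
        if rlink.disconnect_initiated:
            return False        # abandon the request; RLINK is closing
    return False

pool = NmcomPool(size=4096, held=4096)   # pool exhausted: logjam state
rlink = RLink()
rlink.disconnect_initiated = True        # primary times out, disconnects
print(wait_for_nmcom_memory(pool, 512, rlink))  # -> False: loop exits
```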

Binary / Version:
vxio.sys / 5.1.20047.87
vvr.dll / 5.1.20047.87

-------------------------------------------------------+

[31] Hotfix name: Hotfix_5_1_20046_87_2643293

Symptom:
This hotfix provides Thin Provisioning Reclaim support for Fujitsu ETERNUS DX80 S2/DX90 S2 arrays.

Description:
Added Thin Provisioning Reclaim support for Fujitsu ETERNUS DX80 S2/DX90 S2 array on SFW 5.1 SP2.

Resolution:
Made changes in DDL provider to claim Fujitsu ETERNUS DX80 S2/DX90 S2 array LUN as a Thin Reclaim device.


Binary / Version:
ddlprov.dll / 5.1.20046.87

-------------------------------------------------------+

[32] Hotfix name: Hotfix_5_1_20048_87_2670150

Symptom:
This hotfix addresses the issue where multiple entries for the warning ERROR_MORE_DATA(234) get logged for the control code CLUSCTL_RESOURCE_STORAGE_GET_DISK_INFO.

Description:
In a few cases, while performing the Move Group operation in a Microsoft Clustering environment, customers have seen multiple entries for the warning ERROR_MORE_DATA(234) logged for the control code CLUSCTL_RESOURCE_STORAGE_GET_DISK_INFO.
These warning messages do not affect the functionality of the Volume Manager Disk Group (VMDg) resources and, therefore, can be ignored.

Resolution:
Log entry for the ERROR_MORE_DATA(234) warning for the control code CLUSCTL_RESOURCE_STORAGE_GET_DISK_INFO has been removed from the cluster log.

Note: 
This hotfix is applicable to Windows Server 2003 only.


Binary / Version:
vxres.dll / 5.1.20048.87

-------------------------------------------------------+

[33] Hotfix name: Hotfix_5_1_20049_87_2676164

Symptom:
This hotfix addresses an issue related to the SFW component, vxres.dll, that causes a crash in the Resource Hosting Subsystem (RHS) process.

Description:
This issue occurs when there is a failure in the cluster connection calls made by SFW.
The failed connection causes a crash in the RHS.exe process due to an exception in vxres.dll.

Resolution:
The RHS process was crashing because an uninitialized variable was passed during the cluster connection calls.
The component has been updated to address the issue.

Binary / Version:
vxres.dll / 5.1.20049.87

-------------------------------------------------------+


[34] Hotfix name: Hotfix_5_1_20052_87_2711856 

Symptom:
VVR replication between two sites fails if VxSAS service is configured with a local user account that is a member of the local administrators group.

Description:
While configuring VVR replication between the Primary and Secondary sites in Windows Server 2008, the replication fails with the following error:
Permission denied for executing this command. Please verify the VxSAS service is running in proper account on all hosts in RDS.

This happens if the user used for the VxSAS service is a member of the administrators group and the User Access Control (UAC) is enabled.


Resolution:
The vras.dll file has been modified to resolve the issue; it now checks the users in the administrators group to determine whether the specified user has administrative permissions.


Binary / Version:
vras.dll / 5.1.20052.87

-------------------------------------------------------+

[35] Hotfix name: Hotfix_5_1_20002_2327428

Symptom: 
This hotfix addresses an issue with the product installer component that causes a failure in SFW Thin Provisioning space reclamation.

Description:
This issue occurs after you add or remove SFW features or repair the product installation from Windows Add/Remove Programs.

The SFW Thin Provisioning space reclamation begins to fail after rebooting the systems. This issue occurs because the product installation component erroneously modifies a vxio service registry key.

Resolution:
This issue has been fixed in the product installation component.
The updated component no longer modifies the registry key.

Note:

The hotfix installation steps vary depending on the following cases:
- Product installed but issue did not occur
- Product installed and issue occurred

Perform the steps depending on the case.

Case A: Product installed but issue did not occur
Use this case if you have installed SFW or SFW HA in your environment but have not yet encountered this issue. This could be because you have not added SFW features or run a product repair from Windows Add/Remove Programs.

Perform the following steps:
1. Install this hotfix on all systems. See "To install the hotfix using the GUI" or "To install the hotfix using the command line" section in this readme.

2. After replacing the file on all the systems, you can add or remove SFW features or run a product repair on the systems.


Case B: Product installed and issue occurred
Use this case if you have installed the product and the issue has occurred on the systems.

Perform the following steps:
1. Install this hotfix on all systems. See "To install the hotfix using the GUI" or "To install the hotfix using the command line" section in this readme. 

2. After installing the hotfix, perform one of the following:
   - From the Windows Add/Remove Programs, either add or remove an SFW feature or run a product Repair on the system. 
   - Set the registry key HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\vxio\Tag to 8 in decimal. 

3. Select the desired options on the product installer and complete the workflow.
Reboot the system when prompted.

4. Repeat these steps on all the systems where this issue has occurred.


Binary / Version:
VM.dll / 5.1.20002.267  

-------------------------------------------------------+

[36] Hotfix name: Hotfix_5_1_20056_87_2738430

Symptom:

This hotfix addresses the following issues:

Issue 1
An active cluster dynamic disk group faults after a clear SCSI reservation operation is performed.

Issue 2
VVR replication never resumes when the Replicator Log is full and DCM gets activated for the Secondary host resynchronization.

Issue 3
Snapback caused too many blocks to be resynchronized for the snapback volume.

Note:
Fixes for issues #1 and #2 were released earlier as Hotfix_5_1_20055_87_2740872. They are now a part of this hotfix.

Description:
Issue 1
This error occurs when a clear SCSI reservation operation, such as bus reset or SCSI-3 clear reservation, is performed on an 
active cluster dynamic disk group. During the operation, the cluster dynamic disk group faults.

An error message similar to the following is logged in the Event Viewer:
Cluster or private disk group has lost access to a majority of its disks. Its reservation thread has been stopped.

Issue 2
This issue occurs while performing a VVR replication. During replication, if the Storage Replicator Log (SRL) becomes full, then Data Change Map (DCM) gets activated for the Secondary host resynchronization. Because the resynchronization is a time-consuming process, VVR stops sending blocks and the replication never resumes even though it is active and shows the status as "connected autosync resync_started".

Issue 3
During a snapback operation, SFW determines the changed blocks to be synced using a per-plex bitmap and a global accumulator.
It updates the accumulator with the per-plex bitmap for the changed blocks and syncs all the changed blocks from the original volume to the snapback volume.

If another volume is then snapped back, SFW again updates the accumulator with the per-plex bitmap.
However, the accumulator still holds the older entries from the previous snapback operation, so extra blocks are copied to the new snapback volume.

For example, consider the following scenario:
1. Create two snapshots of volume F: (G: and H:)
2. Make G: writable and copy files to G:
3. Snap back G: using data from F:
4. Snap back H: using data from F:

Step 4 should have nothing to resynchronize since there is no change to F: or H: above.
However, the vxio trace shows that the number of dirty regions to be synchronized is the same as in step 3.
This is because the global accumulator was updated at step 3, which causes extra blocks to be resynced.
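The scenario can be modeled with sets standing in for bitmaps (a simplification: the readme says the actual fix updates the per-plex map, while this sketch models the equivalent effect of resetting the stale accumulator between operations):

```python
def snapback_regions(plex_bitmap, accumulator, clear_after=True):
    """Return the dirty regions to resync for one snapback.

    Each bitmap is modeled as a set of dirty region numbers. The buggy
    behavior ORed the per-plex bitmap into a global accumulator without
    resetting it between operations, so a later snapback inherited the
    earlier volume's dirty regions."""
    accumulator |= plex_bitmap
    to_sync = set(accumulator)
    if clear_after:
        accumulator.clear()   # modeled fix: no stale state carries over
    return to_sync

acc = set()
# Snapback of G: after files were copied to it -> regions 1..3 dirty.
print(snapback_regions({1, 2, 3}, acc, clear_after=False))  # {1, 2, 3}
# Snapback of the untouched H: should sync nothing, but the stale
# accumulator makes it resync G:'s regions again (the reported bug).
print(snapback_regions(set(), acc, clear_after=False))      # {1, 2, 3}

acc = set()
snapback_regions({1, 2, 3}, acc)      # with the modeled fix applied...
print(snapback_regions(set(), acc))   # -> set(): nothing to resync
```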

Resolution:
Issue 1
To resolve this issue, SFW retries the SCSI reservation operation.

Issue 2
The vxio.sys file has been modified to resolve this issue.

Issue 3
During the snapback operation, the per-plex map is now updated to correctly reflect the changed blocks to be synced.

Binary / Version:
vxio.sys / 5.1.20055.87
vxconfig.dll / 5.1.20036.87

-------------------------------------------------------+

[37] Hotfix name: Hotfix_5_1_20058_88_2766206 

Symptom:
Issue 1
Some VVR operations fail if the RVG contains a large number of volumes.

Issue 2
Storage Agent crashes and causes SFW operations to stop responding in a 
VVR configuration.

Issue 3
Some VVR operations fail if VxSAS is configured with a domain user account.

Description:
Issue 1
This issue occurs while performing certain VVR operations, such as creating or deleting a Secondary, in a Replicated Volume Group (RVG) with a large number of volumes (more than 16 volumes). In such cases, if the combined length of all volume names under the RVG is greater than 512 bytes, then some VVR operations fail.
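The failure mode, a fixed buffer sized for the combined length of all volume names, can be sketched as follows (hypothetical buffer layout and names; not the actual VVR code):

```python
MAX_NAMES_BUF = 512  # fixed buffer (bytes) for the joined volume list

def pack_volume_names(names, sep=","):
    """Join volume names into one fixed-size buffer, modeling the
    pre-fix behavior: the joined, separator-delimited string must fit
    in 512 bytes, so an RVG with many volumes fails even though each
    individual name is short."""
    packed = sep.join(names).encode("ascii")
    if len(packed) > MAX_NAMES_BUF:
        raise ValueError("combined volume-name length exceeds buffer")
    return packed

# 17 volumes with 30-character names: 17*30 + 16 separators = 526 bytes,
# just over the limit, so the operation fails.
names = [f"rvg_data_volume_{i:02d}".ljust(30, "x") for i in range(17)]
try:
    pack_volume_names(names)
except ValueError as e:
    print(e)

# 16 such volumes (16*30 + 15 = 495 bytes) still fit.
assert len(pack_volume_names(names[:16])) == 495
```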

Issue 2
On a Veritas Storage Foundation for Windows (SFW) computer configured with Veritas Volume Replicator (VVR), the Storage Agent (vxvm service) crashes and causes SFW operations to stop responding. This happens because of memory corruption in Microsoft's Standard Template Library (STL), which is used by code in SFW.
For more information on STL memory corruption, see http://support.microsoft.com/kb/813810.

The following messages are logged in the Event Viewer:

The Service Control Manager tried to take a corrective action (Restart the service) after the unexpected termination of the Veritas Storage Agent service, but this action failed with the following error: %%1056
- The Veritas Storage Agent service terminated unexpectedly. It has done this 1 time(s).

The following corrective action will be taken in 60000 milliseconds: Restart the service.

Issue 3
This issue occurs while performing certain VVR operations, such as adding a Secondary host to an already configured VVR RDS. If the Veritas Volume Replicator Security Service (VxSAS) is configured with a domain user account, then some VVR operations fail because VVR cannot authenticate the user credentials.

Resolution:
Issue 1
This issue has been resolved by modifying the code to handle a larger number of volumes (up to 215 volumes).

Issue 2
To resolve this issue, the SFW code has been modified to not use STL in frequently-exercised VVR modules.

Issue 3
This issue has been resolved by using a fully qualified domain name for authenticating the user credentials.

Binary / Version:
vras.dll / 5.1.20058.88
vvr.dll / 5.1.20058.88

-------------------------------------------------------+

[38] Hotfix name: Hotfix_5_1_20059_87_2834385b

Symptom:

This hotfix addresses the following issues:

Issue 1
Performing a disk group import/deport operation in a cluster environment fails due to a VxVDS refresh operation.

Issue 2
This hotfix blocks Thin Provisioning Reclaim support for a snapshot volume and for a volume that has one or more snapshots.

Issue 3
This hotfix provides Thin Provisioning Reclaim support for Huawei S5600T arrays.

Issue 4
The VDS Dynamic software provider causes an error in VDS.

Issue 5
The Dynamic Disk Group Split and Join operations take a long time to complete.

Note:
Fixes for issues #1 and #2 were released earlier as Hotfix_5_1_20034_87_2400260. Fix for issue #3 was released as Hotfix_5_1_20051_87_2683797. These fixes are now a part of this hotfix.

Description:
Issue 1
The VxVDS refresh operation interferes with the disk group import/deport operation, resulting in a timeout and delay. This happens because both the refresh and the import/deport processes try to read the disk information at the same time.

Issue 2
It is observed that if a volume or its snapshot is reclaimed, then performing the snapback operation on such a volume causes data corruption.

Issue 3
Added Thin Provisioning Reclaim support for Huawei S5600T array on SFW 5.1 SP2.

Issue 4
This issue occurs while performing the DG DEPORT operation. This happens because the DG DEPORT alerts were not handled successfully by VxVDS. The following error message is displayed: Unexpected provider failure. Restarting the service may fix the problem.

Issue 5
The hotfix addresses an issue where the VDS Refresh operation overlaps with the Dynamic Disk Group Split and Join (DGSJ) operations, which causes a latency. This issue is observed after upgrading SFW.

Resolution:
Issue 1
The refresh operation is now aborted if a disk group import/deport operation is in progress. Instead of doing a full refresh on all the disk groups, the refresh operation is now performed only on disk groups that are being imported/deported.

Issue 2
Blocked Thin Provisioning Reclaim operation on snapshot volume and on volume that has one or more snapshots.

Issue 3
Made changes in DDL provider to claim Huawei S5600T array LUN as a Thin Reclaim device.

Issue 4
The hotfix fixes the DG DEPORT component, which now handles all alerts successfully.

Issue 5
This hotfix fixes the binaries that cause the latency.


Binary / Version: 
vxvds.exe / 5.1.20059.87 
vxvm.dll / 5.1.20059.87 
vxvm_msgs.dll / 5.1.20059.87

-------------------------------------------------------+

[39] Hotfix name: Hotfix_5_1_20060_87_2851054

Symptom:
Memory leak in Veritas Storage Agent service (vxpal.exe)

Description:
This hotfix addresses a memory leak issue in Veritas Storage Agent service (vxpal.exe) when the mount point information is requested from either the MSCS or VCS cluster.

Resolution: 
This hotfix fixes a binary that causes a memory leak in the Veritas Storage Agent service (vxpal.exe). 

Binary / Version:
mount.dll / 5.1.20060.87

-------------------------------------------------------+

[40] Hotfix name: Hotfix_5_1_20062_87_2864040 

Symptom:
The vxprint CLI crashes when used with '-l' option

Description:
This hotfix addresses an issue where the vxprint CLI crashes when used with the '-l' option. The issue occurs when the read policy on a mirrored volume is set to a preferred plex.

Resolution: 
The hotfix fixes the binary that caused vxprint CLI to crash.

Binary / Version:
vxprint.exe / 5.1.20062.87 

-------------------------------------------------------+

[41] Hotfix name: Hotfix_5_1_20063_87_2894296

Symptom:
A VMDg resource on MSFC fails to get the correct MountVolumeInfo value

Description:
This hotfix addresses an issue where the MountVolumeInfo property of VMDg resource does not populate correctly. This occurs because the VMDg resource contains a raw volume.

Resolution: 
The hotfix fixes the binary that caused the incorrect population of the MountVolumeInfo property of the VMDg resource.

Binary / Version:
vxres.dll / 5.1.20063.87

-------------------------------------------------------+

[42] Hotfix name: Hotfix_5_1_20064_87_2905123  

Symptom:
Not able to create volumes; VEA very slow for refresh and rescan operations

Description:
In SFW, this issue occurs while creating a volume or performing refresh or rescan operations from VEA. If the system has several disks with OEM partitions, volume creation fails and VEA takes a very long time to perform the refresh and rescan operations. Because the FtDisk provider locks the database for a long time while processing OEM partitions, lock contention occurs with other operations and providers, which eventually slows down all the operations.

Resolution: 
This issue has been resolved by optimizing the way the FtDisk provider acquires and releases the lock.
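The lock-scope optimization can be illustrated with a small threading sketch. The provider and database names below are assumptions; only the idea of holding the database lock for a shorter time comes from the description above.

```python
import threading

# Illustrative sketch of the lock-scope optimization described above.
# The names are hypothetical stand-ins, not actual FtDisk provider code.

db_lock = threading.Lock()
database = []

def parse_oem_partition(partition):
    # Stand-in for the slow per-partition processing.
    return partition.upper()

def process_partitions_before_fix(partitions):
    # Before: the lock is held across the entire slow scan, so every
    # other provider that needs the database blocks behind it.
    with db_lock:
        for p in partitions:
            database.append(parse_oem_partition(p))

def process_partitions_after_fix(partitions):
    # After: do the slow work outside the lock; hold it only to commit.
    results = [parse_oem_partition(p) for p in partitions]
    with db_lock:
        database.extend(results)

process_partitions_after_fix(["oem1", "oem2"])
print(database)  # ['OEM1', 'OEM2']
```

Both versions produce the same result; the second simply shortens the window during which other operations are blocked on the lock.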

Binary / Version:
ftdisk.dll / 5.1.20064.87

-------------------------------------------------------+

[43] Hotfix name: Hotfix_5_1_20066_87_2914038

Symptom:
Storage Agent crashes on startup

Description:
This hotfix addresses an issue where Storage Agent crashes during startup because of an unhandled exception while accessing an invalid pointer.

Resolution: 
This issue has been resolved by handling the exception in the code.

Binary / Version:
vdsprov.dll / 5.1.20066.87

-------------------------------------------------------+

[44] Hotfix name: Hotfix_5_1_20067_87_2911830

Symptom:
Not able to create a volume with more than 256 disks

Description:
This issue occurs while creating a volume with more than 256 disks. This happens because the license provider check limits the maximum number of disks allowed per volume to 256, regardless of the volume layout. Note that this limit is imposed to control the maximum number of records (such as plex, subdisk, disk, volume, and RVG records) that get stored in the private region per disk group (the maximum is 2922 records).

Resolution: 
This issue has been resolved by increasing the maximum number of allowed disks per volume to 512. The new limit of disks per volume is valid as long as the maximum number of records per private region does not cross its own limit.
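The two checks above can be sketched as a simple validation routine. The 512-disk and 2922-record limits come from the description; the function name and the record accounting are illustrative assumptions, not actual SFW code.

```python
# Hypothetical sketch of the volume-creation limit check described above.
# The 512-disk and 2922-record limits come from the text; everything else
# (names, record accounting) is illustrative, not actual SFW code.

MAX_DISKS_PER_VOLUME = 512   # raised from 256 by this hotfix
MAX_RECORDS_PER_DG = 2922    # private-region record limit per disk group

def can_create_volume(num_disks, existing_dg_records, new_records_needed):
    """Return True if the volume passes both limit checks."""
    if num_disks > MAX_DISKS_PER_VOLUME:
        return False
    # The larger disk limit is only valid while the private-region
    # record count stays within its own limit.
    return existing_dg_records + new_records_needed <= MAX_RECORDS_PER_DG

# A 300-disk volume, rejected before this fix (old limit: 256), now
# passes as long as the disk group's record budget is not exhausted.
print(can_create_volume(300, 1000, 600))   # True
print(can_create_volume(600, 1000, 600))   # False: too many disks
print(can_create_volume(300, 2500, 600))   # False: record limit exceeded
```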

Binary / Version:
sysprov.dll / 5.1.20067.87

-------------------------------------------------------+

[45] Hotfix name: Hotfix_5_1_20069_88_2928801

Symptom:
The vxtune rlink_rdbklimit command does not work as expected

Description:
This issue occurs when using the vxtune rlink_rdbklimit command to set a value for the RLINK_READBACK_LIMIT tunable. The command fails because vxtune.exe stores an incorrect value for RLINK_READBACK_LIMIT instead of the one provided by the user; the value is internally converted into kilobytes instead of bytes.

Resolution: 
This issue has been resolved by correcting the code that does the kilobyte to byte conversion.
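The unit mix-up can be illustrated with a minimal sketch. The function names, and the assumption that the tunable should be stored in bytes exactly as supplied, are hypothetical; only the kilobyte-versus-byte confusion comes from the description above.

```python
# Illustrative sketch of the unit-conversion bug described above.
# Assumption: the tunable should be stored in bytes as supplied; the
# function names are hypothetical, not actual vxtune code.

def store_limit_buggy(value_bytes):
    # Bug: the value is converted into kilobytes, so a wrong
    # (1024x smaller) value is stored for RLINK_READBACK_LIMIT.
    return value_bytes // 1024

def store_limit_fixed(value_bytes):
    # Fix: keep the value in bytes, as the user provided it.
    return value_bytes

requested = 4 * 1024 * 1024              # user sets a 4 MB limit
print(store_limit_buggy(requested))      # 4096 -- not what the user asked for
print(store_limit_fixed(requested))      # 4194304 -- value preserved
```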

Binary / Version:
vxtune.exe / 5.1.20069.88

-------------------------------------------------------+

[46] Hotfix name: Hotfix_3_3_1071_2860593 

Symptom:
The vxprint command may fail when it is run multiple times

Description:
The CORBA clients, such as vxprint, use a CSF (Common Services Framework) API called CsfRegisterEvent to register for events with the VxSVC service. This issue occurs when you run the vxprint command multiple times and the CsfRegisterEvent API fails. However, the issue is intermittent and may not always happen. The vxprint command fails with the following error: V-107-58644-930

Resolution: 
This issue has been resolved by retrying the underlying function that the CsfRegisterEvent API calls in csfsupport3.dll.
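The retry approach can be sketched as a small wrapper. The wrapper, the retry count, and the flaky stand-in function are illustrative assumptions; only the idea of retrying the underlying registration call comes from the description above.

```python
# Sketch of the retry approach described above. The names and retry
# count are illustrative assumptions, not actual CSF code.

def with_retries(func, attempts=3):
    """Call func, retrying on failure; re-raise after the last attempt."""
    last_exc = None
    for _ in range(attempts):
        try:
            return func()
        except RuntimeError as exc:   # intermittent registration failure
            last_exc = exc
    raise last_exc

calls = {"count": 0}

def flaky_register_event():
    # Hypothetical stand-in for the CsfRegisterEvent call path, which
    # fails intermittently with error V-107-58644-930.
    calls["count"] += 1
    if calls["count"] < 3:
        raise RuntimeError("V-107-58644-930")
    return "registered"

print(with_retries(flaky_register_event))  # 'registered' on the third try
```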

Binary / Version:
csfsupport3.dll / 3.3.1071.0

-------------------------------------------------------+

[47] Hotfix name: Hotfix_5_1_20068_87_2913240

Symptom:
This hotfix addresses the following issues:
Issue 1
MountV resource faults because SFW removes a volume due to delayed device removal request.

Issue 2
Expanding volume using "mirror across disks by Enclosure" assigns disks to wrong plexes.

Issue 3
Basic quorum resource (physical disk resource) faults while Disk Group resource tries to get DgID.

Description:
Issue 1
This issue occurs when SFW removes a volume in response to a delayed device removal request. Because of this, the VCS MountV resource faults.

Issue 2
This issue occurs when expanding a volume using the "mirror across disks by Enclosure" option. During this, the Expand Volume command assigns disks to the wrong plexes, splitting the plexes across enclosures. This happens if the arrays have long and similar names except for the last few numbers; for example, EMC000292602920 and EMC000292602853. The function to compare names does not handle such names in the strings correctly and, therefore, treats different arrays as the same.
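The comparison defect can be reproduced with a short sketch. The fixed-length-prefix comparison shown here is only a plausible illustration of the bug described above, not the actual SFW code; the enclosure names are the ones from the example.

```python
# Illustrative sketch of the enclosure-name comparison bug described
# above. If a comparison only looks at a fixed-length prefix, two
# different EMC enclosures that differ only in the trailing digits
# look identical. (The prefix length is a hypothetical assumption.)

def same_enclosure_buggy(name_a, name_b, prefix_len=12):
    # Hypothetical bug: only the first prefix_len characters are compared.
    return name_a[:prefix_len] == name_b[:prefix_len]

def same_enclosure_fixed(name_a, name_b):
    # Fix: compare the full strings.
    return name_a == name_b

a, b = "EMC000292602920", "EMC000292602853"
print(same_enclosure_buggy(a, b))  # True  -- different arrays treated as same
print(same_enclosure_fixed(a, b))  # False -- correctly distinguished
```

With the full-string comparison, disks from the two enclosures are assigned to the intended plexes instead of being split across enclosures.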

Issue 3
When the Volume Manager Disk Group resource tries to get the dynamic disk group ID (DgID) information, SFW clears the reservation of all the disks, including those that are part of Microsoft Cluster Server (MSCS). However, Microsoft Failover Cluster (MSFC) disk resources do not try to re-reserve the disks and, therefore, they fault.

Resolution:
Issue 1
This issue has been resolved by not disabling the mount manager interface instance if it is active when the device removal request arrives.

Issue 2
This issue has been resolved by modifying the name comparison function. 

Issue 3
This issue has been resolved. Now, while clearing the disk reservations, SFW skips the offline and basic disks.

Binary / Version:
vxio.sys / 5.1.20068.87 
vxconfig.dll / 5.1.20068.87

-------------------------------------------------------+

Known issues
============|

The following section describes the issues related to the individual hotfixes that are included in this CP.

[1] Hotfix_5_1_20005_87_2218963
The following issues may occur:

- Changing the drive letter of a volume when a reclaim task for that volume is in progress will abort the reclaim task. The reclaim task will appear to have completed successfully, but not all of the unused storage will be reclaimed.

Workaround:
If this happens, perform another reclaim operation on the volume to release the rest of the unused storage.

- Reclaim operations on a striped volume that resides on thin provisioned disks in HP XP arrays may not reclaim as much space as you expect. Reclaiming is done in contiguous allocation units inside each stripe unit. The allocation unit size for XP arrays is large compared to a volume's stripe unit size, so free allocation units are often split across stripe units. In that case they are not contiguous and cannot be reclaimed.

- If you use the SFW installer to change the enabled feature set after SFW is already installed, reclaiming free space from a thin provisioned disk no longer works. The installer incorrectly changes the Tag variable in the vxio service registry key from 8 to 12. That allows LDM to intercept and fail the reclaim requests SFW sends to the disks. This is a problem only on Windows Server 2008.

Workaround:
To work around this problem, manually change the value of HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\vxio\Tag back to 8 and reboot after changing the enabled SFW features on Windows Server 2008.

[2] Hotfix_5_1_20024_87_2368399
This issue is applicable only if you uninstall this hotfix.
If you perform the volume shrink operation from the VEA Console after removing this hotfix, VEA still displays the warning message. This occurs because the stale files residing in the VEA cache are not removed during the uninstallation.

Workaround:
Perform the following steps after you have removed the hotfix.

1. Close the Veritas Enterprise Administrator (VEA) Console.
2. Stop the Veritas Storage Agent service.
Type the following at the command prompt:
net stop vxvm
3. Delete the Client extensions cache directories from the system:
    On Windows Server 2003, delete the following:
    - %allusersprofile%\Application Data\Veritas\VRTSbus\cedownloads
    - %allusersprofile%\Application Data\Veritas\VRTSbus\Temp\extensions

    On Windows Server 2008, delete the following:
    - %allusersprofile%\Veritas\VRTSbus\Temp\extensions
    - %allusersprofile%\Veritas\VRTSbus\cedownloads

4. Start the Veritas Storage Agent service.
Type the following at the command prompt:
    net start vxvm
5. Repeat steps 1 to 4 on all the systems where you have uninstalled this hotfix.
6. Launch VEA to perform the volume shrink operation.


[3] Hotfix_5_1_20059_87_2834385b

SFW DSM for Huawei does not work as expected for Huawei S5600T Thin Provisioning (TP) LUNs. 
If the active paths are disabled, the I/O fails over to the standby paths. When the active paths are restored, the I/O should fail back to them. However, for Huawei S5600T TP LUNs, the I/O continues to run on both the active and the standby paths even after the active paths are restored. This issue occurs because the Huawei S5600T TP LUN does not support A/A-A explicit trespass.

The SFW DSM for Huawei functions properly for Huawei S5600T non-TP LUNs.

Workaround: 

To turn off the A/A-A explicit trespass, run the following commands from the command line:
   vxdmpadm setdsmalua explicit=0 harddisk5
   vxdmpadm setarrayalua explicit=0 harddisk5


Additional notes
================|

[+] To confirm the list of cumulative patches installed on a system, run the following command from the directory where the CP files are extracted:
vxhfbatchinstaller.exe /list

The output of this command displays a list of cumulative patches and the hotfixes that are installed as part of a CP. This command also displays the hotfixes that are included in a CP but are not installed on the system.

[+] To confirm the installation of the hotfixes, run the following command:
vxhf.exe /list

The output of this command lists the hotfixes installed on the system.

[+] For details about a particular hotfix, run the following command:
vxhf.exe /display:<HotfixName>

Here, <HotfixName> is the name of the hotfix file without the platform and the .exe extension.

[+] The CP installer (vxhfbatchinstaller.exe) creates and stores logs at:
"%allusersprofile%\Application Data\Veritas\VxHF\VxHFBatchInstaller.txt"

[+] The hotfix installer (vxhf.exe) creates and stores logs at:
"%allusersprofile%\Application Data\Veritas\VxHF\VxHF.txt"

[+] For general information about the hotfix installer (vxhf.exe), please refer to the following technote:
http://www.symantec.com/business/support/index?page=content&id=TECH73446

[+] To view a list of hotfixes already installed on a system, please refer to the steps mentioned in the following technote:
http://www.symantec.com/business/support/index?page=content&id=TECH73438

[+] For information on uninstalling a hotfix, please refer to the steps mentioned in the following technote:
http://www.symantec.com/business/support/index?page=content&id=TECH73443


Disclaimer
==========|

This fix is provided without warranty of any kind including the warranties of title or implied warranties of merchantability, fitness for a particular purpose and non-infringement. Symantec disclaims all liability relating to or arising out of this fix. It is recommended that the fix be evaluated in a test environment before implementing it in your production environment. When the fix is incorporated into a Storage Foundation for Windows maintenance release, the resulting Hotfix or Service Pack must be installed as soon as possible. Symantec Technical Services will notify you when the maintenance release (Hotfix or Service Pack) is available if you sign up for notifications from the Symantec support site http://www.symantec.com/business/support and/or from Symantec Operations Readiness Tools (SORT) http://sort.symantec.com.


