BCV copies can be used for backup, restore, decision support, and application testing. BCV devices contain no data after the initial Symmetrix configuration, so the full establish operation must be used the first time the standard devices are paired with their BCV devices.

1.  Associate the BCV Device for pairing:

To perform standard/BCV pairing, the standard and BCV mirror devices of your production images must be members of the same device group. (Note: creating a device group and pairing devices was covered in a previous post.)
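For reference, a minimal sequence for creating such a device group and adding the standard devices might look like the following (the group name and device numbers are hypothetical; adjust for your environment):

symdg create ProdDg -type REGULAR

symld -g ProdDg add dev 0AB0

symld -g ProdDg add dev 0AB1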


To associate BCV001 with device 0ABC, enter:

  symbcv -g DgName -sid SymmID associate dev 0ABC BCV001


Or to associate a range of devices to a device group, enter:


symbcv -g DgName -sid SymmID associateall dev -RANGE 0ABC:0DEF


Note: -sid SymmID is optional if the device group is already defined in your SYMCLI environment variables.


2. Unmount the BCV device:


Prior to using devices for BCV operations, the BCV device should be Windows formatted and assigned a drive letter.


If using basic disks on the Windows platform, you must unmount the BCV devices. If using dynamic disks, you must deport the entire TimeFinder device group. For basic disks, use the syminq command to determine the SymDevName of the potential BCV device. For dynamic disks, use the TimeFinder symntctl command to determine the volume and disk group name as follows:


symntctl list -volume [-dg DgName]


Note that the terms device group and dynamic disk group are used interchangeably with this command.


Unmount the selected BCV device as follows (with TimeFinder command):


symntctl unmount -drive z


Where z is the designated drive letter. If an error occurs, check for an open handle and clear this condition.


For Veritas dynamic disks only, you must deport the disk group and rescan using the following commands:


vxdg deport -g DgName


symntctl rescan


3. Fully Establish BCV and STD:


To obtain a copy of the data on a standard device, the BCV device of the pair must be established.


To initiate a full establish on a specific standard/BCV device pair, target the standard device:


symmir -g DgName -full establish DEV001


Fully Establish all pairs in a group. To initiate a full establish on all BCV pairs in a device group, enter:


symmir -g DgName -full establish


Verify the completed (synchronized) establish operation. To verify when the BCV pairs reach the full copied or Synchronized state, use the verify action as follows:


symmir -g DgName -i 20 verify


With this interval, the status message is displayed every 20 seconds until the pairs are established.
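If you prefer the verification loop to stop after a fixed number of checks instead of running until synchronization, a count can be added; for example, the following sketch (using the standard interval/count options) checks every 20 seconds, at most 30 times:

symmir -g DgName -i 20 -c 30 verify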


Rescan the drive connections. For Dynamic disks only on a Windows host, you should rescan for drive connections visible to the host:


symntctl rescan


After a standard/BCV pair has been fully established and subsequently split, you can save establish (resync) time by performing an establish without the -full option. This updates the BCV copy with only the tracks that changed on the standard device while the pair was split. To perform an incremental establish, omit the -full option and target the standard device of the pair:


symmir -g DgName establish DEV001


Optionally, you can also collectively target all devices in a device group, a composite group, or the devices defined in a device file (a sketch of the device file format follows the commands below):


  symmir -g DgName establish [-full]


  symmir -g CgName establish [-full]


  symmir -file FileName establish [-full]
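The device file referenced by -file is a plain text file listing one standard/BCV pair per line, standard device first. A sketch (the device numbers are hypothetical; confirm the exact format expected by your Solutions Enabler release):

0ABC 0BCD
0ABD 0BCE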


4. Prepare (freeze) Production database for a TimeFinder Split:


To prepare to split the synchronized BCV device from the production standard device, you must suspend I/O at the application layer or unmount the production standard prior to executing the split operation. To freeze I/O at the database level, enter:


symioctl freeze -type DbType [object]
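For example, to freeze a hypothetical SQL Server database named ProdDB (assuming symioctl has already been configured with credentials for that instance):

symioctl freeze -type SQLServer ProdDB

After the split in step 5 completes, release I/O again with the corresponding thaw action:

symioctl thaw -type SQLServer ProdDB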


Ensure any residual cache on the production host is fully flushed to disk. To ensure all pending unwritten production file system entries are captured, enter the TimeFinder command:


symntctl flush -drive z


Wait 30 to 60 seconds for the flush operation to complete.   


5. Split the BCV devices:


To split all the BCV devices from the standard devices in the production device group, enter:


symmir -g DgName split


To split a specific standard/BCV pair, target the logical device name in the group:


symmir -g DgName split DEV001


6. Verify the split operation completes:


To verify when the BCV device is completely split from the standard, use the verify action as follows:


symmir -g DgName -i 20 verify -split -bg


With this interval, the status message is displayed every 20 seconds until the pair is split.


7. Rescan for dynamic disks:


For dynamic disks only, you should rescan for drive connections visible to the host:


symntctl rescan


8. Mount the BCV device:


After splitting the BCV device, you can mount the device with captured data on another host and reassign the drive letter.


For basic disks, use the TimeFinder command:


symntctl mount -drive z -dg DgName


For Dynamic disks, use the TimeFinder command:


symntctl mount -drive z -vol VolName -dg DgName | -guid VolGuid


For Veritas dynamic disks only, you must import the disk group on the mounting host and rescan, as follows:


vxdg import -dg DgName


symntctl rescan


For Dynamic disk only (without Veritas), you can use the Microsoft diskpart command to select the disk and import the device using the online and import actions.
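A rough diskpart session for that case might look like the following (the disk number is hypothetical; identify the correct disk with list disk first, and note that the exact online syntax varies slightly between Windows versions):

diskpart
DISKPART> list disk
DISKPART> select disk 2
DISKPART> online
DISKPART> import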


Note: The symntctl command is available with TimeFinder/IM (Integration Module).

VMware hosts require a few mandatory FA bit settings before SAN storage can be provisioned to them. Beyond the FA bits, a series of steps is required, from installing HBAs, HBA firmware, and drivers, through zoning, mapping, and masking devices, to configuring kernel files and devices.

Let's assume we have already identified the Symmetrix FA ports for the VMware host and completed zoning on the switch. It is better to have a separate FA pair for the VMware host. (You can connect the VMware host to two FA pairs if you have enough FA resources available and are going to deploy a critical application that requires more performance.)

You can identify the FA ports available on the Symmetrix with:

symcfg list -connections


Verify the port flag settings:

symcfg list -fa -p -v

(specify the FA number and port where your host is connected/zoned)
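For example, for a hypothetical Symmetrix 1234 with the host zoned to FA 7A, port 0, the check might look like the following (a sketch; the exact flag spelling can vary between Solutions Enabler releases):

symcfg -sid 1234 list -fa 7A -p 0 -v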


The following FA bits/flags need to be set (enabled):

i) Common Serial Number (C)
ii) VCM State (VCM) (ACLX for V-MAX)
iii) SCSI 3 (SC3)
iv) SPC 2
v) Unique World Wide Name (UWWN)
vi) Auto-negotiation (EAN)
vii) Point to Point (P)

Note: FA bit/flag requirements may vary depending on the Symmetrix, but in most cases you need to enable the above bits for a VMware host.

Create a command file for setting the FA port flags, for example faflags.cmd, with entries like the following (one set port line per FA port being enabled):

# For C-Bit
set port FA:Port Common_Serial_Number=enable;
set port FA:Port Common_Serial_Number=enable;

# For VCM-Bit
set port FA:Port VCM_State=enable;
set port FA:Port VCM_State=enable;

# For SC3-Bit
set port FA:Port scsi_3=enable;
set port FA:Port scsi_3=enable;

# For SPC2-Bit
set port FA:Port SPC2_Protocol_Version=enable;
set port FA:Port SPC2_Protocol_Version=enable;

# For UWWN-Bit
set port FA:Port Unique_WWN=enable;
set port FA:Port Unique_WWN=enable;

# For EAN-Bit
set port FA:Port Auto_Negotiate=enable;
set port FA:Port Auto_Negotiate=enable;

# For PTOP-Bit
set port FA:Port Init_Point_to_Point=enable;
set port FA:Port Init_Point_to_Point=enable;
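As a concrete illustration, for a hypothetical FA pair consisting of director 7A port 0 and director 8A port 0, the C-bit entries would look like the following (the remaining flags follow the same pattern; confirm the exact director:port notation and flag names against your Enginuity/Solutions Enabler level):

set port 7A:0 Common_Serial_Number=enable;
set port 8A:0 Common_Serial_Number=enable;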

Once the command file is prepared, preview it and then commit it:

symconfigure -sid SymmID -f faflags.cmd preview

symconfigure -sid SymmID -f faflags.cmd commit

Verify the port flag settings once again; the required FA flags should be enabled by now:

symcfg list -fa -p -v

You are now ready to provision SAN storage for the VMware host.

CLARiiON FLARE release 29 (04.29.000.5.001) introduces support for several new features, as follows:

1) Virtualization-aware Navisphere Manager - Discovery of VMware clients was always difficult in earlier releases, but FLARE 29 enables CLARiiON CX4 users and VMware administrators to reduce infrastructure reporting time from hours to minutes. Earlier releases allowed only a single IP address to be assigned to each iSCSI physical port. With FLARE 29, the ability to define multiple virtual iSCSI ports on each physical port has been added, along with the ability to tag each virtual port with a unique VLAN tag. VLAN tagging has also been added to the single Management Port interface. Note that the IP address and VLAN tag assignments should be carefully coordinated with those supporting the network infrastructure where the storage system will operate.

2) Built-in policy-based spin down of idle SATA II drives for CLARiiON CX4 - Lowers power requirements in environments such as test and development, whether physical or virtual. Features include simple management via a "set it and forget it" policy, complete spin down of inactive drives during periods of zero I/O activity, and automatic spin up of drives when the first I/O request is received.

3) Virtual Provisioning Phase 2 - Support for MirrorView and SAN Copy replication on thin LUNs has been added.

4) Search feature – Provides users with the ability to search for a wide-variety of objects across their storage systems. Objects can be either logical (e.g., LUN) or physical (e.g., disks).

5) Replication roles - Three additional roles have been added to Navisphere: "Local Replication Only", "Replication", and "Replication/Recovery".

6) Dedicated VMware software files - VMware software files (i.e. NaviSecCLI, Navisphere Initialization Wizard) are now separate from those of the Linux Operating System.

7) Software filename standardization - All CLARiiON software filenames follow a standardized naming convention beginning with FLARE Release 29.

8) Changing SP IP addresses - SP IP addresses can now be changed without rebooting the SP. Only the Management Server needs to be rebooted from the Setup page, which results in no storage system downtime.

9) Linux 64-bit server software – Native 64-bit Linux server software files simplify installation by eliminating the need to gather and load 32-bit DLLs.

10) Solaris x64 Navisphere Host Agent - Release 29 marks the introduction of Solaris 64-bit Navisphere Host Agent software. This Host Agent is backward compatible with older FLARE releases.


Let's look at the EMC Open Replicator product. Open Replicator enables remote point-in-time copies with full or incremental copy capabilities to be used for high-speed data mobility, remote vaulting, migration, and distribution between EMC Symmetrix DMX and other qualified storage arrays. Open Replicator leverages the high-end Symmetrix storage architecture and offers unmatched deployment flexibility and massive scalability.

EMC Open Replicator is tightly integrated with EMC TimeFinder and SRDF families of local and remote solutions. Open Replicator Functionality:

- Protect lower-tier applications at remote locations.

- Push or pull data from Symmetrix DMX arrays to other qualified storage arrays in SAN/WAN environments.

- Create remote point-in-time copies of local production volumes for purposes ranging from data vaulting to remote testing and development.

- Ensure immediate data availability to host applications via Symmetrix DMX consistency technologies.

Considerations when using EMC Open Replicator for migration:


1) SYMAPI cannot validate third party or non-visible storage systems.
2) To protect against potential data loss due to a SAN failure or other connectivity issue during a hot pull operation, use the donor update option. When enabled, this feature causes all writes to the control device from the host to be immediately copied to the remote device as well. Because the data is fully copied to both the remote device and the control devices, if a failure occurs, the session can safely be terminated and created again to fully recover from any mid-copy failure.
3) Open Replicator uses FA resources. If you are using this utility in a production environment, verify with the storage administrator that FA bandwidth assessments have been considered and that appropriate throttling parameters (pace or ceiling) have been set.
4) When using BCVs, the BCVs must be “visible” to the remote storage array. Thus, they must be mapped to an FA and the FA must be zoned to the destination storage. We highly recommend that BCVs not be mapped to the same FA as the control standard devices to avoid a negative impact on host I/O performance.
5) If a configuration uses a thin device as the destination in a pull or push copy operation, the thin device becomes fully allocated, because Open Replicator creates a full-volume copy.
6) When performing an Open Replicator migration, always use the -v qualifier on the create command. This ensures that, should the session fail, useful information is returned about which volume caused the error, allowing you to more quickly recognize zoning or masking issues (a basic session sketch follows this list).
7) Issuing create commands prior to Open Replicator migration activities allows confirmation that there will be no zoning or masking issues discovered during the migration window. This technique will only be successful if no changes have been made to the Symmetrix environment between issuance of the create and copy commands.
8) It is better and easier to use an Open Replicator management host for preparing, executing, and monitoring migration sessions than using one of the systems with volumes involved in the activity.
9) For Veritas file systems, PowerPath devices, and Oracle databases, devices must be frozen just before the activate is performed and thawed as soon as the activate completes. Use the following options with the symrcopy activate command, when applicable:

-vxfs MountPoint (Veritas file system)

-ppath SrcDevs (PowerPath devices)

-rdb -dbtype DbType -db DbName (database)

10) The device specified in the command line must match the device in the device file or the activate will fail.
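Putting several of the points above together, a minimal hot-pull migration session might be driven as follows (the device file name and session name are hypothetical, and the exact option set should be confirmed against your Solutions Enabler release):

symrcopy create -file devicepairs.txt -copy -pull -hot -donor_update -name migr01 -v

symrcopy activate -file devicepairs.txt

symrcopy query -file devicepairs.txt

symrcopy terminate -file devicepairs.txt

Monitor with query until all sessions reach the Copied state before terminating.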

PowerPath Migration Enabler (PPME) is a host-based migration tool from EMC that allows you to migrate data between storage systems with little or no interruption to data access. It can be used in conjunction with underlying technologies such as EMC Invista and Open Replicator. PPME uses the PowerPath filter drivers to provide non-disruptive or minimally disruptive migrations. Only specific host platforms are supported by PPME; check the EMC Support Matrix for supported host systems. Among its features, PPME supports pseudo-to-pseudo, native-to-native, and native-to-pseudo device migrations.

Consider the following when designing and configuring PPME:

- Remote devices do not have to be the same RAID type or meta-configuration.
- Target devices must be the same size as, or larger than, the source (control) device.
- Target directors act as initiators in the SAN.
- Contrary to the recommendations for Open Replicator, the source device remains online during the "hot pull."
- The two storage systems involved in the migration must be connected directly or through a switch, and they must be able to communicate.
- Every port on the target array that allows access to the target device must also have access to the source device through at least one port on the source array. This can run counter to some established zoning policies.
- Since PPME with Open Replicator uses FA resources, determine whether this utility will be used in a production environment. In addition, consider FA bandwidth assessments so that appropriate throttling parameters (pace or ceiling) can be set.
- The powermig throttle parameter sets the pace of an individual migration by using the pace parameter of Open Replicator (see the sketch after this list). A lower throttle makes the migration faster but may impact application I/O performance; a higher throttle makes it slower; the default is five (the midpoint).
- When setting a ceiling to limit Open Replicator throughput for a director/port: the ceiling value is set as a percentage of a director/port's total capacity; it can be set for a given director, a port, a director and port, or all directors and ports in the Symmetrix array; to set ceiling values, you must use symrcopy set ceiling directly (powermig does not provide a way to do this).
- Once the hot pull has completed, remove or re-use the source device.
- Do not forget to clean up the zoning once you have completed migration activities.
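As a rough end-to-end sketch of a PPME migration using Open Replicator as the underlying technology (the device names and handle value are hypothetical, and option spellings should be confirmed against the PPME documentation for your platform):

powermig setup -src emcpower10 -tgt emcpower22 -techType OR
powermig sync -handle 1
powermig query -handle 1
powermig selectTarget -handle 1
powermig commit -handle 1
powermig cleanup -handle 1

While the sync is running, the pace of an individual migration can be adjusted per handle with the powermig throttle command.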
I hope this will be useful for migration planning or for selecting migration tools. I will try to explain topics such as PPME with Open Replicator and Solutions Enabler in detail in coming posts.

In today's storage market and technology, storage tiers are defined by availability, functionality, performance, and cost. In practice, data can move up and down tiers as time and business requirements dictate.

Tier "0" is not new in storage market but for implementation purposes it has been difficult to accommodate because it requires best performance and lowest latency. Enterprise Flash disks (Solid State Disks) capable to meet this requirement. It is possible to get more performance for company most critical applications. The performance can be gained through using Flash drives supported in VMAX and DMX-4 systems. One Flash drive can deliver IOPS equivalent to 30 15K RPM hard disk drives with approximately 1 ms application response time. Flash memory achieves performance and the lowest latency ever available in the enterprise class storage array.

Tier "0" applications can be closely coupled with other storage tiers within the Symmetrix family for consistency and efficiency, reducing the company's cost of manual data layout or data migration from old disks to new high-speed disks.

Tier "0" storage can be used to accelerate online transaction processing, improving performance with large indices and frequently accessed database tables (e.g., Oracle and DB2 databases and SAP R/3). Tier 0 can also improve performance in batch processing and shorten batch-processing windows.

Tier "0" storage performance helps applications that need the lowest possible latency and response time. The following applications can benefit from Tier 0 storage:

- Algorithmic trading
- Data modeling
- Trade optimization
- Realtime data/feed processing
- Contextual web advertising
- Other realtime transaction systems
- Currency exchange and arbitrage

Tier "0" storage is most beneficial for applications with a high proportion of random read misses. If the random read miss percentage is low, the application will not see much performance difference, since writes and sequential reads/writes already leverage the Symmetrix cache to achieve the lowest possible response time.

For example, if the read hit percentage is high (>90%) compared to read misses, as in applications such as DSS or streaming media, the improvement provided by Tier 0 storage will likely not be enough to be cost-effective.
