PowerPath Migration Enabler (PPME) is a host-based migration tool from EMC that allows you to migrate data between storage systems with little or no interruption to data access. It can be used in conjunction with underlying technologies such as EMC Invista and Open Replicator. PPME uses the PowerPath filter drivers to provide non-disruptive or minimally disruptive migrations. Only specific host platforms are supported by PPME, so check the EMC support matrix for supported host systems. PPME supports pseudo-to-pseudo, native-to-native, and native-to-pseudo device migrations.

Consider the following when designing and configuring PPME:

- Remote devices do not have to be the same RAID type or meta-configuration.
- Target devices must be the same size as, or larger than, the source device.
- Target directors act as initiators in the SAN.
- Contrary to the recommendations for Open Replicator, the source device remains online during the “hot pull.”
- The two storage systems involved in the migration must be connected directly or through a switch, and they must be able to communicate.
- Every port on the target array that allows access to the target device must also have access to the source device through at least one port on the source array. This can be counter to some established zoning policies.
- Since PPME with Open Replicator uses FA resources, determine whether this utility will be used in a production environment. In addition, perform FA bandwidth assessments so that appropriate throttling parameters (that is, pace or ceiling) can be set.
- The powermig throttle parameter sets the pace of an individual migration by using the pace parameter of Open Replicator:
  - A lower throttle value makes the migration faster, but may impact application I/O performance.
  - A higher throttle value makes the migration slower.
  - The default is five (the midpoint).
- When setting a ceiling to limit Open Replicator throughput for a director/port:
  - The ceiling value is set as a percentage of a director/port’s total capacity.
  - The ceiling can be set for a given director, port, director and port combination, or all directors and ports in the Symmetrix array.
  - To set ceiling values, you must use symrcopy set ceiling directly; powermig does not provide a way to do this (see the sketch after this list).
- Once the hot pull has completed, you can remove or reuse the source device.
- Do not forget to clean up the zoning once you have completed the migration activities.
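
As a rough, hedged sketch of how such a migration might be driven from the host with PPME and Open Replicator (the device names, migration handle, director/port, ceiling value, and Symmetrix ID below are hypothetical, and exact option names can vary by PowerPath and Solutions Enabler release):

    # Set up the migration (hypothetical source and target pseudo devices)
    powermig setup -techType OR -src emcpower10 -tgt emcpower22

    # Start the copy and monitor progress (the handle is returned by setup)
    powermig sync -handle 1
    powermig query -handle 1

    # Adjust the pace of this migration; lower values are faster, 5 is the default
    powermig throttle -handle 1 -throttleValue 3

    # Once synchronized, commit to the target and clean up
    powermig commit -handle 1
    powermig cleanup -handle 1

    # Ceilings are set with symrcopy, not powermig (assumed syntax; 50% on director 7A, port 0)
    symrcopy -sid 1234 set ceiling 50 -dir 7a -p 0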
I hope this is useful for migration planning and for selecting migration tools. I will try to explain topics such as PPME with Open Replicator and Solutions Enabler in more detail in coming posts.

In the current storage market, storage tiers are defined by availability, functionality, performance, and cost. Data can move up and down the tiers over time as business requirements change.

Tier "0" is not new in storage market but for implementation purposes it has been difficult to accommodate because it requires best performance and lowest latency. Enterprise Flash disks (Solid State Disks) capable to meet this requirement. It is possible to get more performance for company most critical applications. The performance can be gained through using Flash drives supported in VMAX and DMX-4 systems. One Flash drive can deliver IOPS equivalent to 30 15K RPM hard disk drives with approximately 1 ms application response time. Flash memory achieves performance and the lowest latency ever available in the enterprise class storage array.

Tier 0 applications can be closely coupled with other storage tiers within the Symmetrix series for consistency and efficiency, reducing the cost of manual data layout and of migrating data from older disks to new high-speed disks.

Tier 0 storage can be used to accelerate online transaction processing, improving performance for large indices and frequently accessed database tables in applications such as Oracle, DB2, and SAP R/3. Tier 0 can also improve batch processing performance and shorten batch processing windows.

Tier 0 storage will help applications that need the lowest possible latency and response time. The following applications can benefit from Tier 0 storage:

- Algorithmic trading
- Data modeling
- Trade optimization
- Realtime data/feed processing
- Contextual web advertising
- Other realtime transaction systems
- Currency exchange and arbitrage

Tier 0 storage is most beneficial for applications with a high random read-miss rate. If the random read-miss percentage is low, the application will not see much performance difference, since writes and sequential reads/writes already leverage the Symmetrix cache to achieve the lowest possible response time.

For example, if the read-hit percentage is high (greater than 90 percent) compared to read misses, as in applications such as DSS or streaming media, the improvement provided by Tier 0 storage is unlikely to be large enough to be cost-effective.

Consider what happens when you create a point-in-time image across multiple devices. It is easy to create a point-in-time image of an entire set of logical devices at the same time if you shut down the application, so that no I/O occurs while the image is being created. That, however, is a big problem in today’s environment, where every company is looking for zero-downtime solutions.
EMC's solution to this problem is called Enginuity Consistency Assist (ECA). When you create a set of sessions and invoke Enginuity Consistency Assist, the Symmetrix aligns the I/O of those devices and halts all I/O from the host systems very briefly (much faster than the applications can detect) while it creates the sessions, then resumes normal operation without any application impact.
A TimeFinder consistent split (using TimeFinder/Consistency Groups) allows you to split off a consistent, restartable image of an Oracle database instance within seconds, with no interruption to the online Oracle database instance.
- Allows users to split off a dependent-write-consistent, restartable image of an application without interrupting online services
- Uses TimeFinder/Consistency Groups to defer write I/O at the Symmetrix before a split
- A consistent split can be performed by any host running Solutions Enabler that is connected to the Symmetrix
- Tested and available on platforms including HP-UX, Solaris, AIX, Linux, and Windows
- No database shutdown and no requirement to put the database into backup mode (Oracle)

Using TF/CG, consistent splits help avoid the inconsistencies and restart problems that can occur when using Oracle hot-backup mode without quiescing the database.
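
As a minimal, hedged sketch of a consistent split with TimeFinder/Mirror and Solutions Enabler (assuming a hypothetical device group named OracleDG whose BCV pairs are already established):

    # Verify the BCV pairs in the group are synchronized
    symmir -g OracleDG verify

    # Split all BCV pairs in the group as a single dependent-write-consistent operation
    symmir -g OracleDG split -consistent -noprompt

    # Confirm the split completed
    symmir -g OracleDG query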
The major benefits of TF/CG are:
• No disruption to the online Oracle database to obtain a Point-in-Time image
• Provides a consistent, re-startable image of the Oracle database for testing new versions or database patch updates before deploying for use in production environments
• Can be used to obtain a business point of consistency for business restart requirements for which Oracle has been identified as one of multiple databases for such an environment.

The same benefits apply using TF/CG in a clustered environment as in a non-clustered environment:
- No disruption to the online Oracle database to obtain a Point-in-Time image in an Oracle single-instance environment or when using Oracle Real Application Clusters
- Provides a consistent, re-startable image of the Oracle database for testing new versions or database patch updates before deploying for use in clustered production environments
- Can be used to obtain a business point of consistency for business restart requirements for which Oracle has been identified as one of multiple databases for such an environment.

Auto-provisioning requires Enginuity 5874 or later. It simplifies Symmetrix provisioning by allowing you to create a group of devices (similar to a storage group on CLARiiON), a front-end port group, and a host initiator group, and then associate these groups with each other in a masking view.

The following are the basic steps for provisioning a Symmetrix array using Auto-Provisioning:-

1) Create a Storage Group
2) Create a Port Group
3) Create an Initiator Group
4) Associate the groups in a Masking View.


Creating a Storage Group:- A storage group is a component of both Auto-Provisioning Groups and FAST (I will discuss FAST in a later post); both require Enginuity 5874. The maximum number of storage groups allowed per array is 8,192. A storage group can contain up to 4,096 devices, and a Symmetrix device can belong to more than one storage group.

Note:- By default, dynamic LUN addresses are assigned to each device. You can manually assign host LUN addresses for the devices you are adding to the group by clicking Set LUN Address in the Storage Group dialog box.

Creating a Port Group:- A port can belong to more than one port group, and each port must have the ACLX flag enabled. For example, if you want FAs 5A and 12A for a Windows host, you could create a port group named WIN_PortGrp or Win_FA5A_FA12A_PrtGrp.

Creating an Initiator Group:- The maximum number of initiator groups allowed per Symmetrix array is 8,000. An initiator group can contain up to 32 initiators of any type, or can contain other initiator groups (cascaded only one level deep).

An initiator group name must be unique among the initiator groups on the array and cannot exceed 64 characters. Initiator group names are case-insensitive.

Creating a Masking View:- A masking view is simply the association of a storage group, a port group, and an initiator group, and you are done! Devices are automatically mapped to the ports in the selected port group and masked to the initiators in the selected initiator group.
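
As a hedged end-to-end sketch using the Solutions Enabler symaccess command (the Symmetrix ID, group names, device range, director ports, and WWNs below are all hypothetical):

    # 1) Create a storage group containing the devices to be presented
    symaccess -sid 1234 create -name App1_SG -type storage devs 0A00:0A0F

    # 2) Create a port group with the front-end director ports (ACLX flag enabled on each port)
    symaccess -sid 1234 create -name App1_PG -type port -dirport 7E:0,8E:0

    # 3) Create an initiator group with the host HBA WWNs
    symaccess -sid 1234 create -name App1_IG -type initiator -wwn 10000000c9123456
    symaccess -sid 1234 -name App1_IG -type initiator add -wwn 10000000c9123457

    # 4) Tie the groups together in a masking view; mapping and masking happen automatically
    symaccess -sid 1234 create view -name App1_MV -sg App1_SG -pg App1_PG -ig App1_IG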

SRDF Pair Status


SRDF/S and SRDF/A configurations involve tasks such as suspending and resuming replication, failing over from the R1 side to the R2, restoring R1 or R2 volumes from their BCVs, and more. You perform these and other SRDF/S or SRDF/A operations using the symrdf command and the TimeFinder command symmir. The following are the SRDF pair states you may see during SRDF procedures.
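
For reference, here are a few hedged examples of the kinds of symrdf operations referred to above (the device group name ProdDG is hypothetical):

    # Check the current pair state of all devices in the group
    symrdf -g ProdDG query

    # Suspend and later resume the RDF links
    symrdf -g ProdDG suspend
    symrdf -g ProdDG resume

    # Fail over to the R2 side and fail back once the R1 side is available again
    symrdf -g ProdDG failover
    symrdf -g ProdDG failback

    # Verify that SRDF/A-capable pairs are in the Consistent state
    symrdf -g ProdDG verify -consistent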

SyncInProg :- A synchronization is currently in progress between the R1 and the R2. There are existing invalid tracks between the two pairs and the logical link between both sides of an RDF pair is up.

Synchronized :- The R1 and the R2 are currently in a synchronized state. The same content exists on the R2 as the R1. There are no invalid tracks between the two pairs.

Split :- The R1 and the R2 are currently Ready to their hosts, but the link is Not Ready or Write Disabled.

Failed Over :- The R1 is currently Not Ready or Write Disabled, and operations have been failed over to the R2.

R1 Updated :- The R1 is currently Not Ready or Write Disabled to the host, there are no local invalid tracks on the R1 side, and the link is Ready or Write Disabled.

R1 UpdInProg :- The R1 is currently Not Ready or Write Disabled to the host, there are invalid local (R1) tracks on the source side, and the link is Ready or Write Disabled.
Suspended :- The RDF links have been suspended and are Not Ready or Write Disabled. If the R1 is Ready while the links are suspended, any I/O accumulates as invalid tracks owed to the R2.

Partitioned :- The SYMAPI is currently unable to communicate through the corresponding RDF path to the remote Symmetrix. Partitioned may apply to devices within an RA group. For example, if SYMAPI is unable to communicate to a remote Symmetrix via an RA group, devices in that RA group are marked as being in the Partitioned state.

Mixed :- Mixed is a composite SYMAPI device group RDF pair state. Different SRDF pair states exist within a device group.

Invalid :- This is the default state when no other SRDF state applies. The combination of R1, R2, and RDF link states and statuses do not match any other pair state. This state may occur if there is a problem at the disk director level.

Consistent :- The R2 SRDF/A capable devices are in a consistent state. Consistent state signifies the normal state of operation for device pairs operating in asynchronous mode.

Symmetrix Optimizer improves array performance by continuously monitoring access patterns and migrating devices to achieve balance across the disks in the array. This process is carried out automatically, based on user-defined parameters, and is completely transparent to end users, hosts, and applications in the environment. Migrations are performed with constant data availability and consistent protection.

Optimizer performs self-tuning of Symmetrix data configurations from the Symmetrix service processor by:
· Analyzing statistics about Symmetrix logical device activity.
· Determining which logical devices should have their physical locations swapped to enhance Symmetrix performance.
· Swapping logical devices and their data using internal Dynamic Reallocation Volumes (DRVs) to hold customer data while reconfiguring the system (on a device-to-device basis).

Symmetrix Optimizer can be used via the EMC Symmetrix Management Console or SYMCLI, where the user defines the following:

1) The Symmetrix devices to be optimized.
2) Priority of those devices.
3) Window of time that profiles the business workload.
4) Window of time in which Optimizer is allowed to swap.
5) Additional business rules.
6) The pace of the Symmetrix Optimizer volume copy mechanism.

After being initialized with the user-defined parameters, Symmetrix Optimizer operates totally autonomously on the Symmetrix service processor to perform the following steps.

1) Symmetrix Optimizer builds a database of device activity statistics on the Symmetrix back end.

2) Using the collected data, configuration information, and user-defined parameters, the Optimizer algorithm identifies busy and idle devices and their locations on the physical drives. The algorithm tries to minimize average disk service time by balancing I/O activity across the physical disks: it locates busy devices close to each other on the same disk and places busy devices on the faster areas of the disks. This is done by taking into account the speed of the disk, the disk geometry, and the actuator speed.

3) Once a solution for load balancing has been developed, the next phase is to carry out the Symmetrix device swaps. This is done using established EMC TimeFinder technology, which maintains data protection and availability. Users can specify whether swaps should occur in a completely automated fashion or whether the user must approve each Symmetrix device swap before the action is taken.

4) Once the swap function is complete, Symmetrix Optimizer continues data analysis for the next swap.

How Symmetrix Optimizer works:-

1) Automatically collects logical device activity data, based upon the devices and time window you define.

2) Identifies “hot” and “cold” logical devices, and determines on which physical drives they reside.

3) Compares physical drive performance characteristics, such as spindle speed, head actuator speed, and drive geometry.

4) Determines which logical device swaps would reduce physical drive contention and minimize average disk service times.

5) Using the Optimizer Swap Wizard, swaps logical devices to balance activity across the back end of the Symmetrix array.

Optimizer is designed to run automatically in the background, analyzing performance in the performance time windows you specify and performing swaps in the swap time windows you specify.

Multipathing requirements for different storage arrays:-
All storage arrays: - Write cache must be disabled if not battery backed.
Topology: - No single failure should cause both HBA and SP failover, especially with active-passive storage arrays.

IBM TotalStorage DS4000 family (formerly FAStT) –

The default host type (or the host type) must be LNXCL. AVT (Auto Volume Transfer) is disabled in this host mode.

HDS 99xx and 95xx families – The HDS 9500V family (Thunder) requires two host modes:
Host mode 1 – Standard
Host mode 2 – Sun Cluster
The HDS 99xx family (Lightning) and HDS TagmaStore USP require the host mode set to Netware.

EMC Symmetrix :- Enable the SPC2 and SC3 settings.

EMC CLARiiON – All initiator records must have the following (see the illustration after this list):

- Fail-over Mode = 1
- Initiator Type = “CLARiiON Open”
- Array CommPath = “Enabled” or 1
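
These initiator settings are normally applied when registering the host in Navisphere/Unisphere; purely as a hedged illustration, classic Navisphere CLI also exposes array-wide commands along these lines (the SP address is hypothetical, and availability of these commands varies by FLARE release):

    # Assumed navicli/naviseccli syntax for the default failover mode and ArrayCommPath
    naviseccli -h 10.0.0.1 failovermode 1
    naviseccli -h 10.0.0.1 arraycommpath 1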

HP EVA :- For EVA3000/5000 firmware 4.001 and above, and EVA4000/6000/8000 firmware 5.031 and above, set the host type to VMware. Otherwise, set the host mode type to Custom. The custom mode values are:
EVA3000/5000 firmware 3.x: 000000002200282E
EVA4000/6000/8000: 000000202200083E

HP XP:- For XP 128/1024/10000/12000, the host mode should be set to 0C (Windows), that is, zeroC (Windows).

NetApp :- No specific requirements

ESX Server Configuration :- Set the following Advanced Settings for the ESX Server host:-

Set Disk.UseLunReset to 1
Set Disk.UseDeviceReset to 0
A multipathing policy of Most Recently Used must be set for all LUNs hosting clustered disks for active-passive arrays. A multipathing policy of Most Recently Used or Fixed may be set for LUNs on active-active arrays. All FC HBAs must be of the same model.
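
As a hedged sketch for classic ESX 3.x hosts (the esxcfg-advcfg paths are inferred from the option names above; newer releases expose the same options under Advanced Settings in the client):

    # Assumed service-console syntax for the reset settings recommended above
    esxcfg-advcfg -s 1 /Disk/UseLunReset
    esxcfg-advcfg -s 0 /Disk/UseDeviceReset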
