When connecting to an ESX host for the first time, rpowermt prompts the administrator for the username and password of the ESX host. Each new ESX host managed by rpowermt generates a prompt for a username and password. The administrator is prompted for a lockbox password only once; rpowermt then securely stores the following information in a lockbox (encrypted file):


- ESX host being accessed

- Username and password of the ESX host being accessed


Storing the hosts and passwords in a lockbox enables them to be maintained across system reboots. If a lockbox is copied from one rpowermt server to another, the user is prompted to enter the lockbox password again.


PowerPath/VE for vSphere rpowermt commands do not require root access to run the executable; however, the ESX root password is required as an entry in the lockbox. Access to the rpowermt executable is based on the native access controls of the server (Windows or RHEL) where rpowermt is installed.
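For example, a first query against a newly managed host might look like the following sketch (the address is hypothetical; on first contact rpowermt prompts for the ESX credentials and the lockbox password, and subsequent commands against the same host read the stored credentials and run without prompting):

rpowermt display dev=all host=192.168.10.21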

PowerPath/VE 5.4 and supported ESX/ESXi versions

EMC PowerPath/VE 5.4:
- ESX 4.0: Yes (vCLI install only)
- ESXi 4.0: Yes (vCLI install only)
- ESX 4.0 U1/U2: Yes (vCLI install only)
- ESXi 4.0 U1/U2: Yes (vCLI install only)
- ESX 4.1: No
- ESXi 4.1: No

EMC PowerPath/VE 5.4 SP1:
- ESX 4.0: Yes (vCLI install only)
- ESXi 4.0: Yes (vCLI install only)
- ESX 4.0 U1/U2: Yes (Update Manager and vCLI install)
- ESXi 4.0 U1/U2: Yes (Update Manager and vCLI install)
- ESX 4.1: Yes (vCLI install only)
- ESXi 4.1: No


Procedures:

Note:- Check the EMC Support Matrix (ESM) for PowerPath/VE prerequisites before installing or upgrading.


1) vSphere hosts must be part of a DRS cluster.


2) vSphere hosts must have shared storage. If there is no shared storage, create it.


3) Place the first vSphere host in Maintenance Mode. This will force VMs to fail over to other cluster members using vMotion.


4) Install PowerPath/VE on the first vSphere host using vCLI.


Using the VMware remote CLI (vCLI) on the server, install PowerPath/VE on the host that is in Maintenance Mode with the following command:



vihostupdate --server "server-IP-Address" --install --bundle=/<path>/EMCPower.VMWARE.5.4.bxxx.zip


Note:- Use vihostupdate.pl on Windows.



Once the command completes, verify that the package is installed using the following command:



vihostupdate --server "server-IP-Address" --query

5) If necessary, make changes to the first vSphere host's claim rules (a vCLI claim-rule check sketch follows this procedure).


6) Exit Maintenance Mode.


7) Reboot the first vSphere host. (Wait for the host to come back online before proceeding to the next steps.)


8) Place the second vSphere host in Maintenance Mode. This will force VMs to fail over to other cluster members using vMotion.


9) Install PowerPath/VE on the second vSphere host using vCLI.


10)   If necessary, make changes to the second vSphere host’s claim rules.


11)   Exit Maintenance Mode.


12)   Reboot the second vSphere host.


13)   Perform the same operation for the remaining hosts in the cluster.


14)   After PowerPath/VE installation has completed for every node in the cluster, rebalance the VMs across the cluster.
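Where steps 5 and 10 call for claim-rule changes, the existing rules can be reviewed from the vCLI before editing anything; a minimal sketch assuming vSphere 4.x esxcli syntax and the same placeholder address as above (the output lists each claim rule and the multipathing plugin that claims matching devices):

esxcli --server "server-IP-Address" corestorage claimrule list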

EMC Unified Storage is 20% more efficient than the competition. Even Erik Estrada from CHiPs knows that now. It's simple to be efficient with EMC.

EMC unified storage makes it simple for you to contain information growth and satisfy the needs of your data-hungry virtual machines while requiring 20% less raw capacity. This works to your advantage when comparing EMC to other unified storage arrays. If, for some unlikely reason, EMC isn’t 20% more efficient, we'll match the shortfall with additional storage—that's how confident we are.
Find out more :- http://bit.ly/EMCGuaranteeST

EMC Unisphere - Next-generation storage management that provides a single, simple interface for both the current and future CLARiiON and Celerra series. Unisphere provides the capability to integrate other element managers and offers built-in support tools, software downloads, live chat, and more. Note:- Unisphere features will be available in Q3 2010.

Check out the EMC Unisphere demo presented by Bob Abraham.


• Task-based navigation and controls offer an intuitive, context-based approach to configuring storage, creating replicas, monitoring the environment, managing host connections, and accessing the Unisphere Support ecosystem.

• Self-service Unisphere support ecosystem is accessible with 1-click from Unisphere, providing users with quick access to "real-time" support tools including live chat support, software downloads, product documentation, best practices, FAQs, online customer communities, ordering spares, and submitting service requests.

• Customizable dashboard views and reporting capabilities enable at-a-glance management by automatically presenting end-users with valuable information in context of how they manage storage. For example, customers can develop custom reports 18x faster with EMC Unisphere.

BCV copies can be used for backup, restore, decision support, and application testing. BCV devices contain no data after the initial Symmetrix configuration, so the full establish operation must be used the first time the standard devices are paired with their BCV devices.

1.  Associate the BCV Device for pairing:

To perform standard/BCV pairing, the standard and BCV mirror devices of your production images must be members of the same device group. (Note:- Creating a device group and pairing devices was already discussed in a previous post.)


To associate BCV001 with device 0ABC, enter:

symbcv -g DgName -sid SymmID associate dev 0ABC BCV001


Or to associate a range of devices to a device group, enter:


symbcv -g DgName -sid SymmID associateall dev -RANGE 0ABC:0DEF


Note:- -sid SymmID is optional if you have already defined the device group in the SYMCLI environment variable.


2. Unmount the BCV device:


Prior to using devices for BCV operations, the BCV device should be Windows formatted and assigned a drive letter.


If using basic disks on the Windows platform, you must unmount the BCV devices. If using dynamic disks, you must deport the entire TimeFinder device group. For basic disks, use the syminq command to determine the SymDevName of the potential BCV device. For dynamic disks, use the TimeFinder symntctl command to determine the volume and disk group name as follows:


symntctl list -volume [-dg DgName]


Note that the terms device group and dynamic disk group mean the same thing as applied to this command.


Unmount the selected BCV device as follows (with the TimeFinder command):


symntctl unmount -drive z


Where z equals the designated drive letter. If an error occurs, check for an "open handle" and clear this condition.


For Veritas dynamic disks only, you must deport the disk group and rescan using the following commands:


vxdg deport -g DgName


symntctl rescan


3. Fully Establish BCV and STD:


To obtain a copy of the data on a standard device, the BCV device of the pair must be established.


To initiate a full establish on a specific standard/BCV device pair, target the standard device:


symmir -g DgName -full establish DEV001


Fully establish all pairs in a group. To initiate a full establish on all BCV pairs in a device group, enter:


symmir -g DgName -full establish


Verify the completed (synchronized) establish operation. To verify when the BCV pairs reach the fully copied or Synchronized state, use the verify action as follows:


symmir -g DgName -i 20 verify


With this interval and count, the message is displayed every 20 seconds until the pair is established.


Rescan the drive connections. For Dynamic disks only on a Windows host, you should rescan for drive connections visible to the host:


symntctl rescan


After any standard/BCV pair has been fully established and subsequently split, you can save establish (resync) time by performing an establish operation that omits the -full option; this updates the BCV copy with only the changed tracks that occurred on the standard device during the elapsed BCV split time. To perform an incremental establish, omit the -full option, targeting the standard device of the pair:


symmir -g DgName establish DEV001


Optionally, you can also collectively target all devices in a device group, composite group, or defined devices in a device file:


symmir -g DgName establish [-full]


symmir -g CgName establish [-full]


symmir -file FileName establish [-full]


4. Prepare (freeze) Production database for a TimeFinder Split:


To prepare to split the synchronized BCV device from the production standard device, you must suspend I/O at the application layer or unmount the production standard prior to executing the split operation.


symioctl freeze -type DbType [object]


Ensure any residual cache on the production host is fully flushed to disk. To ensure all pending unwritten production file system entries are captured, enter the TimeFinder command:


symntctl flush -drive z


Wait 30 to 60 seconds for the flush operation to complete.   


5. Split the BCV devices:


To split all the BCV devices from the standard devices in the production device group, enter:


symmir -g DgName split


To split a specific standard/BCV pair, target the logical device name in the group and enter:


symmir -g DgName split DEV001
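Once the split has been issued, the application I/O frozen in step 4 should be resumed; a minimal sketch, assuming the same DbType and object used with the symioctl freeze command above:

symioctl thaw -type DbType [object]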


6. Verify the split operation completes:


To verify when the BCV device is completely split from the standard, use the verify action as follows:


symmir -g DgName -i 20 verify -split -bg


With this interval and count, the message is displayed every 20 seconds until the pair is split.


7. Rescan for dynamic disks:


For dynamic disks only, you should rescan for drive connections visible to the host:


symntctl rescan


8. Mounting BCV device:


After splitting the BCV device, you can mount the device with captured data on another host and reassign the drive letter.


For basic disks, use the TimeFinder command:


symntctl mount -drive z -dg DgName


For dynamic disks, use the TimeFinder command:


symntctl mount -drive z -vol VolName -dg DgName | -guid VolGuid


For Veritas dynamic disks only, you must import the disk group on the mounting host and rescan, as follows:


vxdg import -g DgName


symntctl rescan


For Dynamic disk only (without Veritas), you can use the Microsoft diskpart command to select the disk and import the device using the online and import actions.


Note:- The symntctl command is available in TimeFinder/IM (Integration Module).

VMware hosts require a few mandatory FA bit settings before SAN storage can be provisioned. Apart from the FA bits, a series of procedures is required: installing HBAs, HBA firmware, and drivers; zoning; mapping and masking devices; and configuring kernel files and devices.

Let's assume we have already identified the Symmetrix FA ports for the VMware host and completed zoning on the switch. It is better to have a separate FA pair for the VMware host. (You can connect the VMware host to two FA pairs if you have enough FA resources available and are going to deploy a critical application that requires more performance.)

You can identify the FA ports available on the Symmetrix:

symcfg list -connections


Verify the port flag settings:

symcfg list -fa -p -v

(FA number and port where your host is connected/zoned)


The following FA bits/flags need to be set/enabled:

i) Common Serial Number (C)

ii) VCM State (VCM) - (ACLX for V-MAX)

iii) SCSI 3 (SC3)

iv) SPC 2 (SPC2)

v) Unique World Wide Name (UWWN)

vi) Auto-negotiation (EAN)

vii) Point to Point (P)

Note:- FA bit/flag requirements may vary depending on the Symmetrix model, but in most cases you need to enable the above bits for a VMware host.

Create a command file for setting the FA port flags, called faflags.cmd, with the entries below. Each line is shown once for a single FA:Port; repeat each line for the second FA port of the pair (and any additional ports) before committing.

# For C-Bit

set port FA:Port Common_Serial_Number=enable;

# For VCM-Bit

set port FA:Port VCM_State=enable;

# For SC3-Bit

set port FA:Port scsi_3=enable;

# For SPC2-Bit

set port FA:Port SPC2_Protocol_Version=enable;

# For UWWN-Bit

set port FA:Port Unique_WWN=enable;

# For EAN-Bit

set port FA:Port Auto_Negotiate=enable;

# For PTOP-Bit

set port FA:Port Init_Point_to_Point=enable;

Once you have prepared the command file, preview it and then commit it:

symconfigure -sid SymmID -f faflags.cmd preview

symconfigure -sid SymmID -f faflags.cmd commit

Verify the port flag settings once again; the required FA flags should now be enabled:

symcfg list -fa -p -v

You are now ready to provision SAN storage for the VMware host.

CLARiiON FLARE release 29 (04.29.000.5.001) introduces support for several new features, as follows:

1) Virtualization-aware Navisphere Manager - Discovery of VMware clients was difficult in earlier releases, but FLARE 29 enables CLARiiON CX4 users and VMware administrators to reduce infrastructure reporting time from hours to minutes. Earlier releases allowed only a single IP address to be assigned to each iSCSI physical port. With FLARE 29, the ability to define multiple virtual iSCSI ports on each physical port has been added, along with the ability to tag each virtual port with a unique VLAN tag. VLAN tagging has also been added to the single Management Port interface. Note that IP address and VLAN tag assignments should be carefully coordinated with those supporting the network infrastructure where the storage system will operate.

2) Built-in policy-based spin down of idle SATA II drives for CLARiiON CX4 - Lowers power requirements in physical and virtual test and development environments. Features include simple management via a "set it and forget it" policy, complete spin down of inactive drives during periods of zero I/O activity, and drives that automatically spin back up after a "first I/O" request is received.

3) Virtual Provisioning Phase 2 - Support for MirrorView and SAN Copy replication on thin LUNs has been added.

4) Search feature – Provides users with the ability to search for a wide variety of objects across their storage systems. Objects can be either logical (e.g., LUN) or physical (e.g., disks).

5) Replication roles - Three additional roles have been added in Navisphere: "Local Replication Only", "Replication", and "Replication/Recovery".

6) Dedicated VMware software files - VMware software files (i.e. NaviSecCLI, Navisphere Initialization Wizard) are now separate from those of the Linux Operating System.

7) Software filename standardization - All CLARiiON software filenames are standardized beginning with FLARE release 29.

8) Changing SP IP addresses - SP IP addresses can now be changed without rebooting the SP. Only the Management Server will need to be rebooted from the Setup page, which results in no storage system downtime.

9) Linux 64-bit server software – Native 64-bit Linux server software files simplify installation by eliminating the need to gather and load 32-bit DLLs.

10) Solaris x64 Navisphere Host Agent – Release 29 marks the introduction of Solaris 64-bit Navisphere Host Agent software. This Host Agent is backward compatible with older FLARE releases.


Let's understand the EMC Open Replicator product:- Open Replicator enables remote point-in-time copies with full or incremental copy capabilities to be used for high-speed data mobility, remote vaulting, migration, and distribution between EMC Symmetrix DMX and other qualified storage arrays. Open Replicator leverages the high-end Symmetrix storage architecture and offers unmatched deployment flexibility and massive scalability.

EMC Open Replicator is tightly integrated with the EMC TimeFinder and SRDF families of local and remote replication solutions. Open Replicator functionality:

- Protect lower-tier applications at remote locations.

- Push or pull data between Symmetrix DMX arrays and other qualified storage arrays in SAN/WAN environments.

- Create remote point-in-time copies of local production volumes for many purposes from data vaulting to remote testing and development.

- Ensure immediate data availability to host applications via Symmetrix DMX consistency technologies.

When to use EMC Open Replicator for migration:


1) SYMAPI cannot validate third party or non-visible storage systems.
2) To protect against potential data loss due to a SAN failure or other connectivity issue during a hot pull operation, use the donor update option. When enabled, this feature causes all writes to the control device from the host to be immediately copied to the remote device as well. Because the data is fully copied to both the remote device and the control devices, if a failure occurs, the session can safely be terminated and created again to fully recover from any mid-copy failure.
3) Open Replicator uses FA resources. If you are using this utility in a production environment, verify with the SA that FA bandwidth assessments have been considered and that appropriate throttling parameters (pace or ceiling) have been set.
4) When using BCVs, the BCVs must be “visible” to the remote storage array. Thus, they must be mapped to an FA and the FA must be zoned to the destination storage. We highly recommend that BCVs not be mapped to the same FA as the control standard devices to avoid a negative impact on host I/O performance.
5) If a configuration uses a thin device as the destination in a pull or push copy operation, full volume allocation of the thin device will be made, because Open Replicator creates a full volume copy.
6) When performing an Open Replicator migration, always use the -v qualifier on the create command. This ensures that, should the session fail, useful information is returned about which volume caused the error, allowing you to more quickly recognize zoning or masking issues.
7) Issuing create commands prior to Open Replicator migration activities allows confirmation that there will be no zoning or masking issues discovered during the migration window. This technique will only be successful if no changes have been made to the Symmetrix environment between issuance of the create and copy commands.
8) It is better and easier to use an Open Replicator management host for preparing, executing, and monitoring migration sessions than using one of the systems with volumes involved in the activity.
9) For Veritas file systems, PowerPath devices, and Oracle databases, when you activate the copy session, the devices must be frozen just before the activate is performed and thawed as soon as the activate completes. Use the following options with the symrcopy activate command, when applicable:

-vxfs MountPoint (Veritas file system)

-ppath SrcDevs (PowerPath devices)

-rdb -dbtype DbType -db DbName (database)

10) The device specified in the command line must match the device in the device file or the activate will fail. (A brief symrcopy sketch follows this list.)
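As a rough illustration of points 6 and 9 above, a hot-pull session driven from a device pair file might look like the following sketch (the file name and mount point are hypothetical, and exact option placement can vary by Solutions Enabler version, so treat this as a sketch rather than exact syntax):

symrcopy -file devicepairs.txt -v create -copy -hot -pull -donor_update

symrcopy -file devicepairs.txt activate -vxfs /prod/data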

PowerPath Migration Enabler (PPME) is a host-based migration tool from EMC that allows you to migrate data between storage systems with little or no interruption to data access. The tool can be used in conjunction with other underlying technologies such as EMC Invista or Open Replicator. PPME uses the PowerPath filter drivers to provide non-disruptive or minimally disruptive migrations. Only specific host platforms are supported by PPME; check the EMC Support Matrix for supported host systems. PPME supports pseudo-to-pseudo, native-to-native, and native-to-pseudo device migrations.

Consider the following when designing and configuring PPME:

- Remote devices do not have to be the same RAID type or meta-configuration.
- Target devices must be the same size as or larger than the source control device.
- Target directors act as initiators in the SAN.
- Contrary to the recommendations for Open Replicator, the source device remains online during the "hot pull."
- The two storage systems involved in the migration must be connected directly or through a switch, and they must be able to communicate.
- Every port on the target array that allows access to the target device must also have access to the source device through at least one port on the source array. This can be counter to some established zoning policies.
- Since PPME with Open Replicator uses FA resources, determine whether this utility will be used in a production environment. In addition, consider FA bandwidth assessments so that appropriate throttling parameters (that is, pace or ceiling) can be set.
- The powermig throttle parameter sets the pace of an individual migration by using the pace parameter of Open Replicator:
  - Faster (lower throttle) makes the migration faster, but may impact application I/O performance.
  - Slower (higher throttle) makes the migration slower.
  - The default is five (the midpoint).
- When setting a ceiling to limit Open Replicator throughput for a director/port:
  - The ceiling value is set as a percentage of a director/port's total capacity.
  - The ceiling can be set for a given director, port, director and port, or all directors and ports in the Symmetrix array.
  - To set ceiling values, you must use symrcopy set ceiling directly (powermig does not provide a way to do this).
- Once the hot pull has completed, remove or re-use the source device.
- Do not forget to "clean up" the zoning once you have completed migration activities.

Hope this will be useful in migration planning or in selecting migration tools. I will try to explain these in more detail in coming posts, such as PPME with Open Replicator, Solutions Enabler, etc.

In the current storage market and technology, storage tiers are defined by availability, functionality, performance, and cost. In fact, data can move up and down tiers as time and business requirements dictate.

Tier "0" is not new in the storage market, but it has been difficult to implement because it requires the best performance and the lowest latency. Enterprise Flash drives (solid state disks) are capable of meeting this requirement, making it possible to get more performance for a company's most critical applications. The performance gain comes from using Flash drives supported in V-Max and DMX-4 systems. One Flash drive can deliver IOPS equivalent to 30 15K RPM hard disk drives with approximately 1 ms application response time. Flash memory achieves the best performance and the lowest latency ever available in an enterprise-class storage array.

Tier "0" applications can be closely coupled with other storage tiers within the Symmetrix series for consistency and efficiency, reducing the cost to the company of manual data layout or data migration from old disks to new high-speed disks.

Tier "0" storage can be used to accelerate online transaction processing, improving performance for large indices and frequently accessed database tables, e.g., Oracle and DB2 databases and SAP R/3. Tier 0 can also improve batch processing performance and shorten batch processing windows.

Tier "0" storage performance will help applications that need the lowest possible latency and response time. The following applications can benefit from using Tier 0 storage:

- Algorithmic trading
- Data modeling
- Trade optimization
- Realtime data/feed processing
- Contextual web advertising
- Other realtime transaction systems
- Currency exchange and arbitrage

Tier "0" storage is most beneficial for applications with a high random read miss rate. If the random read miss percentage is low, the application will not see any performance difference, since writes and sequential reads/writes already leverage Symmetrix cache to achieve the lowest possible response time.

For example, if the read hit percentage is high (>90%) compared to read misses, as with applications like DSS or streaming media, the improvements provided by Tier 0 storage will likely not be enough to be cost-effective.

Consider creating a point-in-time image of multiple devices. To create a point-in-time image of an entire set of logical devices at the same time, you traditionally need to shut down the application so that no I/O occurs while the image is created. This is a big problem in today's environment, where every company is looking for zero-downtime solutions.
EMC's solution to this problem is called "Enginuity Consistency Assist." When you create a set of sessions and invoke Enginuity Consistency Assist, the Symmetrix aligns the I/O of those devices and halts all I/O from the host systems very briefly (much faster than the applications can detect) while it creates the session. It then resumes normal operation without any application impact.
TimeFinder Consistent Split (using TimeFinder/Consistency Groups) allows the splitting off of a consistent, restartable image of an Oracle database instance within seconds, with no interruption to the online Oracle database instance.
- Allows users to split off a dependent-write-consistent, restartable image of an application without interrupting online services
- Uses TimeFinder/Consistency Groups to defer write I/O at the Symmetrix before a split
- Consistent splits can be performed by any host running Solutions Enabler connected to the Symmetrix
- Tested and available on platforms including HP-UX, Solaris, AIX, Linux, and Windows
- No database shutdown or requirement to put the database into backup mode (Oracle)

Using TF/CG, consistent splits help avoid the inconsistencies and restart problems that can occur when using Oracle hot-backup mode (without quiescing the database).
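A minimal sketch of such a consistent split, assuming the standard/BCV pairs are already established in a device group (the group name is hypothetical):

symmir -g OraDg split -consistent -noprompt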
The major benefits of TF/CG are:
• No disruption to the online Oracle database to obtain a Point-in-Time image
• Provides a consistent, re-startable image of the Oracle database for testing new versions or database patch updates before deploying for use in production environments
• Can be used to obtain a business point of consistency for business restart requirements for which Oracle has been identified as one of multiple databases for such an environment.

The same benefits apply using TF/CG in a clustered environment as in a non-clustered environment:
- No disruption to the online Oracle database to obtain a Point-in-Time image in an Oracle single-instance environment or when using Oracle Real Application Clusters
- Provides a consistent, re-startable image of the Oracle database for testing new versions or database patch updates before deploying for use in clustered production environments
- Can be used to obtain a business point of consistency for business restart requirements for which Oracle has been identified as one of multiple databases for such an environment.

Auto-provisioning requires Enginuity 5874 or later. It simplifies Symmetrix provisioning by allowing you to create groups of devices (like a storage group in CLARiiON), front-end port groups, and host initiator groups, and then associate these groups with each other in a masking view.

The following are the basic steps for provisioning Symmetrix storage using Auto-Provisioning (a SYMCLI sketch using symaccess follows the list):-

1) Create a Storage Group
2) Create a Port Group
3) Create an Initiator Group
4) Associate the groups in a Masking View.
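A minimal SYMCLI sketch of these four steps using the symaccess command (the Symmetrix ID, group names, device range, director ports, and WWN are all hypothetical):

Create the storage group:

symaccess -sid 1234 create -name Win_SG -type storage devs 0100:0103

Create the port group:

symaccess -sid 1234 create -name Win_PG -type port -dirport 5A:0,12A:0

Create the initiator group:

symaccess -sid 1234 create -name Win_IG -type initiator -wwn 10000000c912abcd

Create the masking view:

symaccess -sid 1234 create view -name Win_MV -sg Win_SG -pg Win_PG -ig Win_IG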


Creating Storage Group:- The storage group is a component of both Auto-Provisioning Groups and FAST (FAST will be discussed in a later post); both require Enginuity 5874. The maximum number of storage groups allowed per array is 8,192. A storage group can contain up to 4,096 devices, and a Symmetrix device can belong to more than one storage group.

Note:- By default, dynamic LUN addresses are assigned to each device. You can manually assign host LUN addresses for the devices you are adding to the group by clicking Set LUN Address in the Storage Group dialog box.

Creating Port Group:- A port can belong to more than one port group, and the port must have the ACLX bit enabled. For example, if you want FA 5A and FA 12A for a Windows host, you can create a port group named WIN_PortGrp or Win_FA5A_FA12A_PrtGrp, etc.

Creating Initiator Group:- The maximum number of initiator groups allowed per Symmetrix array is 8,000. An initiator group can contain up to 32 initiators of any type and can also contain other initiator groups (cascaded to only one level).

Initiator Group name must be unique from other initiator groups on the array and cannot exceed 64 characters. Initiator group names are case-insensitive.

Creating Masking View:- It is simply an association of the Storage Group, Port Group, and Initiator Group, and you are done! Devices are automatically mapped to the selected port group and masked to the selected initiator group.

SRDF Pair Status


SRDF/S and SRDF/A configurations involve tasks such as suspending and resuming replication, failing over from the R1 side to the R2, restoring R1 or R2 volumes from their BCVs, and more. You perform these and other SRDF/S or SRDF/A operations using both symrdf and the TimeFinder command symmir. The details below describe the SRDF pair states encountered during SRDF procedures.
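To check which of the states described below currently applies to a device group, a quick sketch (the device group name and interval are hypothetical):

symrdf -g DgName query

symrdf -g DgName -i 30 verify -synchronized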

SyncInProg :- A synchronization is currently in progress between the R1 and the R2. There are existing invalid tracks between the two pairs and the logical link between both sides of an RDF pair is up.

Synchronized :- The R1 and the R2 are currently in a synchronized state. The same content exists on the R2 as the R1. There are no invalid tracks between the two pairs.

Split :- The R1 and the R2 are currently Ready to their hosts, but the link is Not Ready or Write Disabled.

Failed Over :- The R1 is currently Not Ready or Write Disabled, and operations have been failed over to the R2.

R1 Updated :- The R1 is currently Not Ready or Write Disabled to the host, there are no local invalid tracks on the R1 side, and the link is Ready or Write Disabled.

R1 UpdInProg :- The R1 is currently Not Ready or Write Disabled to the host, there are invalid local (R1) tracks on the source side, and the link is Ready or Write Disabled.

Suspended :- The RDF links have been suspended and are Not Ready or Write Disabled. If the R1 is Ready while the links are suspended, any I/O accumulates as invalid tracks owed to the R2.

Partitioned :- The SYMAPI is currently unable to communicate through the corresponding RDF path to the remote Symmetrix. Partitioned may apply to devices within an RA group. For example, if SYMAPI is unable to communicate to a remote Symmetrix via an RA group, devices in that RA group are marked as being in the Partitioned state.

Mixed :- Mixed is a composite SYMAPI device group RDF pair state. Different SRDF pair states exist within a device group.

Invalid :- This is the default state when no other SRDF state applies. The combination of R1, R2, and RDF link states and statuses do not match any other pair state. This state may occur if there is a problem at the disk director level.

Consistent :- The R2 SRDF/A capable devices are in a consistent state. Consistent state signifies the normal state of operation for device pairs operating in asynchronous mode.

Symmetrix Optimizer improves array performance by continuously monitoring access patterns and migrating devices to achieve balance across the disks in the array. This process is carried out automatically based on user-defined parameters and is completely transparent to end users, hosts, and applications in the environment. Migration is performed with constant data availability and consistent protection.

Optimizer performs self-tuning of Symmetrix data configurations from the Symmetrix service processor by:
· Analyzing statistics about Symmetrix logical device activity.
· Determining which logical devices should have their physical locations swapped to enhance Symmetrix performance.
· Swapping logical devices and their data using internal Dynamic Reallocation Volumes (DRVs) to hold customer data while reconfiguring the system (on a device-to-device basis).

Symmetrix Optimizer can be utilized via the EMC Symmetrix Management Console or SYMCLI, where the user defines the following:

1) Symmetrix device to be optimized
2) Priority of those devices.
3) Window of time that profiles the business workload.
4) Window of time in which Optimizer is allowed to swap.
5) Additional business rules.
6) The pace of the Symmetrix Optimizer volume copy mechanism.

After being initialized with the user-defined parameters, Symmetrix Optimizer operates totally autonomously on the Symmetrix service processor to perform the following steps.

1) Symmetrix Optimizer builds a database of device activity statistics on the Symmetrix back end.

2) Using the collected data, configuration information, and user-defined parameters, the Optimizer algorithm identifies busy and idle devices and their locations on the physical drives. The algorithm tries to minimize average disk service time by balancing I/O activity across physical disks: locating busy devices close to each other on the same disk, and locating busy devices on faster areas of the disks. This is done by taking into account the speed of the disk, the disk geometry, and the actuator speed.

3) Once a solution for load balancing has been developed, the next phase is to carry out the Symmetrix device swaps. This is done using established EMC TimeFinder technology, which maintains data protection and availability. Users can specify whether swaps should occur in a completely automated fashion or whether the user is required to approve Symmetrix device swaps before the action is taken.

4) Once the swap function is complete, Symmetrix Optimizer continues data analysis for the next swap.

How Symmetrix Optimizer works:-

1) Automatically collects logical device activity data, based upon the devices and time window you define.

2) Identifies “hot” and “cold” logical devices, and determines on which physical drives they reside.

3) Compares physical drive performance characteristics, such as spindle speed, head actuator speed, and drive geometry.

4) Determines which logical device swaps would reduce physical drive contention and minimize average disk service times.

5) Using the Optimizer Swap Wizard, swaps logical devices to balance activity across the back end of the Symmetrix array.

Optimizer is designed to run automatically in the background, analyzing performance in the performance time windows you specify and performing swaps in the swap time windows you specify.

Multipath requirements for different storage arrays:-
All storage arrays: - Write cache must be disabled if not battery backed.
Topology: - No single failure should cause both HBA and SP failover, especially with active-passive storage arrays.

IBM TotalStorage DS4000 Family (formerly FAStT) –

Default host type must be LNXCL or Host Type must be LNXCL;
AVT (Auto Volume Transfer) is disabled in this host mode.

HDS 99xx and 95xx family – The HDS 9500V family (Thunder) requires two host modes:
Host mode 1 – standard
Host mode 2 – Sun Cluster
The HDS 99xx family (Lightning) and HDS TagmaStore USP require the host mode set to Netware.

EMC Symmetrix :- Enable the SPC2 and SC3 settings.

EMC CLARiiON – All initiator records must have

- Fail-over Mode = 1
- Initiator Type = “CLARiiON Open”
- Array CommPath = “Enabled” or 1

HP EVA :- For EVA3000/5000 firmware 4.001 and above and EVA 4000/6000/8000 firmware 5.031 and above, set the host type to VMWare. Otherwise, set the host mode type to custom. The value is :
EVA3000/5000 firmware 3.x: 000000002200282E
EVA4000/6000/8000: 000000202200083E

HP XP:- For XP 128/1024/10000/12000, the host mode should be set to 0C (Windows), that is, zeroC (Windows).

NetApp :- No specific requirements

ESX Server Configuration :- Set the following Advanced Settings for the ESX Server host:-

Set Disk.UseLunReset to 1
Set Disk.UseDeviceReset to 0
A multipathing policy of Most Recently Used must be set for all LUNs hosting clustered disks for active-passive arrays. A multipathing policy of Most Recently Used or Fixed may be set for LUNs on active-active arrays. All FC HBAs must be of the same model.
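On ESX classic hosts (with a service console) the two advanced settings above can also be applied from the command line; a minimal sketch using the standard esxcfg-advcfg utility with the same values:

esxcfg-advcfg -s 1 /Disk/UseLunReset

esxcfg-advcfg -s 0 /Disk/UseDeviceReset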

Return code handling for Windows and UNIX
The following lists the possible status or error codes that can be returned by the various SYMCLI commands on a Windows or UNIX platform; they are useful for troubleshooting. (A small scripting sketch follows the list.)

Code Code symbol Description
___________________________________________________
0 CLI_C_SUCCESS -- CLI call completed successfully.
1 CLI_C_FAIL -- CLI call failed.
2 CLI_C_DB_FILE_IS_LOCKED -- Another process has an exclusive lock on the host database file.
3 CLI_C_SYM_IS_LOCKED -- Another process has an exclusive lock on the Symmetrix.
4 CLI_C_NOT_ALL_SYNCHRONIZED -- NOT all of the mirrored pairs are in the 'Synchronized' state.
5 CLI_C_NONE_SYNCHRONIZED -- NONE of the mirrored pairs are in the 'Synchronized' state.
6 CLI_C_NOT_ALL_UPDATED -- NOT all of the mirrored pairs are in the 'Updated' state.
7 CLI_C_NONE_UPDATED -- NONE of the mirrored pairs are in the 'Updated' state.
8 CLI_C_NOT_ALL_PINGED -- NOT all of the remote Symmetrix units can be pinged.
9 CLI_C_NONE_PINGED -- NONE of the remote Symmetrix units can be pinged.
10 CLI_C_NOT_ALL_SYNCHED -- NOT all of the mirrored pairs are in the 'Synchronized' state.
11 CLI_C_NONE_SYNCHED -- NONE of the mirrored pairs are in the 'Synchronized' state.
12 CLI_C_NOT_ALL_RESTORED -- NOT all of the pairs are in the 'Restored' state.
13 CLI_C_NONE_RESTORED -- NONE of the pairs are in the 'Restored' state.
14 CLI_C_NOT_ALL_VALID -- NOT all of the mirrored pairs are in a valid state.
15 CLI_C_NONE_VALID -- NONE of the mirrored pairs are in a valid state.
16 CLI_C_SYM_NOT_ALL_LOCKED -- NOT all of the specified Symmetrix units have an exclusive Symmetrix lock.
17 CLI_C_SYM_NONE_LOCKED --NONE of the specified Symmetrix units have an exclusive Symmetrix lock.
18 CLI_C_ALREADY_IN_STATE --The Device(s) is (are) already in the desired state or mode.
19 CLI_C_GK_IS_LOCKED -- All GateKeeper devices to the Symmetrix unit are currently locked.
20 CLI_C_WP_TRACKS_IN_CACHE -- Operation cannot proceed because the target device has Write Pending I/O in the cache.
21 CLI_C_NEED_MERGE_TO_RESUME --Operation cannot proceed without first performing a merge of the RDF Track Tables.
22 CLI_C_NEED_FORCE_TO_PROCEED --Operation cannot proceed in the current state except if you specify a force flag.
23 CLI_C_NEED_SYMFORCE_TO_PROCEED --Operation cannot proceed in the current state except if you specify a symforce flag.
24 CLI_C_NOT_IN_SYNC -- The Symmetrix configuration and the database file are NOT in sync.
25 CLI_C_NOT_ALL_SPLIT -- NOT all of the mirrored pairs are in the 'Split' state.
26 CLI_C_NONE_SPLIT -- NONE of the mirrored pairs are in the 'Split' state.
27 CLI_C_NOT_ALL_SYNCINPROG -- NOT all of the mirrored pairs are in the 'SyncInProg' state.
28 CLI_C_NONE_SYNCINPROG -- NONE of the mirrored pairs are in the 'SyncInProg' state.
29 CLI_C_NOT_ALL_RESTINPROG -- NOT all of the pairs are in the 'RestInProg' state.
30 CLI_C_NONE_RESTINPROG -- NONE of the pairs are in the 'RestInProg' state.
31 CLI_C_NOT_ALL_SUSPENDED -- NOT all of the mirrored pairs are in the 'Suspended' state.
32 CLI_C_NONE_SUSPENDED -- NONE of the mirrored pairs are in the 'Suspended' state.
33 CLI_C_NOT_ALL_FAILED_OVER -- NOT all of the mirrored pairs are in the 'Failed Over' state.
34 CLI_C_NONE_FAILED_OVER -- NONE of the mirrored pairs are in the 'Failed Over' state.
35 CLI_C_NOT_ALL_UPDATEINPROG -- NOT all of the mirrored pairs are in the 'R1 UpdInProg' state.
36 CLI_C_NONE_UPDATEINPROG -- NONE of the mirrored pairs are in the 'R1 UpdInProg' state.
37 CLI_C_NOT_ALL_PARTITIONED -- NOT all of the mirrored pairs are in the 'Partitioned' state.
38 CLI_C_NONE_PARTITIONED -- NONE of the mirrored pairs are in the 'Partitioned' state.
39 CLI_C_NOT_ALL_ENABLED -- NOT all of the mirrored pairs are in the 'Enabled' consistency state.
40 CLI_C_NONE_ENABLED -- NONE of the mirrored pairs are in the 'Enabled' consistency state.
41 CLI_C_NOT_ALL_SYNCHRONIZED_AND_ENABLED -- NOT all of the mirrored pairs are in the 'Synchronized' rdf state and the 'Enabled' consistency state.
42 CLI_C_NONE_SYNCHRONIZED_AND_ENABLED -- NONE of the mirrored pairs are in the 'Synchronized' rdf state and in the 'Enabled' consistency state.
43 CLI_C_NOT_ALL_SUSP_AND_ENABLED -- NOT all of the mirrored pairs are in the 'Suspended' rdf state and 'Enabled' consistency state.
44 CLI_C_NONE_SUSP_AND_ENABLED -- NONE of the mirrored pairs are in the 'Suspended' rdf state and the 'Enabled' consistency state.
45 CLI_C_NOT_ALL_SUSP_AND_OFFLINE -- NOT all of the mirrored pairs are in the 'Suspended' rdf state and 'Offline' link suspend state.
46 CLI_C_NONE_SUSP_AND_OFFLINE -- NONE of the mirrored pairs are in the 'Suspended' rdf state and the 'Offline' link suspend state.
47 CLI_C_WONT_REVERSE_SPLIT -- Performing this operation at this time will not allow you to perform the next BCV split as a reverse split.
48 CLI_C_CONFIG_LOCKED -- Access to the configuration server is locked.
49 CLI_C_DEVS_ARE_LOCKED -- One or more devices are locked.
50 CLI_C_MUST_SPLIT_PROTECT -- If a device was restored with the protect option, it must be split with the protect option.
51 CLI_C_PAIRED_WITH_A_DRV -- The function can not be performed since the STD device is already paired with a DRV device.
52 CLI_C_PAIRED_WITH_A_SPARE -- The function cannot be performed since the device is currently paired with a spare.
53 CLI_C_NOT_ALL_COPYINPROG -- NOT all of the pairs are in the 'CopyInProgress' state.
54 CLI_C_NONE_COPYINPROG --NONE of the pairs are in the 'CopyInProgress' state.
55 CLI_C_NOT_ALL_COPIED -- NOT all of the pairs are in the 'Copied' state.
56 CLI_C_NONE_COPIED -- NONE of the pairs are in the 'Copied' state.
57 CLI_C_NOT_ALL_COPYONACCESS -- NOT all of the pairs are in the 'CopyonAccess' state.
58 CLI_C_NONE_COPYONACCESS -- NONE of the pairs are in the 'CopyonAccess' state.
59 CLI_C_CANT_RESTORE_PROTECT --The protected restore operation can not be completed because there are write pendings or the BCV mirrors are not synchronized.
60 CLI_C_NOT_ALL_CREATED -- NOT all of the pairs are in the 'Created' state.
61 CLI_C_NONE_CREATED -- NONE of the pairs are in the 'Created' state.
62 CLI_C_NOT_ALL_READY -- NOT all of the BCVs local mirrors are in the 'Ready' state.
63 CLI_C_NONE_READY -- NONE of the BCVs local mirrors are in the 'Ready' state.
64 CLI_C_STD_BKGRND_SPLIT_IN_PROG -- The operation cannot proceed because the STD Device is splitting in the Background.
65 CLI_C_SPLIT_IN_PROG -- The operation cannot proceed because the pair is splitting.
66 CLI_C_NOT_ALL_COPYONWRITE -- NOT all of the pairs are in the 'CopyOnWrite' state.
67 CLI_C_NONE_COPYONWRITE -- NONE of the pairs are in the 'CopyOnWrite' state.
68 CLI_C_NOT_ALL_RECREATED -- Not all devices are in the 'Recreated' state.
69 CLI_C_NONE_RECREATED -- No devices are in the 'Recreated' state.
70 CLI_C_NOT_ALL_CONSISTENT -- NOT all of the mirrored pairs are in the 'Consistent' state.
71 CLI_C_NONE_CONSISTENT-- NONE of the mirrored pairs are in the 'Consistent' state.
72 CLI_C_MAX_SESSIONS_EXCEEDED-- The maximum number of sessions has been exceeded for the specified device.
73 CLI_C_NOT_ALL_PRECOPY -- Not all source devices are in the 'Precopy' state.
74 CLI_C_NONE_PRECOPY -- No source devices are in the 'Precopy' state.
75 CLI_C_NOT_ALL_PRECOPY_CYCLED -- Not all source devices have completed one precopy cycle.
76 CLI_C_NONE_PRECOPY_CYCLED -- No source devices have completed one precopy cycle.
77 CLI_C_CONSISTENCY_TIMEOUT -- The operation failed because of a Consistency window timeout.
78 CLI_C_NOT_ALL_FAILED -- NOT all of the pairs are in the 'Failed' state.
79 CLI_C_NONE_FAILED -- NONE of the pairs are in the 'Failed' state.
80 CLI_C_CG_NOT_CONSISTENT -- CG is NOT RDF-consistent.
81 CLI_C_NOT_ALL_CREATEINPROG -- NOT all of the pairs are in the 'CreateInProg' state.
82 CLI_C_NONE_CREATEINPROG -- None of the pairs are in the 'CreateInProg' state.
83 CLI_C_NOT_ALL_RECREATEINPROG -- NOT all of the pairs are in the 'RecreateInProg' state.
84 CLI_C_NONE_RECREATEINPROG -- None of the pairs are in the 'RecreateInProg' state.
85 CLI_C_NOT_ALL_TERMINPROG -- NOT all of the pairs are in the 'TerminateInProg' state.
86 CLI_C_NONE_TERMINPROG -- None of the pairs are in the 'TerminateInProg' state.
87 CLI_C_NOT_ALL_VERIFYINPROG -- NOT all of the pairs are in the 'VerifyInProg' state.
88 CLI_C_NONE_VERIFYINPROG -- None of the pairs are in the 'VerifyInProg' state.
89 CLI_C_NOT_ALL_VERIFIED -- NOT all of the pairs are in the requested states.
90 CLI_C_NONE_VERIFIED -- NONE of the pairs are in the requested states Note: This message is returned when multiple states are verified at once.
91 CLI_C_RDFG_TRANSMIT_IDLE -- RDF group is operating in SRDF/A Transmit Idle.
92 CLI_C_NOT_ALL_MIGRATED -- Not all devices are in the ' Migrated' state.
93 CLI_C_NONE_MIGRATED -- None of devices are in the 'Migrated' state.
94 CLI_C_NOT_ALL_MIGRATEINPROG -- Not all devices are in the 'MigrateInProg' state.
95 CLI_C_NONE_MIGRATEINPROG -- None of devices are in the 'MigrateInProg' state.
96 CLI_C_NOT_ALL_INVALID-- Not all devices are in the 'Invalid' state.
97 CLI_C_NONE_INVALID-- None of devices are in the 'Invalid' state.
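As a rough illustration of how these return codes are used in scripts, the UNIX shell sketch below checks once whether a device group's BCV pairs have reached the Synchronized state and branches on the exit status (the device group name is hypothetical; 0 is CLI_C_SUCCESS):

symmir -g DgName verify
rc=$?
if [ $rc -eq 0 ]; then
    echo "All BCV pairs are Synchronized - safe to split"
else
    echo "symmir verify returned $rc - pairs are not all Synchronized yet"
fi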

EMC has introduced the PowerPath Configuration Checker tool for customers, and I thought I would share it here. It will be very useful for those who use PowerPath as host failover software. It checks the existing configuration against the EMC Support Matrix and gives you a detailed report on whether the existing configuration follows EMC support guidelines. The tool is currently available for the Windows operating system.

It performs the following checks:

· OS version verification
· Machine Architecture as per ESM(EMC Support Matrix)
· Powerpath Version
· Powerpath eFix
· Powerpath License
· License policy
· I/O timeout
· EOL and EOSL ( End of life and End of Service life)
· HBA Model
· HBA Driver
· HBA Firmware
· Symmetrix Microcode
· Symmetrix Model
· CLARiiON Fail-Over
· CLARiiON Flare Code
· CLARiiON Model
· Veritas DMP Version
· Powermt custom


PowerPath Configuration Checker (PPCC) is a software program that verifies that a host is configured with the hardware and software required for PowerPath multipathing features (failover and load-balancing functions, licensing, and policies).

PPCC can facilitate:
1) Successful PowerPath deployments prior to and after a PowerPath installation.
2) Customer self-service for:
• Planning installations on hosts where PowerPath is not installed.
• Upgrading an existing installation.
• Troubleshooting, for example, after configuration changes are made on a host that includes PowerPath, such as the installation of new software.

PPCC supports the following user tasks:

Planning — This task applies to a host on which PowerPath has never been installed or is not currently installed. PPCC can identify the software that needs to be installed to support a specific version of PowerPath. For example, PPCC can identify the HBA and driver version that can be installed to support a specific version of PowerPath.

Upgrade — This task applies to a host on which some version of PowerPath is installed. An upgrade (or downgrade) to a different version is required. PPCC can identify components of a configuration that need to change when a different version of PowerPath is to be installed. For example, PPCC can identify the
need to change the Storage OS version.

Diagnostic — This task applies to a host on which some version of PowerPath is installed or on which configuration changes have been made to PowerPath, to the host OS, and/or to other software on the host. This is the PPCC default mode.

For all of the listed tasks, PPCC can identify what changes to make to the PowerPath configuration to ensure continued support for failover and load balancing. Similarly, if PowerPath does not appear to be operating correctly, running EMC Reports and PPCC can assist with configuration problem analysis.

Any disk drive from any manufacturer can exhibit sector read errors due to media defects. This is a known and accepted reality in the disk drive industry, particularly with the high recording densities employed by recent products. These media defects only affect the drive’s ability to read data from a specific sector; they do not indicate general unreliability of the disk drive. The disk drives that EMC purchases from its vendors are within specifications for soft media errors according to the vendors as well as EMC’s own Supply Base Management organization.

Prior to shipment from manufacturing, disk drives have a surface scan operation performed that detects and reallocates any sectors that are defective. This operation is run to reduce the possibility that a disk drive will experience soft media errors in operation. Improper handling after leaving EMC manufacturing can lead to the creation of additional media defects, as can improper drive handling during installation or replacement.

When a disk drive encounters trouble reading data from a sector, the drive automatically attempts recovery of the data through its various internal methods. Whether or not the drive is eventually successful at reading the sector, the drive reports the event to FLARE. FLARE in turn logs this event as a "Soft Media Error" (event code 820) and re-allocates the sector to a spare physical location on the drive (this does not affect the logical address of the sector). In the event that the drive was eventually successful at reading the sector (event code 820 with a sub-code of 22), FLARE directly writes that data into the new physical location. If the correct sector data was not available, the event is logged as event code 820 with a sub-code of 05. EMC provides tools to verify disks and report details about these soft media errors, such as the sniffer, the FBI tool, and SMART technology.

We discussed Symmetrix Virtual Provisioning in a previous post. Now we will discuss Virtual Provisioning configuration. Make sure you understand your storage environment before you run the commands below.

Configuring and viewing data devices and pools:

Data devices are devices with the datadev attribute, and only data devices can be part of a thin pool. Devices with different protection schemes can be supported for use in thin pools, depending on the specific Enginuity level. All devices with the datadev attribute are used exclusively for populating thin pools.

Create a command file (thin.txt) with the following syntax:

create dev count=10, config=2-Way-Mir, attribute=datadev, emulation=FBA, size=4602;

# symconfigure -sid 44 -file thin.txt commit -v -nop

A Configuration Change operation is in progress. Please wait...
Establishing a configuration change session...............Established.
Processing symmetrix 000190101244
{
create dev count=10, size=4602, emulation=FBA,
config=2-Way Mir, mvs_ssid=0000, attribute=datadev;
}
Performing Access checks..................................Allowed.
Checking Device Reservations..............................Allowed.
Submitting configuration changes..........................Submitted
…..
…..
…..
Step 125 of 173 steps.....................................Executing.
Step 130 of 173 steps.....................................Executing.
Local: COMMIT............................................Done.
Terminating the configuration change session..............Done.

The configuration change session has successfully completed.

# symdev list -sid 44 -datadev

Symmetrix ID: 000190101244
Device Name Directors Device
--------------------------- ------------- -------------------------------------
Sym Physical SA :P DA :IT Config Attribute Sts Cap(MB)
--------------------------- ------------- -------------------------------------
10C4 Not Visible ???:? 01A:C4 2-Way Mir N/A (DT) RW 4314
10C5 Not Visible ???:? 16C:D4 2-Way Mir N/A (DT) RW 4314
10C6 Not Visible ???:? 15B:D4 2-Way Mir N/A (DT) RW 4314
10C7 Not Visible ???:? 02D:C4 2-Way Mir N/A (DT) RW 4314
10C8 Not Visible ???:? 16A:D4 2-Way Mir N/A (DT) RW 4314
10C9 Not Visible ???:? 01C:C4 2-Way Mir N/A (DT) RW 4314
10CA Not Visible ???:? 16B:C4 2-Way Mir N/A (DT) RW 4314


A thin pool can be created using the symconfigure command, without adding data devices:

# symconfigure -sid 44 -cmd "create pool Storage type=thin;" commit -nop

Once the pool is created, data devices can be added to the pool and enabled:
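A minimal sketch of that step, assuming the data devices created earlier (the device range is hypothetical, and exact syntax may vary by Solutions Enabler/Enginuity level):

# symconfigure -sid 44 -cmd "add dev 10C4:10CA to pool Storage type=thin, member_state=ENABLE;" commit -nop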

EMC recently announced the Symmetrix V-Max, which is based on a virtual matrix. The Symmetrix V-Max runs the latest Enginuity release, 5874. The 5874 platform supports Symmetrix V-Max emulation level 121 and service processor level 102. The modular design of the V-Max series and Enginuity 5874 ensures data flow and integrity between hardware components. Symmetrix Management Console 7.0 (SMC) is integrated into the service processor, and SMC allows you to provision storage in 5 steps. Enginuity 5874 provides the following enhanced features:

RAID Virtual Architecture :- Enginuity 5874 introduces a new RAID implementation infrastructure. This enhancement increases configuration options in SRDF environments by reducing the number of mirror positions for RAID 1 and RAID 5 devices. It also provides additional configuration options, for example allowing LUN migrations in a Concurrent or Cascaded SRDF environment. You can migrate devices between RAID levels/tiers.

Large Volume Support :- Enginuity 5874 increases the maximum volume size to approximately 240 GB for open systems environments and 223 GB for mainframe environments. DMX-4 allows a maximum hyper size of only about 65 GB.

512 Hyper Volumes per Physical Drive :- Enginuity 5874 supports up to 512 hyper volumes on a single drive, twice as many as Enginuity 5773 (DMX-3/4). You can improve flexibility and capacity utilization by configuring more granular volumes that more closely meet space requirements and leave less space unused.

Autoprovisioning Groups :- Autoprovisioning Groups reduce the complexity of Symmetrix device masking by allowing the creation of groups of host initiators, front-end ports, and storage volumes. This provides the ability to mask storage to multiple paths instead of one path at a time, reducing the time required and the potential for error in consolidated and virtualized server environments. You can script and schedule batch operations using SMC 7.0.

Concurrent Provisioning and Scripts :- Concurrent configuration changes provide the ability to run scripts concurrently instead of serially, improving system management efficiency. Uses for concurrent configuration changes include parallel device mapping, unmapping, and metavolume form and dissolve from different hosts.

Dynamic Provisioning Enhancements :- Dynamic configuration changes allow the dynamic setting of the BCV and dynamic SRDF device attributes and decrease the impact to host I/O during the corresponding configuration manager operations.

New Management Integration :- With Enginuity 5874, the Symmetrix Management Console (SMC) and SMI-S provider are available on the Symmetrix system's Service Processor. This frees host resources and simplifies Symmetrix system management; by attaching the Service Processor to your network, you can open SMC and manage the Symmetrix system from anywhere in the enterprise.

Enhanced Virtual LUN :- With Enginuity 5874, Virtual LUN technology provides the ability to non-disruptively change the physical location on disk, and/or the protection type, of Symmetrix logical volumes, and allows the migration of open systems, mainframe, and System i volumes to unallocated storage or to existing volumes. Organizations can respond more easily to changing business requirements when using tiered storage in the array.

Enhanced Virtual Provisioning Draining :- With Enginuity 5874, Virtual Provisioning support for draining of data devices allows the nondisruptive removal of one or more data devices from a thin device pool, without losing the data that belongs to the thin devices. This feature allows for improved capacity utilization.

Enhanced Virtual Provisioning Support for all RAID Types :- With Enginuity 5874, Virtual Provisioning no longer restricts RAID 5 data devices. Virtual Provisioning now supports all data device RAID types.
