Symmetrix Optimizer improves array performance by continuously monitoring access patterns and migrating devices to achieve balance across the disks in the array. This process is carried out automatically, based on user-defined parameters, and is completely transparent to end users, hosts, and applications in the environment. Migration is performed with constant data availability and consistent protection.

Optimizer performs self-tuning of Symmetrix data configurations from the Symmetrix service processor by:
· Analyzing statistics about Symmetrix logical device activity.
· Determining which logical devices should have their physical locations swapped to enhance Symmetrix performance.
· Swapping logical devices and their data using internal Dynamic Reallocation Volumes (DRVs) to hold customer data while reconfiguring the system (on a device-to-device basis).

Symmetrix Optimizer can be used via EMC Symmetrix Management Console or SYMCLI, where the user defines the following:

1) Symmetrix devices to be optimized.
2) Priority of those devices.
3) Window of time that profiles the business workload.
4) Window of time in which Optimizer is allowed to swap.
5) Additional business rules.
6) The pace of the Symmetrix Optimizer volume copy mechanism.

After being initialized with the user-defined parameters, Symmetrix Optimizer operates totally autonomously on the Symmetrix service processor to perform the following steps.

1) Symmetrix Optimizer builds a database of device activity statistics on the Symmetrix back end.

2) Using the collected data, configuration information, and user-defined parameters, the Optimizer algorithm identifies busy and idle devices and their locations on the physical drives. The algorithm tries to minimize average disk service time by balancing I/O activity across the physical disks, by locating busy devices close to each other on the same disk, and by locating busy devices on faster areas of the disks. This is done by taking into account the speed of the disk, the disk geometry, and the actuator speed.

3) Once a solution for load balancing has been developed, the next phase is to carry out the Symmetrix device swaps. This is done using established EMC TimeFinder technology, which maintains data protection and availability. Users can specify whether swaps should occur in a completely automated fashion or whether the user must approve Symmetrix device swaps before the action is taken.

4) Once the swap function is complete, Symmetrix Optimizer continues data analysis for the next swap.

How Symmetrix Optimizer works:-

1) Automatically collects logical device activity data, based upon the devices and time window you define.

2) Identifies “hot” and “cold” logical devices, and determines on which physical drives they reside.

3) Compares physical drive performance characteristics, such as spindle speed, head actuator speed, and drive geometry.

4) Determines which logical device swaps would reduce physical drive contention and minimize average disk service times.

5) Using the Optimizer Swap Wizard, swaps logical devices to balance activity across the back end of the Symmetrix array.

Optimizer is designed to run automatically in the background, analyzing performance in the performance time windows you specify and performing swaps in the swap time windows you specify.
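Optimizer collects its statistics on the service processor itself, but you can get a rough host-side view of back-end device activity with SYMCLI before deciding which devices and time windows to configure. A minimal sketch, assuming Symmetrix SID 44 and that the BACKEND statistics type is available in your Solutions Enabler release (verify the symstat options against your version's documentation):

# symstat -sid 44 -type BACKEND -i 60 -c 5

This samples back-end (DA) activity every 60 seconds for five samples; the busiest devices it reports are natural candidates to include in the Optimizer device list.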

Multipathing requirements for different storage arrays:-
All storage arrays: - Write cache must be disabled if not battery backed.
Topology: - No single failure should cause both HBA and SP failover, especially with active-passive storage arrays.

IBM TotalStorage DS4000 Family (formerly FAStT) –

The default host type must be LNXCL, or the Host Type must be set to LNXCL.
AVT (Auto Volume Transfer) is disabled in this host mode.

HDS 99xx and 95xx families –
HDS 9500V family (Thunder) requires two host modes:
Host mode 1 – Standard
Host mode 2 – Sun Cluster
HDS 99xx family (Lightning) and HDS TagmaStore USP require the host mode set to Netware.

EMC Symmetrix :- Enable the SPC2 and SC3 settings.

EMC CLARiiON – All initiator records must have

- Fail-over Mode = 1
- Initiator Type = “CLARiiON Open”
- Array CommPath = “Enabled” or 1

HP EVA :- For EVA3000/5000 with firmware 4.001 and above, and EVA4000/6000/8000 with firmware 5.031 and above, set the host type to VMware. Otherwise, set the host mode type to Custom. The custom host mode values are:
EVA3000/5000 firmware 3.x: 000000002200282E
EVA4000/6000/8000: 000000202200083E

HP XP :- For XP 128/1024/10000/12000, the host mode should be set to 0C (Windows), that is, zero-C.

NetApp :- No specific requirements

ESX Server Configuration :- Set the following Advanced Settings for the ESX Server host:-

Set Disk.UseLunReset to 1
Set Disk.UseDeviceReset to 0
A multipathing policy of Most Recently Used must be set for all LUNs hosting clustered disks for active-passive arrays. A multipathing policy of Most Recently Used or Fixed may be set for LUNs on active-active arrays. All FC HBAs must be of the same model.
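On an ESX 3.x host, the two advanced disk settings above can be checked and set from the service console as well as from the VI Client. A minimal sketch (the esxcfg-advcfg option names below are assumed to match the Disk.* settings listed above; verify against your ESX version's documentation):

# esxcfg-advcfg -s 1 /Disk/UseLunReset
# esxcfg-advcfg -s 0 /Disk/UseDeviceReset
# esxcfg-advcfg -g /Disk/UseLunReset
# esxcfg-advcfg -g /Disk/UseDeviceReset

The -s option sets a value and -g reads it back; the multipathing policy itself is set per LUN, for example from the VI Client.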

Return code handling for Windows and UNIX :- The following lists the possible status or error codes that can be returned by the various SYMCLI commands on a Windows or UNIX platform. They are useful for troubleshooting, and a short script showing how to act on them follows the table.

Code Code symbol Description
___________________________________________________
0 CLI_C_SUCCESS -- CLI call completed successfully.
1 CLI_C_FAIL -- CLI call failed.
2 CLI_C_DB_FILE_IS_LOCKED -- Another process has an exclusive lock on the Host database file.
3 CLI_C_SYM_IS_LOCKED -- Another process has an exclusive lock on the Symmetrix.
4 CLI_C_NOT_ALL_SYNCHRONIZED -- NOT all of the mirrored pairs are in the 'Synchronized' state.
5 CLI_C_NONE_SYNCHRONIZED -- NONE of the mirrored pairs are in the 'Synchronized' state.
6 CLI_C_NOT_ALL_UPDATED -- NOT all of the mirrored pairs are in the 'Updated' state.
7 CLI_C_NONE_UPDATED -- NONE of the mirrored pairs are in the 'Updated' state.
8 CLI_C_NOT_ALL_PINGED -- NOT all of the remote Symmetrix units can be pinged.
9 CLI_C_NONE_PINGED -- NONE of the remote Symmetrix units can be pinged.
10 CLI_C_NOT_ALL_SYNCHED -- NOT all of the mirrored pairs are in the 'Synchronized' state.
11 CLI_C_NONE_SYNCHED -- NONE of the mirrored pairs are in the 'Synchronized' state.
12 CLI_C_NOT_ALL_RESTORED -- NOT all of the pairs are in the 'Restored' state.
13 CLI_C_NONE_RESTORED -- NONE of the pairs are in the 'Restored' state.
14 CLI_C_NOT_ALL_VALID -- NOT all of the mirrored pairs are in a valid state.
15 CLI_C_NONE_VALID -- NONE of the mirrored pairs are in a valid state.
16 CLI_C_SYM_NOT_ALL_LOCKED -- NOT all of the specified Symmetrix units have an exclusive Symmetrix lock.
17 CLI_C_SYM_NONE_LOCKED --NONE of the specified Symmetrix units have an exclusive Symmetrix lock.
18 CLI_C_ALREADY_IN_STATE --The Device(s) is (are) already in the desired state or mode.
19 CLI_C_GK_IS_LOCKED -- All GateKeeper devices to the Symmetrix unit are currently locked.
20 CLI_C_WP_TRACKS_IN_CACHE -- Operation cannot proceed because the target device has Write Pending I/O in the cache.
21 CLI_C_NEED_MERGE_TO_RESUME --Operation cannot proceed without first performing a merge of the RDF Track Tables.
22 CLI_C_NEED_FORCE_TO_PROCEED --Operation cannot proceed in the current state except if you specify a force flag.
23 CLI_C_NEED_SYMFORCE_TO_PROCEED --Operation cannot proceed in the current state except if you specify a symforce flag.
24 CLI_C_NOT_IN_SYNC -- The Symmetrix configuration and the database file are NOT in sync.
25 CLI_C_NOT_ALL_SPLIT -- NOT all of the mirrored pairs are in the 'Split' state.
26 CLI_C_NONE_SPLIT -- NONE of the mirrored pairs are in the 'Split' state.
27 CLI_C_NOT_ALL_SYNCINPROG -- NOT all of the mirrored pairs are in the 'SyncInProg' state.
28 CLI_C_NONE_SYNCINPROG -- NONE of the mirrored pairs are in the 'SyncInProg' state.
29 CLI_C_NOT_ALL_RESTINPROG -- NOT all of the pairs are in the 'RestInProg' state.
30 CLI_C_NONE_RESTINPROG -- NONE of the pairs are in the 'RestInProg' state.
31 CLI_C_NOT_ALL_SUSPENDED -- NOT all of the mirrored pairs are in the 'Suspended' state.
32 CLI_C_NONE_SUSPENDED -- NONE of the mirrored pairs are in the 'Suspended' state.
33 CLI_C_NOT_ALL_FAILED_OVER -- NOT all of the mirrored pairs are in the 'Failed Over' state.
34 CLI_C_NONE_FAILED_OVER -- NONE of the mirrored pairs are in the 'Failed Over' state.
35 CLI_C_NOT_ALL_UPDATEINPROG -- NOT all of the mirrored pairs are in the 'R1 UpdInProg' state.
36 CLI_C_NONE_UPDATEINPROG -- NONE of the mirrored pairs are in the 'R1 UpdInProg' state.
37 CLI_C_NOT_ALL_PARTITIONED -- NOT all of the mirrored pairs are in the 'Partitioned' state.
38 CLI_C_NONE_PARTITIONED -- NONE of the mirrored pairs are in the 'Partitioned' state.
39 CLI_C_NOT_ALL_ENABLED -- NOT all of the mirrored pairs are in the 'Enabled' consistency state.
40 CLI_C_NONE_ENABLED -- NONE of the mirrored pairs are in the 'Enabled' consistency state.
41 CLI_C_NOT_ALL_SYNCHRONIZED_AND_ENABLED -- NOT all of the mirrored pairs are in the 'Synchronized' rdf state and the 'Enabled' consistency state.
42 CLI_C_NONE_SYNCHRONIZED_AND_ENABLED -- NONE of the mirrored pairs are in the 'Synchronized' rdf state and in the 'Enabled' consistency state.
43 CLI_C_NOT_ALL_SUSP_AND_ENABLED -- NOT all of the mirrored pairs are in the 'Suspended' rdf state and 'Enabled' consistency state.
44 CLI_C_NONE_SUSP_AND_ENABLED -- NONE of the mirrored pairs are in the 'Suspended' rdf state and the 'Enabled' consistency state.
45 CLI_C_NOT_ALL_SUSP_AND_OFFLINE -- NOT all of the mirrored pairs are in the 'Suspended' rdf state and 'Offline' link suspend state.
46 CLI_C_NONE_SUSP_AND_OFFLINE -- NONE of the mirrored pairs are in the 'Suspended' rdf state and the 'Offline' link suspend state.
47 CLI_C_WONT_REVERSE_SPLIT -- Performing this operation at this time will not allow you to perform the next BCV split as a reverse split.
48 CLI_C_CONFIG_LOCKED -- Access to the configuration server is locked.
49 CLI_C_DEVS_ARE_LOCKED -- One or more devices are locked.
50 CLI_C_MUST_SPLIT_PROTECT -- If a device was restored with the protect option, it must be split with the protect option.
51 CLI_C_PAIRED_WITH_A_DRV -- The function can not be performed since the STD device is already paired with a DRV device.
52 CLI_C_PAIRED_WITH_A_SPARE -- The function cannot be performed since the device is already paired with a spare.
53 CLI_C_NOT_ALL_COPYINPROG -- NOT all of the pairs are in the 'CopyInProgress' state.
54 CLI_C_NONE_COPYINPROG --NONE of the pairs are in the 'CopyInProgress' state.
55 CLI_C_NOT_ALL_COPIED -- NOT all of the pairs are in the 'Copied' state.
56 CLI_C_NONE_COPIED -- NONE of the pairs are in the 'Copied' state.
57 CLI_C_NOT_ALL_COPYONACCESS -- NOT all of the pairs are in the 'CopyonAccess' state.
58 CLI_C_NONE_COPYONACCESS -- NONE of the pairs are in the 'CopyonAccess' state.
59 CLI_C_CANT_RESTORE_PROTECT --The protected restore operation can not be completed because there are write pendings or the BCV mirrors are not synchronized.
60 CLI_C_NOT_ALL_CREATED -- NOT all of the pairs are in the 'Created' state.
61 CLI_C_NONE_CREATED -- NONE of the pairs are in the 'Created' state.
62 CLI_C_NOT_ALL_READY -- NOT all of the BCVs local mirrors are in the 'Ready' state.
63 CLI_C_NONE_READY -- NONE of the BCVs local mirrors are in the 'Ready' state.
64 CLI_C_STD_BKGRND_SPLIT_IN_PROG -- The operation cannot proceed because the STD Device is splitting in the Background.
65 CLI_C_SPLIT_IN_PROG -- The operation cannot proceed because the pair is splitting.
66 CLI_C_NOT_ALL_COPYONWRITE -- NOT all of the pairs are in the 'CopyOnWrite' state.
67 CLI_C_NONE_COPYONWRITE -- NONE of the pairs are in the 'CopyOnWrite' state.
68 CLI_C_NOT_ALL_RECREATED -- Not all devices are in the 'Recreated' state.
69 CLI_C_NONE_RECREATED -- No devices are in the 'Recreated' state.
70 CLI_C_NOT_ALL_CONSISTENT -- NOT all of the mirrored pairs are in the 'Consistent' state.
71 CLI_C_NONE_CONSISTENT-- NONE of the mirrored pairs are in the 'Consistent' state.
72 CLI_C_MAX_SESSIONS_EXCEEDED-- The maximum number of sessions has been exceeded for the specified device.
73 CLI_C_NOT_ALL_PRECOPY -- Not all source devices are in the 'Precopy' state.
74 CLI_C_NONE_PRECOPY -- No source devices are in the 'Precopy' state.
75 CLI_C_NOT_ALL_PRECOPY_CYCLED -- Not all source devices have completed one precopy cycle.
76 CLI_C_NONE_PRECOPY_CYCLED -- No source devices have completed one precopy cycle.
77 CLI_C_CONSISTENCY_TIMEOUT -- The operation failed because of a Consistency window timeout.
78 CLI_C_NOT_ALL_FAILED -- NOT all of the pairs are in the 'Failed' state.
79 CLI_C_NONE_FAILED -- NONE of the pairs are in the 'Failed' state.
80 CLI_C_CG_NOT_CONSISTENT -- CG is NOT RDF-consistent.
81 CLI_C_NOT_ALL_CREATEINPROG -- NOT all of the pairs are in the 'CreateInProg' state.
82 CLI_C_NONE_CREATEINPROG -- None of the pairs are in the 'CreateInProg' state.
83 CLI_C_NOT_ALL_RECREATEINPROG -- NOT all of the pairs are in the 'RecreateInProg' state.
84 CLI_C_NONE_RECREATEINPROG -- None of the pairs are in the 'RecreateInProg' state.
85 CLI_C_NOT_ALL_TERMINPROG -- NOT all of the pairs are in the 'TerminateInProg' state.
86 CLI_C_NONE_TERMINPROG -- None of the pairs are in the 'TerminateInProg' state.
87 CLI_C_NOT_ALL_VERIFYINPROG -- NOT all of the pairs are in the 'VerifyInProg' state.
88 CLI_C_NONE_VERIFYINPROG -- None of the pairs are in the 'VerifyInProg' state.
89 CLI_C_NOT_ALL_VERIFIED -- NOT all of the pairs are in the requested states.
90 CLI_C_NONE_VERIFIED -- NONE of the pairs are in the requested states. Note: This message is returned when multiple states are verified at once.
91 CLI_C_RDFG_TRANSMIT_IDLE -- RDF group is operating in SRDF/A Transmit Idle.
92 CLI_C_NOT_ALL_MIGRATED -- Not all devices are in the 'Migrated' state.
93 CLI_C_NONE_MIGRATED -- None of the devices are in the 'Migrated' state.
94 CLI_C_NOT_ALL_MIGRATEINPROG -- Not all devices are in the 'MigrateInProg' state.
95 CLI_C_NONE_MIGRATEINPROG -- None of the devices are in the 'MigrateInProg' state.
96 CLI_C_NOT_ALL_INVALID -- Not all devices are in the 'Invalid' state.
97 CLI_C_NONE_INVALID -- None of the devices are in the 'Invalid' state.
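In a script, these codes come back as the command's exit status, so you can branch on them directly. A minimal UNIX shell sketch (the device group name ProdRDF is illustrative; adjust the verify options to the state you care about):

#!/bin/sh
# Verify whether all RDF pairs in the group are in the 'Synchronized' state.
symrdf -g ProdRDF verify -synchronized
rc=$?
if [ $rc -eq 0 ]; then
    echo "CLI_C_SUCCESS: all pairs are Synchronized."
else
    echo "symrdf verify returned code $rc - look it up in the table above."
fi

On Windows, the same return code is available to batch files as %ERRORLEVEL% after the SYMCLI command completes.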

EMC has introduced the PowerPath Configuration Checker tool for customers, and I thought I would share some details about it. It will be very useful for anyone using PowerPath as host failover software. It checks your existing configuration against the EMC Support Matrix and gives you a detailed report on whether the configuration follows EMC support guidelines. This tool is currently available for the Windows operating system.

It performs the following checks:

· OS version verification
· Machine architecture as per the ESM (EMC Support Matrix)
· PowerPath version
· PowerPath eFix
· PowerPath license
· License policy
· I/O timeout
· EOL and EOSL (End of Life and End of Service Life)
· HBA model
· HBA driver
· HBA firmware
· Symmetrix microcode
· Symmetrix model
· CLARiiON fail-over
· CLARiiON FLARE code
· CLARiiON model
· Veritas DMP version
· Powermt custom


PowerPath Configuration Checker (PPCC) is a software program that verifies that a host is configured with the hardware and software required for PowerPath multipathing features (failover and load-balancing functions, licensing, and policies).

PPCC can facilitate:
1) Successful PowerPath deployments prior to and after a PowerPath installation.
2) Customer self-service for:
• Planning installations on hosts where PowerPath is not installed.
• Upgrading an existing installation.
• Troubleshooting, for example after configuration changes are made on a host that includes PowerPath, such as the installation of new software.

PPCC supports the following user tasks:

Planning — This task applies to a host on which PowerPath has never been installed or is not currently installed. PPCC can identify the software that needs to be installed to support a specific version of PowerPath. For example, PPCC can identify the HBA and driver version that can be installed to support a specific version of PowerPath.

Upgrade — This task applies to a host on which some version of PowerPath is installed. An upgrade (or downgrade) to a different version is required. PPCC can identify components of a configuration that need to change when a different version of PowerPath is to be installed. For example, PPCC can identify the
need to change the Storage OS version.

Diagnostic — This task applies to a host on which some version of PowerPath is installed or on which configuration changes have been made to PowerPath, to the host OS, and/or to other software on the host. This is the PPCC default mode.

For all of the listed tasks, PPCC can identify what changes to make to the PowerPath configuration to ensure continued support for failover and load balancing. Similarly, if PowerPath does not appear to be operating correctly, running EMC Reports and PPCC can assist with configuration problem analysis.
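PPCC is typically run together with EMC Reports data, but you can take a quick manual look at several of the same items with the standard PowerPath CLI. A minimal sketch using common powermt commands:

# powermt version
# powermt check_registration
# powermt display dev=all

The first two show the installed PowerPath version and license registration state; the last lists every managed device with its paths, policy, and path states, which is usually where configuration problems show up first.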

Any disk drive from any manufacturer can exhibit sector read errors due to media defects. This is a known and accepted reality in the disk drive industry, particularly with the high recording densities employed by recent products. These media defects only affect the drive’s ability to read data from a specific sector; they do not indicate general unreliability of the disk drive. The disk drives that EMC purchases from its vendors are within specifications for soft media errors according to the vendors as well as EMC’s own Supply Base Management organization.

Prior to shipment from manufacturing, disk drives have a surface scan operation performed that detects and reallocates any sectors that are defective. This operation is run to reduce the possibility that a disk drive will experience soft media errors in operation. Improper handling after leaving EMC manufacturing can lead to the creation of additional media defects, as can improper drive handling during installation or replacement.

When a disk drive encounters trouble reading data from a sector, the drive automatically attempts recovery of the data through its various internal methods. Whether or not the drive is eventually successful at reading the sector, it reports the event to FLARE. FLARE in turn logs this event as a “Soft Media Error” (event code 820) and reallocates the sector to a spare physical location on the drive (this does not affect the logical address of the sector). If the drive was eventually successful at reading the sector (event code 820 with sub-code 22), FLARE writes that data directly into the new physical location. If the correct sector data was not available, the event is logged with event code 820 and sub-code 05. EMC provides tools such as Sniffer, the FBI Tool, and SMART technology to verify disks and check the details of these soft media errors.
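If you want to see how often a given CLARiiON is logging these events, the SP event log can be pulled with Navisphere CLI and filtered for the 820 code. A minimal sketch (the SP hostname is illustrative, and naviseccli may need -User/-Password/-Scope options in your environment):

# naviseccli -h SPA_hostname getlog | grep 820

This dumps the SP log and keeps only the lines containing 820; as noted above, sub-code 22 entries mean the drive recovered the data itself, while sub-code 05 entries mean it could not.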

We discussed Symmetrix Virtual Provisioning in a previous post. Now we will discuss Virtual Provisioning configuration. Make sure you understand your storage environment before you run the commands below.

Configuring and viewing data devices and pools:

Data devices are devices with the datadev attribute, and only data devices can be part of a thin pool. Devices with different protection schemes can be supported for use in thin pools, depending on the specific Enginuity level. All devices with the datadev attribute are used exclusively for populating thin pools.

Create a command file (thin.txt) with the following syntax:

create dev count=10, config=2-Way-Mir, attribute=datadev, emulation=FBA, size=4602;

# symconfigure -sid 44 -file thin.txt commit -v -nop

A Configuration Change operation is in progress. Please wait...
Establishing a configuration change session...............Established.
Processing symmetrix 000190101244
{
create dev count=10, size=4602, emulation=FBA,
config=2-Way Mir, mvs_ssid=0000, attribute=datadev;
}
Performing Access checks..................................Allowed.
Checking Device Reservations..............................Allowed.
Submitting configuration changes..........................Submitted
…..
…..
…..
Step 125 of 173 steps.....................................Executing.
Step 130 of 173 steps.....................................Executing.
Local: COMMIT............................................Done.
Terminating the configuration change session..............Done.

The configuration change session has successfully completed.

# symdev list -sid 44 -datadev

Symmetrix ID: 000190101244
Device Name Directors Device
--------------------------- ------------- -------------------------------------
Sym Physical SA :P DA :IT Config Attribute Sts Cap(MB)
--------------------------- ------------- -------------------------------------
10C4 Not Visible ???:? 01A:C4 2-Way Mir N/A (DT) RW 4314
10C5 Not Visible ???:? 16C:D4 2-Way Mir N/A (DT) RW 4314
10C6 Not Visible ???:? 15B:D4 2-Way Mir N/A (DT) RW 4314
10C7 Not Visible ???:? 02D:C4 2-Way Mir N/A (DT) RW 4314
10C8 Not Visible ???:? 16A:D4 2-Way Mir N/A (DT) RW 4314
10C9 Not Visible ???:? 01C:C4 2-Way Mir N/A (DT) RW 4314
10CA Not Visible ???:? 16B:C4 2-Way Mir N/A (DT) RW 4314


A thin pool can be created using the symconfigure command, without adding data devices:

# symconfigure -sid 44 -cmd "create pool Storage type=thin;" commit -nop

Once the pool is created, data devices can be added to the pool and enabled:
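A minimal sketch of adding the data devices created above and enabling them in the pool; the exact symconfigure pool syntax varies by Solutions Enabler and Enginuity release, so verify it against the Array Controls documentation before running it:

# symconfigure -sid 44 -cmd "add dev 10C4:10CA to pool Storage type=thin, member_state=ENABLE;" commit -nop

The pool and its enabled data devices can then be checked with a thin pool listing, for example:

# symcfg -sid 44 list -pool -thin -detail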

EMC recently announced the Symmetrix V-Max, which is based on a Virtual Matrix architecture. Symmetrix V-Max runs the latest Enginuity release, 5874. The 5874 platform supports Symmetrix V-Max emulation level 121 and service processor level 102. The modular design of the V-Max series and Enginuity 5874 ensures data flow and integrity between hardware components. Symmetrix Management Console 7.0 (SMC) is integrated into the service processor, and SMC allows you to provision storage in five steps. Enginuity 5874 provides the following enhanced features:

RAID Virtual Architecture :- Enginuity 5874 introduces a new RAID implementation infrastructure. This enhancement increases configuration options in SRDF environments by reducing the number of mirror positions for RAID 1 and RAID 5 devices. It also provides additional configuration options, for example allowing LUN migrations in a Concurrent or Cascaded SRDF environment. You can migrate devices between RAID levels and tiers.

Large Volume Support :- Enginuity 5874 increases the maximum volume size to approximately 240 GB for open systems environments and 223 GB for mainframe environments. The DMX-4 allows a maximum hyper size of only about 65 GB.

512 Hyper Volumes per Physical Drive :- Enginuity 5874 supports up to 512 hyper volumes on a single drive, twice as many as Enginuity 5773 (DMX-3/DMX-4). You can improve flexibility and capacity utilization by configuring more granular volumes that more closely match space requirements and leave less space unused.

Autoprovisioning Groups :- Autoprovisioning Groups reduce the complexity of Symmetrix device masking by allowing the creation of groups of host initiators, front-end ports, and storage volumes. This provides the ability to mask storage to multiple paths instead of one path at a time, reducing the time required and the potential for error in consolidated and virtualized server environments. You can script and schedule batch operations using SMC 7.0; a masking-view sketch follows below.
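With Solutions Enabler 7.x, the same grouping model is exposed through the symaccess command. A hedged sketch of building a masking view from an initiator group, a port group, and a storage group; the group names, WWN, director ports, and device range are illustrative, and the exact symaccess option order should be checked against the Solutions Enabler 7.x documentation:

# symaccess -sid 44 create -name Host1_IG -type initiator -wwn 10000000c912345a
# symaccess -sid 44 create -name Host1_PG -type port -dirport 7E:0,10E:0
# symaccess -sid 44 create -name Host1_SG -type storage devs 0100:0103
# symaccess -sid 44 create view -name Host1_MV -sg Host1_SG -pg Host1_PG -ig Host1_IG

Once the view exists, adding another device to the storage group (or another initiator to the initiator group) presents it down every path in the view, which is what removes the one-path-at-a-time masking work.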

Concurrent Provisioning and Scripts :- Concurrent configuration changes provide the ability to run scripts concurrently instead of serially, improving system management efficiency. Uses for concurrent configuration changes include parallel device mapping, unmapping, and metavolume forming and dissolving from different hosts.

Dynamic Provisioning Enhancements :- Dynamic configuration changes allow the dynamic setting of the BCV and dynamic SRDF device attributes and decrease the impact to host I/O during the corresponding configuration manager operations.

New Management Integration :- With Enginuity 5874, the Symmetrix Management Console (SMC) and SMI-S provider are available on the Symmetrix system's Service Processor. This frees host resources and simplifies Symmetrix system management; by attaching the Service Processor to your network, you can open SMC and manage the Symmetrix system from anywhere in the enterprise.

Enhanced Virtual LUN :- With Enginuity 5874, Virtual LUN technology provides the ability to nondisruptively change the physical location on disk and/or the protection type of Symmetrix logical volumes, and allows the migration of open systems, mainframe, and System i volumes to unallocated storage or to existing volumes. Organizations can respond more easily to changing business requirements when using tiered storage in the array.
Enhanced Virtual Provisioning Draining:- With Enginuity 5874, Virtual Provisioning support for draining of data devices allows the nondisruptive removal of one or more data devices from a thin device pool, without losing the data that belongs to the thin devices. This feature allows for improved capacity utilization.
Enhanced Virtual Provisioning Support for All RAID Types :- With Enginuity 5874, Virtual Provisioning no longer restricts RAID 5 data devices. Virtual Provisioning now supports all data device RAID types.
