
VMware hosts require a few mandatory FA bit settings before SAN storage can be provisioned to them. Apart from the FA bits, a series of steps is required, from installing HBAs, HBA firmware, and drivers, through zoning, mapping, and masking devices, to configuring kernel files and devices.

Let's assume we have already identified the Symmetrix FA ports for the VMware host and completed zoning on the switch. It is better to have a separate FA pair for the VMware host. (You can connect the VMware host to two FA pairs if you have enough FA resources available and are going to deploy a critical application that requires more performance.)

You can identify the FA ports available on the Symmetrix with:

symcfg list -connections


Verify the port flag settings:

symcfg list -fa <FA-Number> -p <Port> -v

(Use the FA number and port where your host is connected/zoned.)


The following FA bits/flags need to be set/enabled:

i)    Common Serial Number (C)
ii)   VCM State (VCM) (ACLX for V-MAX)
iii)  SCSI 3 (SC3)
iv)   SPC 2 (SPC2)
v)    Unique World Wide Name (UWWN)
vi)   Auto-negotiation (EAN)
vii)  Point to Point (P)

Note:- FA bit/flag requirements may vary depending on the Symmetrix model, but in most cases you need to enable the above bits for a VMware host.

Create a command file for setting the FA port flags; call it faflags.cmd with the entries below. Replace FA:Port with the actual director and port, and repeat the lines for each FA port in the pair:

# For C-Bit
set port FA:Port Common_Serial_Number=enable;

# For VCM-Bit
set port FA:Port VCM_State=enable;

# For SC3-Bit
set port FA:Port scsi_3=enable;

# For SPC2-Bit
set port FA:Port SPC2_Protocol_Version=enable;

# For UWWN-Bit
set port FA:Port Unique_WWN=enable;

# For EAN-Bit
set port FA:Port Auto_Negotiate=enable;

# For PTOP-Bit
set port FA:Port Init_Point_to_Point=enable;

Once you have prepared the command file, preview it first and then commit it:

symconfigure -sid <SymmID> -f faflags.cmd preview

symconfigure -sid <SymmID> -f faflags.cmd commit

Verify the port flag settings once again; the required FA flags should be enabled by now:

symcfg list -fa <FA-Number> -p <Port> -v

You are now ready to provision SAN storage for the VMware host.
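Putting the steps together, here is a minimal end-to-end sketch. The Symmetrix SID 1234 and director/port 7E:0 are hypothetical values used only for illustration; substitute your own, and note that option spellings can vary slightly between Solutions Enabler versions:

symcfg -sid 1234 list -fa 7e -p 0 -v           # check the current flags on FA-7E port 0
symconfigure -sid 1234 -f faflags.cmd preview  # syntax and safety check only
symconfigure -sid 1234 -f faflags.cmd commit   # apply the flag changes
symcfg -sid 1234 list -fa 7e -p 0 -v           # confirm the required flags are now enabled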


Let's understand the EMC Open Replicator product:- Open Replicator enables remote point-in-time copies, with full or incremental copy capabilities, to be used for high-speed data mobility, remote vaulting, migration, and distribution between EMC Symmetrix DMX and other qualified storage arrays. Open Replicator leverages the high-end Symmetrix storage architecture and offers unmatched deployment flexibility and massive scalability.

EMC Open Replicator is tightly integrated with the EMC TimeFinder and SRDF families of local and remote replication solutions. Open Replicator functionality:

- Protect lower-tier applications at remote locations.

- Push or pull data between Symmetrix DMX arrays and other qualified storage arrays in SAN/WAN environments.

- Create remote point-in-time copies of local production volumes for many purposes from data vaulting to remote testing and development.

- Ensure immediate data availability to host applications via Symmetrix DMX consistency technologies.

Considerations when using EMC Open Replicator for migration:


1) SYMAPI cannot validate third party or non-visible storage systems.
2) To protect against potential data loss due to a SAN failure or other connectivity issue during a hot pull operation, use the donor update option. When enabled, this feature causes all writes to the control device from the host to be immediately copied to the remote device as well. Because the data is fully copied to both the remote device and the control devices, if a failure occurs, the session can safely be terminated and created again to fully recover from any mid-copy failure.
3) Open Replicator uses FA resources. If you are using this utility in a production environment, verify with the SA that FA bandwidth has been assessed and that appropriate throttling parameters (pace or ceiling) have been set.
4) When using BCVs, the BCVs must be “visible” to the remote storage array. Thus, they must be mapped to an FA and the FA must be zoned to the destination storage. We highly recommend that BCVs not be mapped to the same FA as the control standard devices to avoid a negative impact on host I/O performance.
5) If a configuration uses a thin device as the destination in a pull or push copy operation, the thin device will be fully allocated, because Open Replicator creates a full-volume copy.
6) When performing an Open Replicator migration, always use the -v qualifier on the create command (see the example after this list). This ensures that, should the session fail, useful information is returned about which volume caused the error, allowing you to recognize zoning or masking issues more quickly.
7) Issuing create commands prior to Open Replicator migration activities allows confirmation that there will be no zoning or masking issues discovered during the migration window. This technique will only be successful if no changes have been made to the Symmetrix environment between issuance of the create and copy commands.
8) It is better and easier to use an Open Replicator management host for preparing, executing, and monitoring migration sessions than using one of the systems with volumes involved in the activity.
9) For Veritas file systems, PowerPath devices, and Oracle databases, the devices must be frozen just before the activate is performed and thawed as soon as the activate completes. Use the following options with the symrcopy activate command, when applicable:

-vxfs MountPoint (Veritas file system)

-ppath srcdevs (PowerPath device)

-rdb dbtype DbType -db DbName (database)

10) The device specified in the command line must match the device in the device file or the activate will fail.
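As referenced above, here is a minimal sketch of a hot pull migration session with donor update. The pair file lists the control (Symmetrix) device and the remote device WWN on each line; the Symmetrix ID, device number, WWN, and file name below are placeholders, not real values:

Contents of or_pairs.txt (one control/remote pair per line):
symdev=000190101234:0ABC wwn=60060160aabbccddeeff001122334455

symrcopy create -file or_pairs.txt -copy -pull -hot -donor_update -v
symrcopy activate -file or_pairs.txt
symrcopy query -file or_pairs.txt -i 60       # monitor until the copy completes
symrcopy terminate -file or_pairs.txt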

In the current storage market, storage tiers are defined by availability, functionality, performance, and cost. Data can move up and down the tiers as time and business requirements dictate.

Tier "0" is not new in storage market but for implementation purposes it has been difficult to accommodate because it requires best performance and lowest latency. Enterprise Flash disks (Solid State Disks) capable to meet this requirement. It is possible to get more performance for company most critical applications. The performance can be gained through using Flash drives supported in VMAX and DMX-4 systems. One Flash drive can deliver IOPS equivalent to 30 15K RPM hard disk drives with approximately 1 ms application response time. Flash memory achieves performance and the lowest latency ever available in the enterprise class storage array.

Tier "0" applications can be closely coupled with other storage tiers within the Symmetrix series for consistency and efficiency, reducing the cost of manual data layout or data migration from older disks to new high-speed disks.

Tier "0" storage can be used to accelerate online transaction processing, improving performance for large indices and frequently accessed database tables, e.g. Oracle and DB2 databases and SAP R/3. Tier 0 can also improve performance in batch processing and shorten batch processing windows.

Tier "0" storage will help applications that need the lowest possible latency and response time. The following applications can benefit from using Tier 0 storage:

- Algorithmic trading
- Data modeling
- Trade optimization
- Realtime data/feed processing
- Contextual web advertising
- Other realtime transaction systems
- Currency exchange and arbitrage

Tier "0" storage is most beneficial for applications with a high random read miss rate. If the random read miss percentage is low, the application will not see much performance difference, since writes and sequential reads/writes already leverage the Symmetrix cache to achieve the lowest possible response time.

For example, if the read hit percentage is high (>90%) compared to read misses, as in applications like DSS or streaming media, the improvements provided by Tier 0 storage will likely not be enough to be cost-effective.

Consider creating a point-in-time image of multiple devices. To capture a consistent image of an entire set of logical devices at the same time, you would traditionally need to shut down the application so that no I/O occurs while the image is created. That is a big problem in today's environment, where every company is looking for zero-downtime solutions.
The EMC solution to this problem is called "Enginuity Consistency Assist". When you create a set of sessions and invoke Enginuity Consistency Assist, the Symmetrix aligns the I/O of those devices and halts all I/O from the host systems very briefly (much faster than the applications can detect) while it creates the session. It then resumes normal operation without any application impact.
TimeFinder Consistent Split (using TimeFinder/Consistency Groups) allows splitting off a consistent, re-startable image of an Oracle database instance within seconds, with no interruption to the online Oracle database instance.
- Allows users to split off a dependent-write-consistent, re-startable image of an application without interrupting online services
- Uses TimeFinder/Consistency Groups to defer write I/O at the Symmetrix before a split
- A consistent split can be performed by any host running Solutions Enabler connected to the Symmetrix
- Tested and available on platforms including HP-UX, Solaris, AIX, Linux, and Windows
- No database shutdown or requirement to put the database into backup mode (Oracle)

Using TF/CG, consistent splits help to avoid the inconsistencies and restart problems that can occur when using Oracle hot-backup mode (not quiescing the database).
The major benefits of TF/CG are:
• No disruption to the online Oracle database to obtain a Point-in-Time image
• Provides a consistent, re-startable image of the Oracle database for testing new versions or database patch updates before deploying for use in production environments
• Can be used to obtain a business point of consistency for business restart requirements for which Oracle has been identified as one of multiple databases for such an environment.

The same benefits apply using TF/CG in a clustered environment as in a non-clustered environment:
- No disruption to the online Oracle database to obtain a Point-in-Time image in a Oracle single instance environment or when using Oracle Real Application Clusters
- Provides a consistent, re-startable image of the Oracle database for testing new versions or database patch updates before deploying for use in clustered production environments
- Can be used to obtain a business point of consistency for business restart requirements for which Oracle has been identified as one of multiple databases for such an environment.
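As a rough sketch of the consistent split workflow (the device group name oradb_dg is hypothetical; it is assumed to contain the Oracle standard devices with BCVs associated):

symmir -g oradb_dg establish -full -noprompt    # synchronize the BCVs with the standards
symmir -g oradb_dg query                        # wait until the pairs show Synchronized
symmir -g oradb_dg split -consistent -noprompt  # ECA-assisted consistent split, no database shutdown

The -consistent option is what invokes Enginuity Consistency Assist so that the BCV image is dependent-write consistent and re-startable.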

Auto-provisioning requires Enginuity 5874 or later. It simplifies Symmetrix provisioning by allowing you to create a group of devices (similar to a storage group on CLARiiON), a front-end port group, and a host initiator group, and then associate these groups with each other in a masking view.

The following are the basic steps for provisioning Symmetrix using Auto-Provisioning:-

1) Create a Storage Group
2) Create a Port Group
3) Create an Initiator Group
4) Associate the groups in a Masking View.


Creating a Storage Group:- A storage group is a component of both Auto-provisioning Groups and FAST (FAST will be discussed in a later post); both require Enginuity 5874. The maximum number of storage groups allowed per array is 8,192. A storage group can contain up to 4,096 devices, and a Symmetrix device can belong to more than one storage group.

Note:- By default, dynamic LUN addresses are assigned to each device. You can manually assign host LUN addresses for the devices you are adding to the group by clicking Set LUN Address in the Storage Group dialog box.

Creating a Port Group:- A port can belong to more than one port group, and each port must have the ACLX bit enabled. For example, if you want FA 5A and 12A to serve Windows hosts, you can create a port group named WIN_PortGrp or Win_FA5A_FA12A_PrtGrp, etc.

Creating an Initiator Group:- The maximum number of initiator groups allowed per Symmetrix array is 8,000. An initiator group can contain up to 32 initiators of any type and can also contain other initiator groups (cascaded to one level only).

Initiator Group name must be unique from other initiator groups on the array and cannot exceed 64 characters. Initiator group names are case-insensitive.

Creating a Masking View:- A masking view is simply the association of a storage group, a port group, and an initiator group, and you are done! The devices are automatically mapped to the ports in the selected port group and masked to the initiators in the selected initiator group.
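A minimal symaccess sketch of the four steps above. The SID, group names, device range, director ports, and WWN are hypothetical, and exact option ordering may differ slightly between Solutions Enabler versions:

symaccess -sid 1234 create -name VMware_SG -type storage devs 0100:0103
symaccess -sid 1234 create -name VMware_PG -type port -dirport 7E:0,8E:0
symaccess -sid 1234 create -name VMware_IG -type initiator -wwn 10000000c9aabbcc
symaccess -sid 1234 create view -name VMware_MV -sg VMware_SG -pg VMware_PG -ig VMware_IG
symaccess -sid 1234 show view VMware_MV        # verify the mapping and masking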

SRDF Pair Status


SRDF/S and SRDF/A configuration involves tasks such as suspending and resuming replication, failing over from the R1 side to the R2, restoring R1 or R2 volumes from their BCVs, and more. You perform these and other SRDF/S or SRDF/A operations using the symrdf command and, for BCV operations, the TimeFinder symmir command. The following are the SRDF pair states you will see during these procedures.

SyncInProg :- A synchronization is currently in progress between the R1 and the R2. There are existing invalid tracks between the two pairs and the logical link between both sides of an RDF pair is up.

Synchronized :- The R1 and the R2 are currently in a synchronized state. The same content exists on the R2 as the R1. There are no invalid tracks between the two pairs.

Split :- The R1 and the R2 are currently Ready to their hosts, but the link is Not Ready or Write Disabled.

Failed Over :- The R1 is currently Not Ready or Write Disabled and operations have been failed over to the R2.

R1 Updated :- The R1 is currently Not Ready or Write Disabled to the host, there are no local invalid tracks on the R1 side, and the link is Ready or Write Disabled.

R1 UpdInProg :- The R1 is currently Not Ready or Write Disabled to the host, there are invalid local (R1) tracks on the source side, and the link is Ready or Write Disabled.
Suspended :- The RDF links have been suspended and are Not Ready or Write Disabled. If the R1 is Ready while the links are suspended, any I/O accumulates as invalid tracks owed to the R2.

Partitioned :- The SYMAPI is currently unable to communicate through the corresponding RDF path to the remote Symmetrix. Partitioned may apply to devices within an RA group. For example, if SYMAPI is unable to communicate to a remote Symmetrix via an RA group, devices in that RA group are marked as being in the Partitioned state.

Mixed :- Mixed is a composite SYMAPI device group RDF pair state. Different SRDF pair states exist within a device group.

Invalid :- This is the default state when no other SRDF state applies. The combination of R1, R2, and RDF link states and statuses do not match any other pair state. This state may occur if there is a problem at the disk director level.

Consistent :- The R2 SRDF/A capable devices are in a consistent state. Consistent state signifies the normal state of operation for device pairs operating in asynchronous mode.
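To see which of these states a pair is in, you can query or verify the device group; the group name prod_dg below is a placeholder:

symrdf -g prod_dg query                      # reports the current pair state (e.g. Synchronized, Split, Suspended)
symrdf -g prod_dg verify -synchronized -i 30 # polls every 30 seconds until all pairs are Synchronized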

Symmetrix Optimizer improves array performance by continuously monitoring access patterns and migrating devices to achieve balance across the disks in the array. This process is carried out automatically based on user-defined parameters and is completely transparent to end users, hosts, and applications in the environment. Migration is performed with constant data availability and consistent protection.

Optimizer performs self-tuning of Symmetrix data configurations from the Symmetrix service processor by:
· Analyzing statistics about Symmetrix logical device activity.
· Determining which logical devices should have their physical locations swapped to enhance Symmetrix performance.
· Swapping logical devices and their data using internal Dynamic Reallocation Volumes (DRVs) to hold customer data while reconfiguring the system (on a device-to-device basis).

Symmetrix Optimizer can be managed via EMC Symmetrix Management Console or SYMCLI, where the user can define the following:

1) The Symmetrix devices to be optimized.
2) The priority of those devices.
3) A window of time that profiles the business workload.
4) A window of time in which Optimizer is allowed to swap.
5) Additional business rules.
6) The pace of the Symmetrix Optimizer volume copy mechanism.

After being initialized with the user-defined parameters, Symmetrix Optimizer operates totally autonomously on the Symmetrix service processor to perform the following steps.

1) Symmetrix Optimizer builds a database of device activity statistics on the Symmetrix back end.

2) Using the collected data, configuration information, and user-defined parameters, the Optimizer algorithm identifies busy and idle devices and their locations on the physical drives. The algorithm tries to minimize average disk service time by balancing I/O activity across physical disks, by locating busy devices close to each other on the same disk, and by locating busy devices on the faster areas of the disks. This is done by taking into account the speed of the disk, the disk geometry, and the actuator speed.

3) Once a solution for load balancing has been developed, the next phase is to carry out the Symmetrix device swaps. This is done using established EMC TimeFinder technology, which maintains data protection and availability. Users can specify whether swaps should occur in a completely automated fashion or whether the user is required to approve each Symmetrix device swap before the action is taken.

4) Once the swap function is complete, Symmetrix Optimizer continues data analysis for the next swap.

How Symmetrix Optimizer works:-

1) Automatically collects logical device activity data, based upon the devices and time window you define.

2) Identifies “hot” and “cold” logical devices, and determines on which physical drives they reside.

3) Compares physical drive performance characteristics, such as spindle speed, head actuator speed, and drive geometry.

4) Determines which logical device swaps would reduce physical drive contention and minimize average disk service times.

5) Using the Optimizer Swap Wizard, swaps logical devices to balance activity across the back end of the Symmetrix array.

Optimizer is designed to run automatically in the background, analyzing performance in the performance time windows you specify and performing swaps in the swap time windows you specify.

Return code handling for Windows and UNIX: The following lists the status and error codes that can be returned by the various SYMCLI commands on a Windows or UNIX platform; they are useful for troubleshooting.

Code Code symbol Description
___________________________________________________
0 CLI_C_SUCCESS -- CLI call completed successfully.
1 CLI_C_FAIL -- CLI call failed.
2 CLI_C_DB_FILE_IS_LOCKED -- Another process has an exclusive lock on the host database file.
3 CLI_C_SYM_IS_LOCKED -- Another process has an exclusive lock on the Symmetrix.
4 CLI_C_NOT_ALL_SYNCHRONIZED -- NOT all of the mirrored pairs are in the 'Synchronized' state.
5 CLI_C_NONE_SYNCHRONIZED -- NONE of the mirrored pairs are in the 'Synchronized' state.
6 CLI_C_NOT_ALL_UPDATED -- NOT all of the mirrored pairs are in the 'Updated' state.
7 CLI_C_NONE_UPDATED -- NONE of the mirrored pairs are in the 'Updated' state.
8 CLI_C_NOT_ALL_PINGED -- NOT all of the remote Symmetrix units can be pinged.
9 CLI_C_NONE_PINGED -- NONE of the remote Symmetrix units can be pinged.
10 CLI_C_NOT_ALL_SYNCHED -- NOT all of the mirrored pairs are in the 'Synchronized' state.
11 CLI_C_NONE_SYNCHED -- NONE of the mirrored pairs are in the 'Synchronized' state.
12 CLI_C_NOT_ALL_RESTORED -- NOT all of the pairs are in the 'Restored' state.
13 CLI_C_NONE_RESTORED -- NONE of the pairs are in the 'Restored' state.
14 CLI_C_NOT_ALL_VALID -- NOT all of the mirrored pairs are in a valid state.
15 CLI_C_NONE_VALID -- NONE of the mirrored pairs are in a valid state.
16 CLI_C_SYM_NOT_ALL_LOCKED -- NOT all of the specified Symmetrix units have an exclusive Symmetrix lock.
17 CLI_C_SYM_NONE_LOCKED --NONE of the specified Symmetrix units have an exclusive Symmetrix lock.
18 CLI_C_ALREADY_IN_STATE --The Device(s) is (are) already in the desired state or mode.
19 CLI_C_GK_IS_LOCKED -- All GateKeeper devices to the Symmetrix unit are currently locked.
20 CLI_C_WP_TRACKS_IN_CACHE -- Operation cannot proceed because the target device has Write Pending I/O in the cache.
21 CLI_C_NEED_MERGE_TO_RESUME --Operation cannot proceed without first performing a merge of the RDF Track Tables.
22 CLI_C_NEED_FORCE_TO_PROCEED --Operation cannot proceed in the current state except if you specify a force flag.
23 CLI_C_NEED_SYMFORCE_TO_PROCEED --Operation cannot proceed in the current state except if you specify a symforce flag.
24 CLI_C_NOT_IN_SYNC -- The Symmetrix configuration and the database file are NOT in sync.
25 CLI_C_NOT_ALL_SPLIT -- NOT all of the mirrored pairs are in the 'Split' state.
26 CLI_C_NONE_SPLIT -- NONE of the mirrored pairs are in the 'Split' state.
27 CLI_C_NOT_ALL_SYNCINPROG -- NOT all of the mirrored pairs are in the 'SyncInProg' state.
28 CLI_C_NONE_SYNCINPROG -- NONE of the mirrored pairs are in the 'SyncInProg' state.
29 CLI_C_NOT_ALL_RESTINPROG -- NOT all of the pairs are in the 'RestInProg' state.
30 CLI_C_NONE_RESTINPROG -- NONE of the pairs are in the 'RestInProg' state.
31 CLI_C_NOT_ALL_SUSPENDED -- NOT all of the mirrored pairs are in the 'Suspended' state.
32 CLI_C_NONE_SUSPENDED -- NONE of the mirrored pairs are in the 'Suspended' state.
33 CLI_C_NOT_ALL_FAILED_OVER -- NOT all of the mirrored pairs are in the 'Failed Over' state.
34 CLI_C_NONE_FAILED_OVER -- NONE of the mirrored pairs are in the 'Failed Over' state.
35 CLI_C_NOT_ALL_UPDATEINPROG -- NOT all of the mirrored pairs are in the 'R1 UpdInProg' state.
36 CLI_C_NONE_UPDATEINPROG -- NONE of the mirrored pairs are in the 'R1 UpdInProg' state.
37 CLI_C_NOT_ALL_PARTITIONED -- NOT all of the mirrored pairs are in the 'Partitioned' state.
38 CLI_C_NONE_PARTITIONED -- NONE of the mirrored pairs are in the 'Partitioned' state.
39 CLI_C_NOT_ALL_ENABLED -- NOT all of the mirrored pairs are in the 'Enabled' consistency state.
40 CLI_C_NONE_ENABLED -- NONE of the mirrored pairs are in the 'Enabled' consistency state.
41 CLI_C_NOT_ALL_SYNCHRONIZED_AND_ENABLED -- NOT all of the mirrored pairs are in the 'Synchronized' rdf state and the 'Enabled' consistency state.
42 CLI_C_NONE_SYNCHRONIZED_AND_ENABLED -- NONE of the mirrored pairs are in the 'Synchronized' rdf state and in the 'Enabled' consistency state.
43 CLI_C_NOT_ALL_SUSP_AND_ENABLED -- NOT all of the mirrored pairs are in the 'Suspended' rdf state and 'Enabled' consistency state.
44 CLI_C_NONE_SUSP_AND_ENABLED -- NONE of the mirrored pairs are in the 'Suspended' rdf state and the 'Enabled' consistency state.
45 CLI_C_NOT_ALL_SUSP_AND_OFFLINE -- NOT all of the mirrored pairs are in the 'Suspended' rdf state and 'Offline' link suspend state.
46 CLI_C_NONE_SUSP_AND_OFFLINE -- NONE of the mirrored pairs are in the 'Suspended' rdf state and the 'Offline' link suspend state.
47 CLI_C_WONT_REVERSE_SPLIT -- Performing this operation at this time will not allow you to perform the next BCV split as a reverse split.
48 CLI_C_CONFIG_LOCKED -- Access to the configuration server is locked.
49 CLI_C_DEVS_ARE_LOCKED -- One or more devices are locked.
50 CLI_C_MUST_SPLIT_PROTECT -- If a device was restored with the protect option, it must be split with the protect option.
51 CLI_C_PAIRED_WITH_A_DRV -- The function can not be performed since the STD device is already paired with a DRV device.
52 CLI_C_PAIRED_WITH_A_SPARE -- The function cannot be performed because the device is paired with a spare drive.
53 CLI_C_NOT_ALL_COPYINPROG -- NOT all of the pairs are in the 'CopyInProgress' state.
54 CLI_C_NONE_COPYINPROG --NONE of the pairs are in the 'CopyInProgress' state.
55 CLI_C_NOT_ALL_COPIED -- NOT all of the pairs are in the 'Copied' state.
56 CLI_C_NONE_COPIED -- NONE of the pairs are in the 'Copied' state.
57 CLI_C_NOT_ALL_COPYONACCESS -- NOT all of the pairs are in the 'CopyonAccess' state.
58 CLI_C_NONE_COPYONACCESS -- NONE of the pairs are in the 'CopyonAccess' state.
59 CLI_C_CANT_RESTORE_PROTECT --The protected restore operation can not be completed because there are write pendings or the BCV mirrors are not synchronized.
60 CLI_C_NOT_ALL_CREATED -- NOT all of the pairs are in the 'Created' state.
61 CLI_C_NONE_CREATED -- NONE of the pairs are in the 'Created' state.
62 CLI_C_NOT_ALL_READY -- NOT all of the BCVs local mirrors are in the 'Ready' state.
63 CLI_C_NONE_READY -- NONE of the BCVs local mirrors are in the 'Ready' state.
64 CLI_C_STD_BKGRND_SPLIT_IN_PROG -- The operation cannot proceed because the STD Device is splitting in the Background.
65 CLI_C_SPLIT_IN_PROG -- The operation cannot proceed because the pair is splitting.
66 CLI_C_NOT_ALL_COPYONWRITE -- NOT all of the pairs are in the 'CopyOnWrite' state.
67 CLI_C_NONE_COPYONWRITE -- NONE of the pairs are in the 'CopyOnWrite' state.
68 CLI_C_NOT_ALL_RECREATED -- Not all devices are in the 'Recreated' state.
69 CLI_C_NONE_RECREATED -- No devices are in the 'Recreated' state.
70 CLI_C_NOT_ALL_CONSISTENT -- NOT all of the mirrored pairs are in the 'Consistent' state.
71 CLI_C_NONE_CONSISTENT-- NONE of the mirrored pairs are in the 'Consistent' state.
72 CLI_C_MAX_SESSIONS_EXCEEDED-- The maximum number of sessions has been exceeded for the specified device.
73 CLI_C_NOT_ALL_PRECOPY -- Not all source devices are in the 'Precopy' state.
74 CLI_C_NONE_PRECOPY -- No source devices are in the 'Precopy' state.
75 CLI_C_NOT_ALL_PRECOPY_CYCLED -- Not all source devices have completed one precopy cycle.
76 CLI_C_NONE_PRECOPY_CYCLED -- No source devices have completed one precopy cycle.
77 CLI_C_CONSISTENCY_TIMEOUT -- The operation failed because of a Consistency window timeout.
78 CLI_C_NOT_ALL_FAILED -- NOT all of the pairs are in the 'Failed' state.
79 CLI_C_NONE_FAILED -- NONE of the pairs are in the 'Failed' state.
80 CLI_C_CG_NOT_CONSISTENT -- CG is NOT RDF-consistent.
81 CLI_C_NOT_ALL_CREATEINPROG -- NOT all of the pairs are in the 'CreateInProg' state.
82 CLI_C_NONE_CREATEINPROG -- None of the pairs are in the 'CreateInProg' state.
83 CLI_C_NOT_ALL_RECREATEINPROG -- NOT all of the pairs are in the 'RecreateInProg' state.
84 CLI_C_NONE_RECREATEINPROG -- None of the pairs are in the 'RecreateInProg' state.
85 CLI_C_NOT_ALL_TERMINPROG -- NOT all of the pairs are in the 'TerminateInProg' state.
86 CLI_C_NONE_TERMINPROG -- None of the pairs are in the 'TerminateInProg' state.
87 CLI_C_NOT_ALL_VERIFYINPROG -- NOT all of the pairs are in the 'VerifyInProg' state.
88 CLI_C_NONE_VERIFYINPROG -- None of the pairs are in the 'VerifyInProg' state.
89 CLI_C_NOT_ALL_VERIFIED -- NOT all of the pairs are in the requested states.
90 CLI_C_NONE_VERIFIED -- NONE of the pairs are in the requested states Note: This message is returned when multiple states are verified at once.
91 CLI_C_RDFG_TRANSMIT_IDLE -- RDF group is operating in SRDF/A Transmit Idle.
92 CLI_C_NOT_ALL_MIGRATED -- Not all devices are in the ' Migrated' state.
93 CLI_C_NONE_MIGRATED -- None of devices are in the 'Migrated' state.
94 CLI_C_NOT_ALL_MIGRATEINPROG -- Not all devices are in the 'MigrateInProg' state.
95 CLI_C_NONE_MIGRATEINPROG -- None of devices are in the 'MigrateInProg' state.
96 CLI_C_NOT_ALL_INVALID-- Not all devices are in the 'Invalid' state.
97 CLI_C_NONE_INVALID-- None of devices are in the 'Invalid' state.
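A short sketch of how these return codes can be used in a script. The device group name MyName_dg is a placeholder, and the specific codes checked (0, 4, 10) are taken from the table above:

#!/bin/sh
symmir -g MyName_dg verify        # by default verifies the 'Synchronized' state
rc=$?
if [ $rc -eq 0 ]; then
    echo "All mirrored pairs are Synchronized (CLI_C_SUCCESS)"
elif [ $rc -eq 4 ] || [ $rc -eq 10 ]; then
    echo "Not all mirrored pairs are Synchronized yet (code $rc)"
else
    echo "symmir verify returned code $rc - see the table above"
fi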

Any disk drive from any manufacturer can exhibit sector read errors due to media defects. This is a known and accepted reality in the disk drive industry, particularly with the high recording densities employed by recent products. These media defects only affect the drive’s ability to read data from a specific sector; they do not indicate general unreliability of the disk drive. The disk drives that EMC purchases from its vendors are within specifications for soft media errors according to the vendors as well as EMC’s own Supply Base Management organization.

Prior to shipment from manufacturing, disk drives have a surface scan operation performed that detects and reallocates any sectors that are defective. This operation is run to reduce the possibility that a disk drive will experience soft media errors in operation. Improper handling after leaving EMC manufacturing can lead to the creation of additional media defects, as can improper drive handling during installation or replacement.

When a disk drive encounters trouble reading data from a sector, the drive will automatically attempt recovery of the data through its various internal methods. Whether or not the drive is eventually successful at reading the sector, the drive will report the event to FLARE. FLARE will in turn log this event as a "Soft Media Error" (event code 820) and will reallocate the sector to a spare physical location on the drive (this does not affect the logical address of the sector). If the drive was eventually successful at reading the sector, the event is logged with sub-code 22 and FLARE writes that data directly into the new physical location. If the correct sector data was not available, the event is logged with sub-code 05. There are tools from EMC to verify disks and check details about these soft media errors, such as sniffer, the FBI tool, and SMART technology.

Organizations continually search for ways to both simplify storage management processes and improve storage capacity utilization. Several products have been released over the past few years that promise more efficient use of storage space. One of the technologies that is quickly catching on is thin provisioning. 3PAR was one of the first vendors to introduce the concept, and the rest quickly followed suit.

When provisioning storage for a new application, administrators must consider that application’s future capacity requirements rather than simply its current requirements. In order to reduce the risk that storage capacity will be exhausted, disrupting application and business processes, organizations often have allocated more physical storage to an application than is needed for a significant amount of time. This allocated but unused storage introduces operational costs. Even with the most careful planning, it often is necessary to provision additional storage in the future, which could potentially require an application outage.

EMC Virtual Provisioning:- Introduced with Enginuity 5773, Virtual Provisioning addresses some of these challenges. It builds on the base "thin provisioning" functionality, which is the ability to have a large "thin" device (volume) configured and presented to the host while consuming physical storage from a shared pool only as needed. Symmetrix Virtual Provisioning can improve storage capacity utilization and simplify storage management by presenting the application with sufficient capacity for an extended period of time, reducing the need to provision new storage frequently and avoiding costly allocated-but-unused storage. Symmetrix Management Console and the command line interface (CLI) are the primary management and monitoring tools.
Symmetrix Virtual Provisioning: - introduces a new type of host accessible device called a “thin device” that can be used in the same way that a regular device has traditionally been used. Unlike regular Symmetrix devices, thin devices do not need to have physical storage completely allocated at the time the devices are presented to a host. The physical storage that is used to supply disk space for a thin device comes from a shared thin storage pool that has been associated with the thin device. A thin storage pool is comprised of a new type of internal Symmetrix device called a data device that is dedicated to the purpose of providing the actual physical storage used by thin devices. When they are first created, thin devices are not associated with any particular thin pool. An operation referred to as “binding” must be performed to associate a thin device with a thin pool.

When a write is performed to a portion of the thin device, the Symmetrix allocates a minimum allotment of physical storage from the pool and maps that storage to a region of the thin device. The storage allocation operations are performed in small units of storage called "thin device extents." A round-robin mechanism is used to balance the allocation of data device extents across all of the data devices in the pool that are enabled and that have remaining unused capacity. The thin device extent size is 12 tracks (768 KB), which means that the initial bind of a thin device to a pool causes one thin device extent, or 12 tracks, to be allocated per thin device. So a four-member thin meta would cause 48 tracks (3,072 KB) to be allocated when the device is bound to a thin pool.

When a read is performed on a thin device, the data being read is retrieved from the appropriate data device in the storage pool to which the thin device is bound. When more storage is required to service existing or future thin devices, data devices can be added to existing thin storage pools. New thin devices can also be created and associated with existing thin pools.
It is possible for a thin device to be presented for host-use before all of the reported capacity of the device has been mapped. It is also possible for the sum of the reported capacities of the thin devices using a given pool to exceed the available storage capacity of the pool. Such a thin device configuration is said to be oversubscribed.

The storage is allocated from the pool using a round-robin approach that tends to stripe data across the data devices in the pool. Storage administrators should keep in mind that when implementing Virtual Provisioning, it is important to set realistic utilization objectives. Generally, organizations should target no higher than 60 to 80 percent capacity utilization per pool. A buffer should be provided for unexpected growth or a "runaway" application that consumes more physical capacity than was originally planned.
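A minimal CLI sketch of binding and monitoring thin devices. The SID 1234, thin device range 0100:0103, and pool name FC_Pool are hypothetical:

symconfigure -sid 1234 -cmd "bind tdev 0100:0103 to pool FC_Pool;" commit
symcfg -sid 1234 show -pool FC_Pool -thin -detail   # pool capacity, subscription, and per-device allocation
symcfg -sid 1234 list -tdev                         # thin devices and the pools they are bound to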

Benefits of Virtual Provisioning :
Less expensive to pre-provision storage
With traditional devices, the entire amount of physical storage has to be configured and dedicated at the time of provisioning.
With thin LUNs, the capacity presented can exceed the physical storage actually configured, and because the cost of physical storage drops steadily over time, deferring the purchase saves money.

Easy implementation of wide stripes
A configured thin pool ensures that a thin device will be widely striped across the back end in 768 KB extents. Thus even a single thin device requires no striping planning on the part of the administrator.

Performance
Performance for certain random I/O workloads can be improved because thin devices are widely striped across the back end. Typically, in a thin device implementation there is a modest response time overhead incurred the first time a write is performed to an unallocated region of a thin device. This overhead tends to disappear once the working set of the thin device has been written to.
--Contributed by Suraj Kawlekar

There are WWN decoder tools available for EMC arrays, but here is how to decode a WWN manually.
Each Symmetrix SAF port, RAF port, EF FICON port, or DAF port (DMX only) has a unique worldwide name (WWN). The WWN is associated with the Tachyon chip on the director and is intended to remain unique per director so that the director can be addressed on a storage area network. The Symmetrix SAF/RAF/DAF/EF WWN depends on the Symmetrix serial number, the director number, the processor letter, and the port on the processor. When the SAF/RAF/DAF is inserted into the Symmetrix, it discovers the Symmetrix serial number and slot number, and the WWNs are set for the ports on the director.

For Symm 4/4.8/5 (2-port or 4-port) Fibre Channel front-end directors, the WWN breakdown is as follows:

The director WWN (50060482B82F9654) can be broken down (in binary) as follows:

First 28 Bits (from the left, bits 63-36, binary) of WWN are assigned by the IEEE (5006048, the vendor ID for EMC Symmetrix)

5006048 | 2 B 8 2 F 9 6 5 4

The low-order nine hex digits (2B82F9654) in binary:

0010 1011 1000 0010 1111 1001 0110 0101 0100

Dropping the low 6 bits (port/side/slot) and regrouping gives 0 A E 0 B E 5 9 -----------------------> AE0BE59 hex = 182500953 Symm S/N

Bits 36 through 6 represent the Symmetrix serial number; the decode starts at bit 6 and works up to 36 to create the serial number. This is broken down as illustrated above.

The least significant 6 bits (bits 5 through 0) can be decoded to obtain the Symmetrix director number, processor, and port. Bit 5 designates the port on the processor (0 for A, 1 for B). Bit 4, known as the side bit, designates the processor (0 for A, 1 for B). The least significant 4 bits, 3 through 0, represent the Symm slot number.


01 0100 = 14 hex -----> director 5b port A

In review, this WWN represents EMC Symmetrix serial number 182500953, director 5b port A

For the Symm DMX product family (DMX-1/2/3), the WWN breakdown is as follows:

The director WWN (5006048ACCC86A32) can be broken down (in binary) as follows:

Again, like Symm 4/5, the first 28 bits (63-36) are assigned by the IEEE

5006048 | A C C C 8 6 A 3 2

The low-order nine hex digits (ACCC86A32) in binary:

1010 1100 1100 1100 1000 0110 1010 0011 0010

Dropping the half bit (bit 35) and the low 6 bits, then regrouping, gives B 3 3 2 1 A 8 ----------------------> B3321A8 hex = 187900328 Symm S/N

Bit 35 is now known as the 'Half' bit and is used to decode which half of the board the processor/port lies on.

Bits 34 through 6 represent the serial number; the decode starts at bit 6 and works up to bit 34 to create the serial number. This is broken down as illustrated above.

In conjunction with bit 35, the last 6 bits of the WWN represent the director number, processor and port. Bit 35, the 'Half' bit, represents either processor A and B, or C and D (0 for A and B, 1 for C and D). Bit 5 again represents the port on the processor (0 for A, 1 for B). Bit 4, the side bit, again represents the processor but with a slight change (if 0 then port A or C, if 1 then port B or D, depending on what the half bit is set to). The last 4 bits, 3 through 0, represent the Symm slot number.

1 11 0010 -------> half bit = 1 (processor C or D), port bit = 1 (port B), side bit = 1 (because half = 1 we are looking at the C and D processors only, so side = 1 means processor D)
0010 binary = 2 decimal (slot 2, or director 3)

In review, the WWN of 5006048ACCC86A32 represents EMC Symmetrix serial number 187900328, director 3d port B
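For convenience, here is a small bash sketch that automates the manual decode described above for the two example WWNs. It assumes bash's 64-bit shell arithmetic and the bit layout exactly as documented; the processor letter still has to be read from the half/side/port bits as explained:

#!/bin/bash
# Decode a Symmetrix front-end director WWN following the layout above.
decode_wwn() {
    family=$1                       # "symm45" (Symm 4/4.8/5) or "dmx" (DMX-1/2/3)
    wwn=$(( 16#$2 ))                # WWN as a 64-bit integer

    port=$((  (wwn >> 5) & 0x1 ))   # bit 5: 0 = port A, 1 = port B
    side=$((  (wwn >> 4) & 0x1 ))   # bit 4: side bit
    slot=$((   wwn       & 0xF ))   # bits 3-0: slot number (director = slot + 1)

    if [ "$family" = "dmx" ]; then
        half=$((   (wwn >> 35) & 0x1 ))         # bit 35: A/B half vs C/D half
        serial=$(( (wwn >> 6)  & 0x1FFFFFFF ))  # bits 34-6
    else
        half=0
        serial=$(( (wwn >> 6)  & 0x7FFFFFFF ))  # bits 36-6
    fi
    echo "WWN $2: serial=$serial director=$((slot + 1)) half=$half side=$side port=$port"
}

decode_wwn symm45 50060482B82F9654   # expect serial 182500953, director 5, port A
decode_wwn dmx    5006048ACCC86A32   # expect serial 187900328, director 3, port B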


Mount EMC BCVs on the same host

Example:
I have created a volume group, a logical volume, a filesystem, and a file on two EMC standard volumes. (For this test you need two standard hdisks and two BCV devices available; the specific hdisk and BCV device names are omitted in the commands below.)
# mkvg -f -y MyName_vg -s 16 hdisk hdisk
# mklv -y MyName_lv -b n MyName_vg 20
# crfs -v jfs -d MyName_lv -m /MyName_mp -A yes -p rw
# mount /MyName_mp
# lptest > /MyName_mp/lptest.out

For using EMC's TimeFinder I have to create a device group. (AIX works with volume groups; EMC's TimeFinder works with device groups.) With the following command the AIX volume group MyName_vg is converted to the device group MyName_dg:

# symvg vg2dg MyName_vg MyName_dg -dgtype RDF1

To use TimeFinder I have to associate two BCVs with this device group:

# symbcv -g MyName_dg associate dev
# symbcv -g MyName_dg associate dev

Now I have to set the BCV hdisks to the Defined state:

# rmbcv -a

Using the establish I mirror all data from the original hdisks to the BCVs (including the PVIDs!)

# symmir -g MyName_dg establish -full -exact

I have to wait until the establish is done:

# symmir -g MyName_dg query -i 10

When the establish is done, I have to unmount my filesystem and varyoff the volume group:
# umount /MyName_mp
# varyoffvg MyName_vg

Now I am in the right state to split the BCV copies
# symmir -g MyName_dg split -noprompt

When the split is done, I can varyon my volume group and mount my filesystem:

# symmir -g MyName_dg query -i 10
# varyonvg MyName_vg
# mount /MyName_mp

Now I configure the BCV hdisks back into AIX (bring them to the Available state):

# mkbcv -a

Now I am able to create a new volumegroup from the BCVs
# recreatevg -y MyName_bcv_vg -Y test -L /bcv hdisk hdisk
# lsvg -l MyName_bcv_vg
MyName_bcv_vg:
LV NAME          TYPE    LPs  PPs  PVs  LV STATE       MOUNT POINT
testMyName_lv    jfs     20   20   1    closed/syncd   /bcv/MyName_mp
testloglv00      jfslog  1    1    1    closed/syncd   N/A

# lspv | grep -v None
hdisk0 000039386adb2317 rootvg
hdisk 00003938874658c8 MyName_vg
hdisk 0000393887468473 MyName_vg
hdisk 000039388794adb8 MyName_bcv_vg
hdisk 000039388794b7f5 MyName_bcv_vg
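When you are finished with the BCV copy, a possible cleanup sequence before the next refresh cycle looks like this (a sketch based on the commands above; exportvg removes the BCV volume group definition so it can be recreated cleanly after the next split):

# umount /bcv/MyName_mp
# varyoffvg MyName_bcv_vg
# exportvg MyName_bcv_vg
# rmbcv -a

The next establish can then be incremental:

# symmir -g MyName_dg establish -noprompt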

About Me

Sr. Solutions Architect; Expertise: - Cloud Design & Architect - Data Center Consolidation - DC/Storage Virtualization - Technology Refresh - Data Migration - SAN Refresh - Data Center Architecture More info:- diwakar@emcstorageinfo.com
Blog Disclaimer: “The opinions expressed here are my personal opinions. Content published here is not read or approved in advance by EMC and does not necessarily reflect the views and opinions of EMC.”