Single-initiator zoning rule - Each HBA port must be in a separate zone that contains it and the SP ports with which it communicates. EMC recommends single-initiator zoning as a best practice.
Fibre Channel fan-in rule - A server can be zoned to a maximum of 4 storage systems.
Fibre Channel fan-out rule - The Navisphere software license determines the number of servers you can connect to a CX3-10c, CX3-20c, CX3-20f, CX3-40c, CX3-40f, or CX3-80 storage system. The maximum number of connections between servers and a storage system is defined by the number of initiators supported per storage-system SP. An initiator is an HBA port in a server that can access a storage system. Note that some HBAs have multiple ports. Each HBA port that is zoned to an SP port is one path to that SP and the storage system containing that SP. Depending on the type of storage system and the connections between its SPs and the switches, an HBA port can be zoned through different switch ports to the same SP port or to different SP ports, resulting in multiple paths between the HBA port and an SP and/or the storage system. Note that the failover software running on the server may limit the number of paths supported from the server to a single storage-system SP and from a server to the storage system. The sketch below illustrates the single-initiator and fan-in rules.
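Purely as an illustration of these two rules (this is not an EMC tool; the server name, storage-system name, and WWPN placeholders are all made up), a minimal Python sketch:

# Illustrative sketch: build one zone per HBA port (single-initiator zoning) and
# check the fan-in rule of at most 4 storage systems per server.
# All names and WWPN placeholders below are fictitious.

MAX_FAN_IN = 4  # fan-in rule: a server may be zoned to at most 4 storage systems

# SP-port WWPNs each HBA port must reach, grouped by storage system (hypothetical values).
targets = {
    "serverA_hba0_wwpn": {"CX3-20c_array1": ["spA_port4_wwpn", "spB_port4_wwpn"]},
    "serverA_hba1_wwpn": {"CX3-20c_array1": ["spA_port5_wwpn", "spB_port5_wwpn"]},
}

# Single-initiator zoning: each zone contains exactly one HBA port plus its SP ports.
zones = {}
for hba_wwpn, systems in targets.items():
    sp_wwpns = [wwpn for ports in systems.values() for wwpn in ports]
    zones["zone_" + hba_wwpn] = [hba_wwpn] + sp_wwpns

# Fan-in check: count the distinct storage systems this server is zoned to.
systems_seen = {name for systems in targets.values() for name in systems}
if len(systems_seen) > MAX_FAN_IN:
    print("Fan-in rule violated:", len(systems_seen), "storage systems zoned to one server")

for name, members in zones.items():
    print(name, "->", members)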

Storage systems with Fibre Channel data ports - CX3-10c (SP ports 2 and 3), CX3-20c (SP ports 4 and 5), CX3-20f (all ports), CX3-40c (SP ports 4 and 5), CX3-40f (all ports), CX3-80 (all ports).
Number of servers and storage systems As many as the available switch ports, provided each server follows the fan-in rule above and each storage system follows the fan-out rule above, using WWPN switch zoning.
Fibre connectivity - Fibre Channel Switched Fabric (FC-SW) connection to all server types.
Fibre Channel switches - Supported Fibre Channel switches.
Fibre Channel switch terminology
Fabric - One or more switches connected by E_Ports. E_Ports are switch ports that are used only for connecting switches together.
ISL (Inter-Switch Link) - A link that connects two E_Ports on two different switches.
Path - A path is a connection between an initiator (such as an HBA port) and a target (such as an SP port in a storage system). Each HBA port that is zoned to a port on an SP is one path to that SP and the storage system containing that SP. Depending on the type of storage system and the connections between its SPs and the switch fabric, an HBA port can be zoned through different switch ports to the same SP port or to different SP ports, resulting in multiple paths between the HBA port and an SP and/or the storage system. Note that the failover software running on the server may limit the number of paths supported from the server to a single storage-system SP and from a server to the storage system.


LUN to bind - Restrictions and recommendations (the disk-module count rules are also summarized in a short sketch after this list)
Any LUN - You can bind only unbound disk modules. All disk modules in a LUN must have the same capacity to fully use the modules' storage space.
Note: In AX-series storage systems, binding disks into LUNs is not supported.
RAID 5 - You must bind a minimum of three and no more than sixteen disk modules. We recommend you bind five modules for more efficient use of disk space. In a storage system with SCSI disks, you should use modules on different SCSI buses for highest availability. *
RAID 3 - You must bind exactly five or nine disk modules in a storage system with Fibre Channel disks and exactly five disk modules in a storage system with SCSI disks. In a storage system with SCSI disks, you should use modules on separate SCSI buses for highest availability. You cannot bind a RAID 3 LUN until you have allocated storage-system memory for the LUN, unless the LUN is on an FC4700 or CX-series storage system. *
IMPORTANT: RAID 3 on storage systems other than the FC4700 and CX-series does not allow caching; when binding RAID 3 LUNs, the -c cache-flags switch does not apply.
RAID 1 - You must bind exactly two disk modules. *
RAID 0 - You must bind a minimum of three and no more than sixteen disk modules. If possible, in a storage system with SCSI disks, use modules on different SCSI buses for highest availability. *
RAID 1/0 - You must bind a minimum of four and no more than sixteen disk modules, and the number of modules must be even. Navisphere Manager pairs modules into mirrored images in the order in which you select them. The first and second modules you select are a pair of mirrored images; the third and fourth modules you select are another pair of mirrored images; and so on. The first module you select in each pair is the primary image, and the second module is the secondary image. If possible, in a storage system with SCSI disks, the modules you select for each pair should be on different buses for highest availability. *
Individual disk unit - None.
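A minimal, purely illustrative Python sketch of the disk-module count rules above (the function name and RAID-type labels are my own, not Navisphere bind syntax):

# Illustrative only: encodes the disk-module count rules listed above so a
# planned bind can be sanity-checked. Not an EMC tool or API.
def valid_bind(raid_type, num_disks):
    """Return True if num_disks is a legal module count for the given RAID type."""
    if raid_type == "RAID 5":        # 3-16 modules (5 recommended)
        return 3 <= num_disks <= 16
    if raid_type == "RAID 3":        # exactly 5 or 9 with Fibre Channel disks; exactly 5 with SCSI disks
        return num_disks in (5, 9)
    if raid_type == "RAID 1":        # exactly 2 modules
        return num_disks == 2
    if raid_type == "RAID 0":        # 3-16 modules
        return 3 <= num_disks <= 16
    if raid_type == "RAID 1/0":      # 4-16 modules, even count (mirrored pairs)
        return 4 <= num_disks <= 16 and num_disks % 2 == 0
    if raid_type == "individual disk unit":   # a single module; no other restrictions
        return num_disks == 1
    return False

print(valid_bind("RAID 1/0", 6))   # True: three mirrored pairs
print(valid_bind("RAID 3", 7))     # False: RAID 3 needs exactly 5 or 9 modules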

The global reserved LUN pool works with replication software, such as SnapView, SAN Copy, and MirrorView/A to store data or information required to complete a replication task. The reserved LUN pool consists of one or more private LUNs. The LUN becomes private when you add it to the reserved LUN pool. Since the LUNs in the reserved LUN pool are private LUNs, they cannot belong to storage groups and a server cannot perform I/O to them.

Before starting a replication task, the reserved LUN pool must contain at least one LUN for each source LUN that will participate in the task. You can add any available LUNs to the reserved LUN pool. Each storage system manages its own LUN pool space and assigns a separate reserved LUN (or multiple LUNs) to each source LUN.
All replication software that uses the reserved LUN pool shares the resources of the reserved LUN pool. For example, if you are running an incremental SAN Copy session on a LUN and a snapshot session on another LUN, the reserved LUN pool must contain at least two LUNs - one for each source LUN. If both sessions are running on the same source LUN, the sessions will share a reserved LUN.
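As a small illustration of that counting rule (the session and LUN names are hypothetical), the minimum number of reserved LUNs is simply the number of distinct source LUNs involved:

# The reserved LUN pool needs at least one reserved LUN per distinct source LUN;
# sessions running on the same source LUN share that source LUN's reserved LUN(s).
sessions = [
    ("incremental_san_copy", "LUN_10"),   # hypothetical (session type, source LUN) pairs
    ("snapview_snapshot",    "LUN_11"),
    ("snapview_snapshot",    "LUN_10"),   # shares the reserved LUN already assigned to LUN_10
]

distinct_sources = {source for _, source in sessions}
print("Minimum reserved LUNs required:", len(distinct_sources))   # -> 2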

Estimating a suitable reserved LUN pool size

Each reserved LUN can vary in size. However, using the same size for each LUN in the pool is easier to manage because the LUNs are assigned without regard to size; that is, the first available free LUN in the global reserved LUN pool is assigned. Since you cannot control which reserved LUNs are used for a particular replication session, EMC recommends that you use a standard size for all reserved LUNs. The size of these LUNs depends on what you want to optimize. If you want to optimize space utilization, the recommendation is to create many small reserved LUNs, which allows sessions requiring minimal reserved LUN space to use one or a few reserved LUNs, and sessions requiring more reserved LUN space to use multiple reserved LUNs. On the other hand, if you want to optimize the total number of source LUNs, the recommendation is to create many large reserved LUNs, so that even sessions that require more reserved LUN space consume only a single reserved LUN.

The following considerations should assist in estimating a suitable reserved LUN pool size for the storage system.
If you wish to optimize space utilization, use the size of the smallest source LUN as the basis of your calculations. If you wish to optimize the total number of source LUNs, use the size of the largest source LUN as the basis of your calculations. If you have a standard online transaction processing (OLTP) configuration, use reserved LUNs sized at 10-20% of the source LUN size. This tends to be an appropriate size to accommodate the copy-on-first-write activity.
If you plan on creating multiple sessions per source LUN, anticipate a large number of writes to the source LUN, or anticipate a long duration time for the session, you may also need to allocate additional reserved LUNs. In any of these cases, you should increase the calculation accordingly. For instance, if you plan to have 4 concurrent sessions running for a given source LUN, you might want to increase the estimated size by a factor of 4, raising the typical size to 40-80%.
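A back-of-the-envelope example of that calculation (all figures below are assumptions for illustration, not an EMC-published formula):

import math

source_lun_gb = 200          # assumed size of the source LUN used as the basis
cow_fraction = 0.20          # 10-20% guideline to absorb copy-on-first-write activity
sessions_per_source = 4      # planned concurrent sessions on this source LUN

reserved_space_gb = source_lun_gb * cow_fraction * sessions_per_source
print(f"Reserved space needed for this source LUN: ~{reserved_space_gb:.0f} GB")   # ~160 GB

# If you standardize on small reserved LUNs (optimizing space utilization),
# spread that total across several LUNs of a fixed size:
reserved_lun_size_gb = 20    # assumed standard reserved LUN size
count = math.ceil(reserved_space_gb / reserved_lun_size_gb)
print(f"Assign {count} reserved LUNs of {reserved_lun_size_gb} GB each")            # 8 LUNs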

Let's talk about the SRDF feature in DMX for disaster recovery, remote replication, and data migration. In today's business environment it is imperative to have the necessary equipment and processes in place to meet stringent service-level requirements. Downtime is no longer an option. This means you may need to remotely replicate your business data to ensure availability. Remote data replication is the most challenging of all disaster recovery activities. Without the right solution it can be complex, error prone, labor intensive, and time consuming.

SRDF/S addresses these problems by maintaining real-time data mirrors of specified Symmetrix logical volumes. The implementation is a remote mirror, Symmetrix to Symmetrix.
- The most flexible synchronous solution in the industry
- Cost-effective solution with native GigE connectivity
- Proven reliability
- Simultaneous operation with SRDF/A, SRDF/DM, and/or SRDF/AR in the same system
- Dynamic, non-disruptive mode change between SRDF/S and SRDF/A
- Concurrent SRDF/S and SRDF/A operations from the same source device
- A powerful component of SRDF/Star, providing multi-site continuous replication over distance with zero-RPO service levels
- Business resumption is now a matter of a system restart. No transportation, restoration, or restoring from tape is required. And SRDF/S supports any environment that connects to a Symmetrix system: mainframe, open systems, NT, AS/400, or Celerra.
- ESCON fiber, Fibre Channel, Gigabit Ethernet, T3, ATM, IP, and SONET rings are supported, providing choice and flexibility to meet specific service-level requirements. SRDF/S can provide real-time disk mirrors across long distances without application performance degradation, along with reduced communication costs. System consistency is provided by ensuring that all related data volumes are handled identically, a feature unique to EMC.

Hope this little article will help you understand SRDF/S.

TimeFinder/Clone creates full-volume copies of production data, allowing you to run simultaneous tasks in parallel on Symmetrix systems. In addition to real-time, nondisruptive backup and restore, TimeFinder/Clone is used to compress the cycle time for such processes as application testing, software development, and loading or updating a data warehouse. This significantly increases efficiency and productivity while maintaining continuous support for the production needs of the enterprise.
- Ability to protect Clone BCVs with RAID-5
- Create instant mainframe SNAPs of datasets or logical volumes for OS/390 data, compatible with STK SNAPSHOT for RVA
- Facilitate more rapid testing of new versions of operating systems, database managers, file systems, etc., as well as new applications
- Load or update data warehouses as needed
- Allow proactive database validation, thus minimizing exposure to faulty applications
- Allow multiple copies to be retained at different checkpoints for lowered RPO and RTO, thus improving service levels
- Can be applied to data volumes across multiple Symmetrix devices using EMC's unique consistency technology (TimeFinder/Consistency Group option required)
