An LSAN (Logical Storage Area Network) is a logical network that spans multiple physical fabrics. It allows specified devices from these autonomous fabrics to communicate with each other through an FC router without merging the physical fabrics.
- A LSAN zone is a traditional zone with a special naming convention.
- Zone names must start with the prefix "LSAN_"; the prefix is case-insensitive, so "lsan_" or "LSan_" also work.
- LSAN zones are architecturally compatible with both FOS and M-EOS fabrics.
- FC Router uses LSAN zones to determine which devices need to be exported/imported into which routed fabrics.
- LSAN zones must be configured in all fabrics where the shared physical devices exist.
- The router performs zoning enforcement for edge fabrics at the ingress Router EX Port.

LSAN Implementation Rules (a minimal configuration sketch follows this list):

- LSAN zone members, including aliases, must be defined using the device port WWN (WWPN).
- LSAN zone names on the routed fabrics do not need to be identical, but identical names are recommended for ease of administration and troubleshooting.
- LSAN zones in routed fabrics sharing devices are not required to have identical membership, but the shared devices must exist in both fabrics' LSAN zones.
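As a minimal sketch of how this looks on a Brocade edge switch (the zone configuration names are hypothetical; the WWPNs are taken from the example output below), an LSAN zone is created with the standard FOS zoning commands and simply uses the lsan_ prefix. The same zone, containing the shared WWPNs, is defined and enabled in each edge fabric:

Edge fabric 1:
switch1:admin> zonecreate "lsan_zone1", "10:00:00:00:98:23:12:11; 10:00:00:00:98:23:ab:cd"
switch1:admin> cfgadd "edge1_cfg", "lsan_zone1"
switch1:admin> cfgenable "edge1_cfg"

Edge fabric 2:
switch2:admin> zonecreate "lsan_zone1", "10:00:00:00:98:23:12:11; 10:00:00:00:98:23:ab:cd"
switch2:admin> cfgadd "edge2_cfg", "lsan_zone1"
switch2:admin> cfgenable "edge2_cfg"

With the zone enabled in both edge fabrics, the FC router detects the matching LSAN zones and imports the shared devices automatically.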

Once the LSAN zones are enabled, you can check the status of LSAN zones and their members from the FC router using the command lsanzoneshow -s:

Router:admin> lsanzoneshow -s
Fabric ID: 1 Zone Name: lsan_zone1
10:00:00:00:98:23:12:11 Exist
10:00:00:00:98:23:ab:cd Imported


Fabric ID: 2 Zone Name: lsan_zone1
10:00:00:00:98:23:12:11 Imported
10:00:00:00:98:23:ab:cd Exist

The output shows, for each routed fabric, which devices exist locally ("Exist") and which have been imported from the other routed fabric ("Imported").

PowerPath Migration Enabler is a host-based software product that enables other technologies, such as array-based replication and virtualization, to eliminate application downtime during data migrations or virtualization implementations. It allows EMC Open Replicator for Symmetrix and EMC Invista customers to eliminate downtime during data migrations from EMC storage to Symmetrix, and during virtualized deployments to Invista. PowerPath Migration Enabler, which leverages the same underlying technology as PowerPath, keeps arrays in sync during Open Replicator for Symmetrix data migrations with minimal impact on host resources. It also enables seamless deployment of Invista virtualized environments by encapsulating (bringing under its control) the volumes that will be virtualized. In addition, PowerPath Migration Enabler offers the following benefits (a hedged CLI sketch follows the lists):

PowerPath Migration Enabler with Open Replicator for Symmetrix:
- Eliminates planned downtime
- Provides flexibility in time to perform migration

PowerPath Migration Enabler with EMC Invista:
- Eliminates planned downtime
- Eliminates need for data copy and additional storage for data migration
- I/O redirection allows administrators to "preview" deployment without committing to redirection
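As a hedged CLI sketch only (the device names and migration handle are hypothetical, and option names and technology-type values should be verified against the PowerPath Migration Enabler documentation for your version), an Open Replicator based migration is typically driven from the host with the powermig command, roughly as follows:

powermig setup -techType OR -src emcpower10 -tgt emcpower22    (pair source and target; returns a migration handle)
powermig sync -handle 1           (start synchronization of the target with the source)
powermig query -handle 1          (check migration state and progress)
powermig selectTarget -handle 1   (direct I/O to the target in preparation for cutover)
powermig commit -handle 1         (commit the migration to the target volume)
powermig cleanup -handle 1        (release the source and remove the migration handle)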

EMC SRDF Modes


Conceptually, even operationally, SRDF is very similar to TimeFinder. About the only difference is that SRDF works across Symms, while TimeFinder works within a single Symm. That difference, inter-Symm versus intra-Symm, means that SRDF operations can cover quite a bit of ground geographically. With geographically separated Symms, the integrity of the data from one Symm to the other becomes a concern, so EMC provides a number of operational modes for SRDF. The choice between these modes is a balancing act between how quickly the calling application gets an acknowledgement back and how sure you need to be that the data has actually been received on the remote Symm.

Synchronous mode

Synchronous mode basically means that the remote Symm must have the I/O in cache before the calling application receives the acknowledgement. Depending on the distance between Symms, this may have a significant impact on performance, which is the main reason EMC suggests this setup only in a campus (damn near colocated) environment.

If you're particularly paranoid about ensuring data on one Symm is on the other, you can enable the Domino effect (I think you're supposed to be hearing suspense music in the background right about now...). Basically, the Domino effect ensures that the R1 devices will become "not ready" if the R2 devices can't be reached for any reason, effectively shutting down the filesystem(s) until the problem can be resolved.

Semi-synchronous mode

In semi-synchronous mode, the R2 devices are one write I/O (or less) out of sync with their R1 device counterparts. The application gets the acknowledgement as soon as the first write I/O reaches the local cache. The second I/O isn't acknowledged until the first is in the remote cache. This should speed up the application compared to synchronous mode. It does, however, mean that the data on the remote Symm might be a bit out of sync with the local one.

Adaptive Copy-Write Pending

This mode copies data over to the R2 volumes as quickly as it can without delaying the acknowledgement to the application. It is useful where some data loss is permissible and local performance is paramount.

There's a configurable skew parameter that sets the maximum allowable number of dirty tracks. Once that threshold of pending writes is reached, the system switches to the predetermined mode (typically semi-synchronous) until the remote Symm catches up, at which point it switches back to adaptive copy-write pending mode.
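As a rough illustration (the device group name prod_dg is hypothetical; check exact syntax and supported mode keywords against your Solutions Enabler/SYMCLI documentation), the SRDF mode for a device group is typically switched from the host with symrdf, along these lines:

symrdf -g prod_dg set mode sync       (synchronous)
symrdf -g prod_dg set mode semi       (semi-synchronous)
symrdf -g prod_dg set mode acp_wp     (adaptive copy-write pending)
symrdf -g prod_dg set domino on       (enable the Domino behaviour described above)
symrdf -g prod_dg query               (show link state, mode, and invalid tracks)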



Flash Drives in DMX


EMC announced Flash drive support in the CX-4 (the new generation of CLARiiON) and the DMX-4, introducing support for Tier 0 storage. Flash drives provide maximum performance for latency-sensitive applications. Flash drives, also referred to as solid state drives (SSDs), contain no moving parts and appear as standard Fibre Channel drives to existing Symmetrix management tools, allowing administrators to manage Tier 0 without special processes or custom tools. Tier 0 Flash storage is ideally suited for applications with high transaction rates and those requiring the fastest possible retrieval and storage of data, such as currency exchange and electronic trading systems, or real-time data feed processing.

A Symmetrix DMX-4 with Flash drives can deliver single-millisecond application response times and up to 30 times more IOPS than traditional 15,000 rpm Fibre Channel disk drives. Additionally, because there are no mechanical components, Flash drives require up to 98 percent less energy per IOPS than traditional disk drives. Database acceleration is one example of the performance impact of Flash drives. Flash drive storage can be used to accelerate online transaction processing (OLTP), improving performance for large indices and frequently accessed database tables. Examples of OLTP applications include Oracle and DB2 databases, and SAP R/3. Flash drives can also improve performance in batch processing and shorten batch processing windows.

Flash drive performance will help any application that needs the lowest latency possible. Examples include

· Algorithmic trading

· Currency exchange and arbitrage

· Trade optimization

· Realtime data/feed processing

· Contextual web advertising

· Other realtime transaction systems

· Data modeling

Flash drives are most beneficial with random read misses (RRM). If the RRM percentage is low, Flash drives may show less benefit since writes and sequential reads/writes already leverage Symmetrix cache to achieve the lowest possible response times. The local EMC SPEED Guru can do a performance analysis of the current workload to determine how the customer may benefit from Flash drives. Write response times of long distance SRDF/S replication could be high relative to response times from Flash drives. Flash drives cannot help with reducing response time due to long distance replication. However, read misses still enjoy low response times.

Flash drives can be used as clone source and target volumes. Flash drives can be used as SNAP source volumes. Virtual LUN Migration supports migrating volumes to and from Flash drives. Flash drives can be used with SRDF/S and SRDF/A. Metavolumes can be configured on Flash drives as long as all of the logicals in the metagroup are on Flash drives.

Limitations and Restrictions of Flash drives:

Due to the new nature of the technology, not all Symmetrix functions are currently supported on Flash drives. The following is a list of the current limitations and restrictions of Flash drives.

• Delta Set Extension and SNAP pools cannot be configured on Flash drives.
• RAID 1 and RAID 6 protection, as well as unprotected volumes, are currently not supported with Flash drives.
• TimeFinder/Mirror is currently not supported with Flash drives.
• iSeries volumes currently cannot be configured on Flash drives.
• Open Replicator of volumes configured on Flash drives is not currently supported.
• Secure Erase of Flash drives is not currently supported.
• Compatible Flash for z/OS and Compatible Native Flash for z/OS are not currently supported.
• TPF is not currently supported.

EMC Invista


Everybody talks about virtualization: server-based virtualization, host-based virtualization, and now network-based virtualization. EMC launched EMC Invista (probably the first network-based virtualization product) in May 2005. EMC Invista leverages intelligent SAN switches to deploy network-based, block-level storage virtualization. Invista takes advantage of specialized processing power in intelligent switches to perform I/O redirection at wire speed.
Its revolutionary split-path architecture places the virtualization intelligence in the network, where best applied, and with no impact on server or application performance. As a result, EMC Invista delivers higher application availability, reduced administrative overhead, more effective and efficient use of storage resources, and reduced costs when compared to other solutions using alternative approaches.
EMC Invista provides data migrations, pooling, tiered storage, non-disruptive technology refreshes across heterogeneous arrays, and operational efficiency via centralized and standardized volume management. It lets organizations reduce downtime while increasing data availability, and utilize storage assets efficiently, effectively, and economically.

- Enables non-disruptive data migration
- Provides dynamic volume mobility, network-based volume management, and heterogeneous point-in-time copies
- Employs a split-path architecture that leverages intelligent SAN switches for high performance and data integrity
- Integrates with EMC ControlCenter and Replication Manager for enterprise management, and is VMware ESX certified
- Supports heterogeneous storage environments
- Runs on intelligent switches using Brocade and Cisco technology, leveraging standard APIs

Single-initiator zoning rule - Each HBA port must be in a separate zone that contains it and the SP ports with which it communicates. EMC recommends single-initiator zoning as a best practice; a hedged zoning sketch follows these rules.
Fibre Channel fan-in rule - A server can be zoned to a maximum of 4 storage systems.
Fibre Channel fan-out rule - The Navisphere software license determines the number of servers you can connect to a CX3-10c, CX3-20c, CX3-20f, CX3-40c, CX3-40f, or CX3-80 storage system. The maximum number of connections between servers and a storage system is defined by the number of initiators supported per storage-system SP. An initiator is an HBA port in a server that can access a storage system. Note that some HBAs have multiple ports. Each HBA port that is zoned to an SP port is one path to that SP and the storage system containing that SP. Depending on the type of storage system and the connections between its SPs and the switches, an HBA port can be zoned through different switch ports to the same SP port or to different SP ports, resulting in multiple paths between the HBA port and an SP and/or the storage system. Note that the failover software running on the server may limit the number of paths supported from the server to a single storage-system SP and from a server to the storage system.
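As a minimal sketch of the single-initiator rule (alias names, WWPNs, and the configuration name are hypothetical), each zone on a Brocade switch contains exactly one HBA port plus the SP port it needs:

switch:admin> alicreate "host1_hba0", "10:00:00:00:c9:aa:bb:01"
switch:admin> alicreate "cx3_spa_p0", "50:06:01:60:41:e0:9a:11"
switch:admin> zonecreate "host1_hba0_spa_p0", "host1_hba0; cx3_spa_p0"
switch:admin> cfgadd "prod_cfg", "host1_hba0_spa_p0"
switch:admin> cfgenable "prod_cfg"

A second zone (for example host1_hba1_spb_p0) would be created the same way for the server's other HBA port, keeping one initiator per zone.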

Storage systems with Fibre Channel data ports - CX3-10c (SP ports 2 and 3), CX3-20c (SP ports 4 and 5), CX3-20f (all ports), CX3-40c (SP ports 4 and 5), CX3-40f (all ports), CX3-80 (all ports).
Number of servers and storage systems - As many as the available switch ports, provided each server follows the fan-in rule above and each storage system follows the fan-out rule above, using WWPN switch zoning.
Fibre connectivity - Fibre Channel Switched Fabric (FC-SW) connection to all server types.
Fibre Channel switches - Supported Fibre Channel switches.

Fibre Channel switch terminology:
Fabric - One or more switches connected by E_Ports. E_Ports are switch ports that are used only for connecting switches together.
ISL (Inter-Switch Link) - A link that connects two E_Ports on two different switches.
Path - A path is a connection between an initiator (such as an HBA port) and a target (such as an SP port in a storage system). Each HBA port that is zoned to a port on an SP is one path to that SP and the storage system containing that SP. Depending on the type of storage system and the connections between its SPs and the switch fabric, an HBA port can be zoned through different switch ports to the same SP port or to different SP ports, resulting in multiple paths between the HBA port and an SP and/or the storage system. Note that the failover software running on the server may limit the number of paths supported from the server to a single storage-system SP and from a server to the storage system.


LUN type to bind - Restrictions and recommendations (a CLI sketch follows this list):

Any LUN - You can bind only unbound disk modules. All disk modules in a LUN must have the same capacity to fully use the modules' storage space. In AX-series storage systems, binding disks into LUNs is not supported.

RAID 5 - You must bind a minimum of three and no more than sixteen disk modules. We recommend you bind five modules for more efficient use of disk space. In a storage system with SCSI disks, you should use modules on different SCSI buses for highest availability.

RAID 3 - You must bind exactly five or nine disk modules in a storage system with Fibre Channel disks, and exactly five disk modules in a storage system with SCSI disks. In a storage system with SCSI disks, you should use modules on separate SCSI buses for highest availability. You cannot bind a RAID 3 LUN until you have allocated storage-system memory for the LUN, unless the storage system is an FC4700 or CX-series. IMPORTANT: RAID 3 on non-FC4700/CX-series storage systems does not allow caching, so when binding RAID 3 LUNs the -c cache-flags switch does not apply.

RAID 1 - You must bind exactly two disk modules.

RAID 0 - You must bind a minimum of three and no more than sixteen disk modules. If possible in a storage system with SCSI disks, use modules on different SCSI buses for highest availability.

RAID 1/0 - You must bind a minimum of four and no more than sixteen disk modules, and it must be an even number of modules. Navisphere Manager pairs modules into mirrored images in the order in which you select them: the first and second modules you select are one pair of mirrored images, the third and fourth are another pair, and so on. The first module you select in each pair is the primary image, and the second module is the secondary image. If possible in a storage system with SCSI disks, the modules you select for each pair should be on different buses for highest availability.

Individual disk unit - None.
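As a hedged sketch only (the SP address, LUN number, RAID group, and capacity are hypothetical; verify the switches against your Navisphere CLI version), binding a RAID 5 LUN from the Navisphere Secure CLI looks roughly like this:

naviseccli -h 10.32.4.100 bind r5 20 -rg 0 -cap 100 -sq gb -sp a
naviseccli -h 10.32.4.100 getlun 20

The first command binds LUN 20 as RAID 5 in RAID group 0 with a 100 GB capacity owned by SP A; the second reports the LUN's properties so you can watch the bind complete.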
