EMC has traditionally protected failing drives using Dynamic Spares. A Dynamic Spare takes a copy of the data from a failing drive and acts as a temporary mirror of that data until the drive can be replaced. The data is then copied back to the new drive, at which point the Dynamic Spare returns to the spare drive pool. Two copy processes are therefore required: one to copy data to the Dynamic Spare and one to copy it back to the new drive. The copy process may impact performance and, since the Dynamic Spare takes a mirror position, can affect other dynamic devices such as BCVs.


Permanent Sparing overcomes many of these limitations by copying the data only once, to a drive which replaces the failing drive and takes its original mirror position. Since the Permanent Spare does not take an additional mirror position, it will not affect TimeFinder/Mirror operations.

In some instances Permanent Sparing uses Dynamic Sparing as an interim step. This is described below, together with the requirements for Permanent Sparing.


- Permanent Sparing is supported on all flavours of DMX.

- A Permanent Spare replaces the original failing drive and will take its original mirror position.

- Requires sufficient drives of the same type as the failing drives to be installed and configured as spares.

- Needs to be enabled in the bin file; this can be done via Symcli (see the sketch after this list).

- Permanent Sparing will alter the back end bin and cannot be initiated when there is a Configuration Lock on the box.

- When enabled, it reduces the need for a CE to attend site for drive changes, since drives can be replaced in batches.

- A Permanent Spare follows all configuration rules to ensure both performance and redundancy.

- Enginuity Code level must support the feature.
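
As a quick sanity check before relying on the feature, Solutions Enabler can report the spare drives and the array configuration. The following is a hedged sketch: the Symmetrix ID 1234 is a placeholder, and option availability varies by Solutions Enabler and Enginuity release, so verify against the release documentation.

symdisk -sid 1234 list -hotspares     (list the drives configured as spares)
symcfg -sid 1234 list -v              (review array-wide configuration details)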


The following are the Ten Rules provided by Brocade for zoning:
Rule 1: Type of Zoning (Hard, Soft, Hardware Enforced, Soft Porting) – If security is a priority, then a Hard Zone-based architecture coupled with Hardware Enforcement is recommended.

Rule 2: Use of Aliases – Aliases are optional with zoning, but using them forces some structure when defining your zones and will also aid future administrators of the zoned fabric. Structure is the word that comes to mind here.
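
As an illustration, aliases, zones, and a configuration can be built up on a Brocade switch as follows; the alias names, WWPNs, zone name, and configuration name are made-up placeholders (use cfgcreate instead of cfgadd if no configuration exists yet):

alicreate "Host1_HBA0", "10:00:00:00:c9:2b:f1:40"
alicreate "Symm_FA3A", "50:06:04:82:cc:19:3a:01"
zonecreate "z_Host1_Symm_FA3A", "Host1_HBA0; Symm_FA3A"
cfgadd "Prod_cfg", "z_Host1_Symm_FA3A"
cfgsave
cfgenable "Prod_cfg"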

Rule 3: Does the site need an extra level of security that Secure Fabric OS provides? – Add
Secure Fabric OS into the Zone Architecture if extra security is required.

Rule 4: From where will the fabric be managed? – If a SilkWorm 12000 is part of the fabric, then the user should use it to administer zoning within the fabric.

Rule 5: Interoperability Fabric – If the fabric includes a SilkWorm 12000 and the user needs to support a third-party switch product, then only WWN zoning can be used; QuickLoop and similar features are not available.

Rule 6: Is the fabric going to have QLFA or QL in it? – If the user is running Brocade Fabric OS v4.0, then there are a couple of things to consider before creating and setting up QLFA zones:
QuickLoop Zoning
QL/QL zones cannot run on switches running Brocade Fabric OS v4.0. Brocade Fabric
OS v4.0 can still manage (create, remove, update) QL zones on any non-v4.0 switch.
QuickLoop Fabric Assist
Brocade Fabric OS v4.0 cannot have a Fabric Assist host directly connected to it.
However, Brocade Fabric OS v4.0 can still be part of a Fabric Assist zone if a Fabric
Assist host is connected to a non-v4.0 switch.

Rule 7: Testing a (new) zone configuration. – Before implementing a zone, the user should run the Zone Analyzer and isolate any possible problems. This is especially useful as fabrics increase in size.

Rule 8: Prep work needed before enabling/changing a zone configuration. – Before enabling or changing a fabric configuration, the user should verify that no one is issuing I/O in the zone that will change. Changing a zone under active I/O can have a serious impact within the fabric, such as databases breaking or node panics. The same goes for disks that are mounted: if the user changes a zone while a node has the storage in question mounted, that storage may “vanish” due to the zone change, which may cause nodes to panic or applications to break. Changes to zones should therefore be made during planned maintenance; most sites have an allocated time each day to perform maintenance work.

Rule 9: Potential post work requirements after enabling/changing a zone configuration. – After
changing or enabling a zone configuration, the user should confirm that nodes and storage are
able to see and access one another. Depending on the platform, the user may need to reboot one
or more nodes in the fabric with the new changes to the zone.
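
One hedged way to verify this from the switch side on Brocade Fabric OS (output details vary by release):

cfgactvshow   (display the zone configuration currently in effect)
nsshow        (list devices logged in to the local switch's name server)
nsallshow     (list the 24-bit addresses of all devices in the fabric)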

Rule 10: LUN masking in general. – LUN Masking should be used in conjunction with fabric
zoning for maximum effectiveness.
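
For example, on a Symmetrix DMX the LUN masking side is typically handled with symmask. The following is a hedged sketch in which the Symmetrix ID, HBA WWPN, FA director/port, and device numbers are placeholders; check the exact options against the Solutions Enabler documentation for the release in use.

symmask -sid 1234 -wwn 10:00:00:00:c9:2b:f1:40 -dir 3a -p 0 add devs 00A1,00A2
symmask -sid 1234 refresh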

An LSAN is a Logical Storage Area Network: a logical network that spans multiple physical fabrics and allows specified devices from these autonomous fabrics to communicate with each other through an FC router without merging the physical fabrics.
- A LSAN zone is a traditional zone with a special naming convention.
- Zone names must start with the prefix “LSAN_” (the prefix is case-insensitive, so “lsan_” or “LSan_” also work).
- LSAN zones are architecturally compatible with FOS and M-EOS.
- FC Router uses LSAN zones to determine which devices need to be exported/imported into which routed fabrics.
- LSAN zones must be configured in all fabrics where the shared physical devices exist.
- The router performs zoning enforcement for edge fabrics at the ingress Router EX Port.

LSAN Implementation Rules:

- LSAN zone members must be defined using the device Port WWN (WWPN); this applies to zone members defined via aliases as well.
- LSAN zone names on the routed fabrics do not need to be identical, but matching names are recommended for ease of administration and troubleshooting.
- LSAN zones in routed fabrics that share devices are not required to have identical membership, but each shared device must exist in the LSAN zones of both fabrics.
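
As an illustration, the lsan_zone1 shown in the output below could be defined in each edge fabric using the standard zoning commands. The WWPNs are copied from that output, the configuration names are made-up placeholders, and cfgcreate would be used if no configuration exists yet.

On a switch in edge fabric 1:
zonecreate "lsan_zone1", "10:00:00:00:98:23:12:11; 10:00:00:00:98:23:ab:cd"
cfgadd "Fabric1_cfg", "lsan_zone1"
cfgenable "Fabric1_cfg"

On a switch in edge fabric 2:
zonecreate "lsan_zone1", "10:00:00:00:98:23:12:11; 10:00:00:00:98:23:ab:cd"
cfgadd "Fabric2_cfg", "lsan_zone1"
cfgenable "Fabric2_cfg"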

Once the LSAN zones are enabled, you will be able to check the status of LSAN zones and their members from the FC Router using the command lsanzoneshow -s

Router:admin>lsanzoneshow -s
Fabric ID: 1 Zone Name: lsan_zone1
10:00:00:00:98:23:12:11 Exist
10:00:00:00:98:23:ab:cd Imported


Fabric ID: 2 Zone Name: lsan_zone1
10:00:00:00:98:23:12:11 Imported
10:00:00:00:98:23:ab:cd Exist

The output shows which devices exist locally (Exist) and which are imported (Imported) in each of the routed fabrics.

PowerPath Migration Enabler is a host-based software product that enables other technologies, such as array-based replication and virtualization, to eliminate application downtime during data migrations or virtualization implementations. PowerPath Migration Enabler allows EMC Open Replicator for Symmetrix and EMC Invista customers to eliminate downtime during data migrations from EMC storage to Symmetrix, and during virtualized deployments to Invista. PowerPath Migration Enabler, which leverages the same underlying technology as PowerPath, keeps arrays in sync during Open Replicator for Symmetrix data migrations, with minimal impact to host resources. It also enables seamless deployment of Invista virtualized environments by encapsulating (bringing under its control) the volumes that will be virtualized. In addition, EMC PowerPath boasts the following benefits:

PowerPath Migration Enabler with Open Replicator for Symmetrix:
- Eliminates planned downtime
- Provides flexibility in time to perform migration
PowerPath Migration Enabler with EMC Invista:
- Eliminates planned downtime
- Eliminates need for data copy and additional storage for data migration
- I/O redirection allows Administrators to “preview” deployment without committing to redirection

EMC SRDF Mode


Conceptually, even operationally, SRDF is very similar to TimeFinder. About the only difference is that SRDF works across Symms, while TimeFinder works internally to one Symm. That difference, inter-Symm versus intra-Symm, means that SRDF operations can cover quite a bit of ground geographically. With the advent of geographically separated Symms, the integrity of the data from one Symm to the other becomes a concern. EMC has a number of operational modes in which SRDF operates. The choice between these operational modes is a balancing act between how quickly the calling application gets an acknowledgement back and how sure you need to be that the data has been received on the remote Symm.

Synchronous mode

Synchronous mode basically means that the remote Symm must have the I/O in cache before the calling application receives the acknowledgement. Depending on the distance between Symms, this may have a significant impact on performance, which is the main reason EMC suggests this setup in a campus (damn near colocated) environment only.

If you're particularly paranoid about ensuring that data on one Symm is on the other, you can enable the Domino effect (I think you're supposed to be hearing suspense music in the background right about now...). Basically, the Domino effect ensures that the R1 devices will become "not ready" if the R2 devices can't be reached for any reason, effectively shutting down the filesystem(s) until the problem can be resolved.
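
A hedged Solutions Enabler sketch of enabling this for an RDF device group (the group name prod_rdf is a placeholder; confirm the syntax against the symrdf documentation for the installed release):

symrdf -g prod_rdf set domino on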

Semi-synchronous mode

In semi-synchronous mode, the R2 devices are one (or less) write I/O out of sync with their R1 device counterparts. The application gets the acknowledgement as soon as the first write I/O gets to the local cache. The second I/O isn't acknowledged until the first is in the remote cache. This should speed up the application over the synchronous mode. It does, however, mean that your data might be a bit out of sync with the local symm.

Adaptive Copy-Write Pending

This mode copies data over to the R2 volumes as quickly as it can; however, it doesn't delay the acknowledgement to the application. This mode is useful where some data loss is permissible and local performance is paramount.

There's a configurable skew parameter that lists the maximum allowable dirty tracks. Once that number of pending I/Os is reached, the system switches to the predetermined mode (probably semi-synchronous) until the remote symm catches up. At that point, it switches back to adaptive copy-write pending mode.
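
For reference, here is a hedged sketch of switching between these modes with Solutions Enabler. The device group name prod_rdf and the skew value are placeholders, and the mode keywords (sync, semi, acp_wp) and skew syntax should be verified against the symrdf documentation for the release in use.

symrdf -g prod_rdf set mode sync
symrdf -g prod_rdf set mode semi
symrdf -g prod_rdf set mode acp_wp skew 1000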



Flash Drives in DMX


EMC announced Flash drive support in the CX-4 (the new generation CLARiiON) and DMX-4, and with Flash drives began supporting Tier 0 storage. Flash drives provide maximum performance for latency-sensitive applications. Flash drives, also referred to as solid state drives (SSD), contain no moving parts and appear as standard Fibre Channel drives to existing Symmetrix management tools, allowing administrators to manage Tier 0 without special processes or custom tools. Tier 0 Flash storage is ideally suited for applications with high transaction rates and those requiring the fastest possible retrieval and storage of data, such as currency exchange and electronic trading systems, or real-time data feed processing.

A Symmetrix DMX-4 with Flash drives can deliver single-millisecond application response times and up to 30 times more IOPS than traditional 15,000 rpm Fibre Channel disk drives. Additionally, because there are no mechanical components, Flash drives require up to 98 percent less energy per IOPS than traditional disk drives. Database acceleration is one example of the performance impact of Flash drives. Flash drive storage can be used to accelerate online transaction processing (OLTP), improving performance with large indices and frequently accessed database tables. Examples of OLTP applications include Oracle and DB2 databases, and SAP R/3. Flash drives can also improve performance in batch processing and shorten batch processing windows.

Flash drive performance will help any application that needs the lowest latency possible. Examples include

· Algorithmic trading

· Currency exchange and arbitrage

· Trade optimization

· Realtime data/feed processing

· Contextual web advertising

· Other realtime transaction systems

· Data modeling

Flash drives are most beneficial with random read misses (RRM). If the RRM percentage is low, Flash drives may show less benefit since writes and sequential reads/writes already leverage Symmetrix cache to achieve the lowest possible response times. The local EMC SPEED Guru can do a performance analysis of the current workload to determine how the customer may benefit from Flash drives. Write response times of long distance SRDF/S replication could be high relative to response times from Flash drives. Flash drives cannot help with reducing response time due to long distance replication. However, read misses still enjoy low response times.

Flash drives can be used as clone source and target volumes. Flash drives can be used as SNAP source volumes. Virtual LUN Migration supports migrating volumes to and from Flash drives. Flash drives can be used with SRDF/s and SRDF/A. Metavolumes can be configured on Flash drives as long as all of the logicals in the metagroup are on Flash drives.

Limitations and Restrictions of Flash drives:

Due to the new nature of the technology, not all Symmetrix functions are currently supported on Flash drives. The following is a list of the current limitations and restrictions of Flash drives.

• Delta Set Extension and SNAP pools cannot be configured on Flash drives.
• RAID 1 and RAID 6 protection, as well as unprotected volumes, are currently not supported with Flash drives.
• TimeFinder/Mirror is currently not supported with Flash drives.
• iSeries volumes currently cannot be configured on Flash drives.
• Open Replicator of volumes configured on Flash drives is not currently supported.
• Secure Erase of Flash drives is not currently supported.
• Compatible Flash for z/OS and Compatible Native Flash for z/OS are not currently supported.
• TPF is not currently supported.

EMC Invista


Everybody talks about virtualization, such as server-based and host-based virtualization, and in May 2005 EMC launched EMC Invista, probably the first network-based virtualization product. EMC Invista leverages intelligent SAN switches to deploy network-based, block-level storage virtualization. Invista takes advantage of specialized processing power in intelligent switches to perform I/O redirection at wire speed.
Its revolutionary split-path architecture places the virtualization intelligence in the network, where best applied, and with no impact on server or application performance. As a result, EMC Invista delivers higher application availability, reduced administrative overhead, more effective and efficient use of storage resources, and reduced costs when compared to other solutions using alternative approaches.
EMC Invista provides data migrations, pooling, tiered storage, non-disruptive technology refreshes across heterogeneous arrays, and operational efficiency via centralized and standardized volume management. It lets organizations reduce downtime while increasing data availability, and utilize storage assets efficiently, effectively, and economically.

- Enables non-disruptive data migration
- Provides dynamic volume mobility, network-based volume management, and heterogeneous point-in-time copies
- Employs a split-path architecture that leverages intelligent SAN switches for high performance and data integrity
- Integrates with EMC ControlCenter and Replication Manager for enterprise management, and is VMware ESX certified
- Supports heterogeneous storage environments
- Runs on intelligent switches using Brocade and Cisco technology, leveraging standard APIs.
