Brocade provides the following ten rules for zoning:
Rule 1: Type of Zoning (Hard, Soft, Hardware Enforced, Soft Porting) – If security is a
priority, then a Hard Zone-based architecture coupled with Hardware Enforcement is
recommended.

Rule 2: Use of Aliases – Aliases are optional with zoning, but using them imposes structure
when defining zones and will aid future administrators of the zoned fabric. Structure is the
key benefit here.
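A minimal sketch of alias-based zoning on a Brocade switch follows; the prompt, alias names, WWPNs, zone name, and configuration name are all illustrative only:

switch:admin> alicreate "Host1_HBA0", "10:00:00:00:c9:2b:5f:01"
switch:admin> alicreate "Sym_FA3A", "50:06:04:8a:cc:c8:6a:32"
switch:admin> zonecreate "zone_Host1_Sym", "Host1_HBA0; Sym_FA3A"
switch:admin> cfgcreate "prod_cfg", "zone_Host1_Sym"
switch:admin> cfgenable "prod_cfg"

Reading "zone_Host1_Sym" in a zone database later is far clearer than reading two raw WWNs, which is exactly the structure this rule is after.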

Rule 3: Does the site need the extra level of security that Secure Fabric OS provides? – Add
Secure Fabric OS into the zone architecture if extra security is required.

Rule 4: From where will the fabric be managed? – If a SilkWorm 12000 is part of the fabric,
then it should be used to administer zoning within the fabric.

Rule 5: Interoperability Fabric – If the fabric includes a SilkWorm 12000 and the user needs to
support a third-party switch product, then only WWN zoning is available; QuickLoop and other
proprietary zone types are not supported.

Rule 6: Is the fabric going to have QLFA or QL in it? – If the user is running Brocade Fabric OS
v4.0, then there are a couple of things to consider before creating and setting up QLFA zones:
QuickLoop Zoning
QL/QL zones cannot run on switches running Brocade Fabric OS v4.0. Brocade Fabric
OS v4.0 can still manage (create, remove, update) QL zones on any non-v4.0 switch.
QuickLoop Fabric Assist
A Fabric Assist host cannot be directly connected to a switch running Brocade Fabric OS
v4.0. However, a v4.0 switch can still be part of a Fabric Assist zone if the Fabric
Assist host is connected to a non-v4.0 switch.

Rule 7: Testing a (new) zone configuration – Before implementing a zone, the user should run
the Zone Analyzer and isolate any possible problems. This is especially useful as fabrics
increase in size.

Rule 8: Prep work needed before enabling/changing a zone configuration – Before enabling or
changing a fabric configuration, the user should verify that no one is issuing I/O in the zone that
will change. A zone change under active I/O can have a serious impact within the fabric, such as
broken databases or node panics. The same applies to mounted disks: if a zone is changed while a
node has the affected storage mounted, that storage may "vanish" from the node, causing panics
or application failures. Zone changes should therefore be made during a maintenance window;
most sites have an allocated time each day to perform maintenance work.
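One simple pre-check, sketched here for a Brocade switch with an illustrative port number, is to look at per-port throughput and counters to confirm the affected ports are quiet before touching the zone:

switch:admin> portperfshow
switch:admin> portstatsshow 4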

Rule 9: Potential post work requirements after enabling/changing a zone configuration – After
changing or enabling a zone configuration, the user should confirm that nodes and storage can
see and access one another. Depending on the platform, the user may need to reboot one or more
nodes in the fabric so they pick up the zone changes.
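On many platforms a full reboot can be avoided by rescanning for devices instead; a sketch for a Linux host, assuming the HBA appears as host0:

echo "- - -" > /sys/class/scsi_host/host0/scan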

Rule 10: LUN masking in general – LUN masking should be used in conjunction with fabric
zoning for maximum effectiveness.
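As a sketch of how the two layers combine on a Symmetrix with Solutions Enabler: the HBA is zoned to the FA port in the fabric (as in Rule 2) and then explicitly granted access to its devices with symmask. The Symmetrix ID, WWN, director, port, and device numbers below are illustrative:

symmask -sid 1234 -wwn 10000000c92b5f01 -dir 3a -p 0 add devs 0100,0101
symmask -sid 1234 refresh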

An LSAN (Logical Storage Area Network) is a logical network that spans multiple physical fabrics. It allows specified devices from these autonomous fabrics to communicate with each other through an FC router without merging the physical fabrics.
- A LSAN zone is a traditional zone with a special naming convention.
- Zone names must start with the prefix "LSAN_"; the prefix is case insensitive, so "lsan_" and "LSan_" are also valid.
- LSAN zones are architecturally compatible with FOS and M-EOS.
- FC Router uses LSAN zones to determine which devices need to be exported/imported into which routed fabrics.
- LSAN zones must be configured in all fabrics where the shared physical devices exist.
- The router performs zoning enforcement for edge fabrics at the ingress router EX_Port.

LSAN Implementation Rules:

- LSAN zone members must be defined using the device port WWN (WWPN); this applies to zone members referenced through aliases as well.
- LSAN zone names on the routed fabrics do not need to be identical, but identical names are recommended for ease of administration and troubleshooting.
- LSAN zones in routed fabrics that share devices are not required to have identical membership, but each shared device must be a member of the LSAN zone in both fabrics (see the sketch below).
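A minimal sketch of this, using the two WWPNs from the lsanzoneshow output below and assuming existing zone configurations named fab1_cfg and fab2_cfg (switch prompts are illustrative), is to define and enable the same LSAN zone in each edge fabric:

fab1switch:admin> zonecreate "lsan_zone1", "10:00:00:00:98:23:12:11; 10:00:00:00:98:23:ab:cd"
fab1switch:admin> cfgadd "fab1_cfg", "lsan_zone1"
fab1switch:admin> cfgenable "fab1_cfg"

fab2switch:admin> zonecreate "lsan_zone1", "10:00:00:00:98:23:12:11; 10:00:00:00:98:23:ab:cd"
fab2switch:admin> cfgadd "fab2_cfg", "lsan_zone1"
fab2switch:admin> cfgenable "fab2_cfg"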

Once the LSAN zones are enabled, you can check the status of LSAN zones and their members from the FC router using the command lsanzoneshow -s:

Router:admin> lsanzoneshow -s
Fabric ID: 1 Zone Name: lsan_zone1
10:00:00:00:98:23:12:11 Exist
10:00:00:00:98:23:ab:cd Imported


Fabric ID: 2 Zone Name: lsan_zone1
10:00:00:00:98:23:12:11 Imported
10:00:00:00:98:23:ab:cd Exist

The output shows, for each routed fabric, which devices physically exist in that fabric ("Exist") and which are imported from another routed fabric ("Imported").

PowerPath Migration Enabler is a host-based software product that enables other technologies, such as array-based replication and virtualization, to eliminate application downtime during data migrations or virtualization implementations. PowerPath Migration Enabler allows EMC Open Replicator for Symmetrix and EMC Invista customers to eliminate downtime during data migrations from EMC storage to Symmetrix, and during virtualized deployments to Invista. PowerPath Migration Enabler, which leverages the same underlying technology as PowerPath, keeps arrays in sync during Open Replicator for Symmetrix data migrations with minimal impact on host resources. It also enables seamless deployment of Invista virtualized environments by encapsulating (bringing under its control) the volumes that will be virtualized. In addition, EMC PowerPath Migration Enabler offers the following benefits:

PowerPath Migration Enabler with Open Replicator for Symmetrix:
- Eliminates planned downtime
- Provides flexibility in time to perform migration
PowerPath Migration Enabler with EMC Invista:
- Eliminates planned downtime
- Eliminates need for data copy and additional storage for data migration
- I/O redirection allows administrators to "preview" a deployment without committing to redirection

EMC SRDF Modes


Conceptually, even operationally, SRDF is very similar to TimeFinder. About the only difference is that SRDF works across Symms, while TimeFinder works internally to one Symm. That difference, inter-Symm versus intra-Symm, means that SRDF operations can cover quite a bit of ground geographically. With the advent of geographically separated Symms, the integrity of the data from one Symm to the other becomes a concern. EMC provides a number of operational modes in which SRDF operates. The choice between these modes is a balancing act between how quickly the calling application gets an acknowledgement back and how sure you need to be that the data has been received on the remote Symm.

Synchronous mode

Synchronous mode basically means that the remote Symm must have the I/O in cache before the calling application receives the acknowledgement. Depending on the distance between Symms, this may have a significant impact on performance, which is the main reason EMC suggests this setup only in a campus (damn near colocated) environment.
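With EMC Solutions Enabler, the mode is set per device group; a sketch assuming a device group named prod_dg:

symrdf -g prod_dg set mode sync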

If you're particularly paranoid about ensuring that data on one Symm is on the other, you can enable the domino effect (I think you're supposed to be hearing suspense music in the background right about now...). Basically, the domino effect ensures that the R1 devices will become "not ready" if the R2 devices can't be reached for any reason, effectively shutting down the filesystem(s) until the problem can be resolved.
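Domino is an attribute you set on the device group as well; again assuming the illustrative prod_dg:

symrdf -g prod_dg set domino on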

Semi-synchronous mode

In semi-synchronous mode, the R2 devices are one (or fewer) write I/O out of sync with their R1 counterparts. The application gets the acknowledgement as soon as the first write I/O reaches the local cache. The second I/O isn't acknowledged until the first is in the remote cache. This should speed up the application relative to synchronous mode. It does, however, mean that the remote data might be a bit out of sync with the local Symm.
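Switching the same illustrative device group to semi-synchronous mode looks like this:

symrdf -g prod_dg set mode semi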

Adaptive Copy-Write Pending

This mode copies data over to the R2 volumes as quickly as it can without delaying the acknowledgement to the application. It is useful where some data loss is permissible and local performance is paramount.

There's a configurable skew parameter that sets the maximum allowable number of dirty (not yet copied) tracks. Once that number of pending tracks is reached, the system switches to the predetermined mode (typically semi-synchronous) until the remote Symm catches up, at which point it switches back to adaptive copy-write pending mode.
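A sketch of putting the illustrative prod_dg group into this mode, setting a skew of 1000 tracks (value illustrative), and checking the state:

symrdf -g prod_dg set mode acp_wp
symrdf -g prod_dg set acp_skew 1000
symrdf -g prod_dg query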


