The Clariion formats the disks in blocks. Each block written out to the disk is 520 bytes in size. Of the 520 bytes, 512 bytes are used to store the actual data written to the block. The remaining 8 bytes per block are used by the Clariion to store system information, such as a timestamp, parity information, and checksum data.
Element Size – The Element Size of a disk is determined when a LUN is bound to the RAID Group. In previous versions of Navisphere, a user could configure the Element Size from 4 blocks per disk to 256 blocks per disk. Now, the default Element Size in Navisphere is 128. This means that the Clariion writes 128 blocks of data to one physical disk in the RAID Group before moving to the next disk in the RAID Group, writes another 128 blocks to that disk, and so on.
Chunk Size – The Chunk Size is the amount of data the Clariion writes to a physical disk at a time. It is calculated by multiplying the Element Size by the amount of data per block written by the Clariion:
128 blocks x 512 bytes of data per block = 65,536 bytes of data per disk, which is 64 KB. So the Chunk Size, the amount of data the Clariion writes to a single disk before writing to the next disk in the RAID Group, is 64 KB.
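The same arithmetic can be checked quickly from a shell prompt; a minimal sketch of the calculation described above:

# Chunk Size = Element Size (blocks) x data bytes per block
echo $((128 * 512))    # prints 65536, i.e. 64 KB written per disk before the next disk is used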

In order to use the control functions of Solutions Enabler, you must create device groups and add/associate Symmetrix devices with the group. The following example shows how to create a device group, add a standard device to it, and associate two BCV devices with the group.
The following commands create a device group using the default type (regular). Next we add a device to the device group and assign it a logical name. Then we associate two BCV devices with the device group so we can switch back and forth between the BCV devices.

symdg create mygroup
symld -g mygroup add dev 000 STD000
symbcv -g mygroup associate dev 110 BCV000
symbcv -g mygroup associate dev 111 BCV001
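Before defining pairs, it can help to confirm what the group now contains. These are standard SYMCLI listing commands, shown here only as a sanity-check sketch against the group created above:

symdg list             # confirm mygroup exists and shows one standard device
symdg show mygroup     # lists STD000 plus the two associated BCV devices (110 and 111)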

NOTE: At this point you have only added/associated devices with a device group. These actions do not in any way describe which devices should actually be paired. This may be confusing, as the documentation is not very explicit. The fact is that the Symmetrix may already have BCV pair information about these devices, depending on how they were used in the past.
Now issue the commands to define the STD/BCV pair and actually synchronize the pair with a full establish.

symmir -g mygroup -full establish STD000 BCV dev 110
or
symmir -g mygroup -full establish STD000 BCV ld BCV000

This explicit definition of the STD device and the particular BCV device causes any existing pair information to be disregarded and uses the new information to create a pair. This is comparable to the older TimeFinder Command Line Interface "bcv -f filename", where the file "filename" consisted of one-line entries pairing STD devices with BCV devices. Finally, split this TimeFinder pair and synchronize the STD device with a different BCV device.
symmir -g mygroup split
symmir -g mygroup -full establish STD000 BCV dev 111
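To verify which BCV is currently paired with STD000 and whether the copy has completed, a standard query against the group can be used (a sketch; output varies by Solutions Enabler version):

symmir -g mygroup query    # shows the STD/BCV pairing and its state, e.g. SyncInProg or Synchronized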

Another method to establish pairs is the "-exact" option [available in V3.2-73-06 and higher]. The -full -exact options on the symmir command instruct SYMCLI to define the STD/BCV pairs in the same order they were entered into the device group.

symdg create mygroup
symld -g mygroup add dev 000 STD000
symld -g mygroup add dev 001 STD001
symbcv -g mygroup associate dev 110 BCV000
symbcv -g mygroup associate dev 111 BCV001
symmir -g mygroup -full -exact establish

This will pair the first STD device (STD000) with the first BCV (BCV000) entered into the device group, and pair the second STD device (STD001) with the second BCV (BCV001) entered into the device group.

EMC has traditionally protected failing drives using Dynamic Spares. A Dynamic Spare takes a copy of the data from a failing drive and acts as a temporary mirror of the data until the drive can be replaced. The data is then copied back to the new drive, at which point the Dynamic Spare returns to the spare drive pool. Two copy processes are required: one to copy data to the Dynamic Spare and one to copy data back to the new drive. The copy process may impact performance and, since the Dynamic Spare takes a mirror position, can affect other dynamic devices such as BCVs.


Permanent Sparing overcomes many of these limitations by copying the data only once, to a drive which replaces the failing drive and takes its original mirror position. Since the Permanent Spare does not take an additional mirror position, it will not affect TimeFinder mirror operations.

Permanent Sparing in some instances uses Dynamic Sparing as an interim step. This will be described together with the requirements for Permanent Sparing.


- Permanent Sparing is supported on all flavours of DMX.

- A Permanent Spare replaces the original failing drive and will take its original mirror position.

- Requires sufficient drives of the same type as the failing drives to be installed and configured as spares.

- Needs to be enabled in the bin file (can be done via SYMCLI; see the sketch after this list).

- Permanent Sparing will alter the back end bin and cannot be initiated when there is a Configuration Lock on the box.

- When enabled, it reduces the need for a CE to attend site for drive changes, since drives can be replaced in batches.

- A Permanent Spare will follow all configuration rules to ensure both performance and redundancy.

- Enginuity Code level must support the feature.
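As a rough illustration of the bin-file change mentioned above, the sketch below assumes the hot_swap_policy attribute of symconfigure controls Permanent Sparing on your Enginuity/Solutions Enabler level; the keyword and the example SID should be verified against your own documentation before use:

symconfigure -sid 1234 -cmd "set symmetrix hot_swap_policy=permanent;" preview
symconfigure -sid 1234 -cmd "set symmetrix hot_swap_policy=permanent;" commit
symdisk list -sid 1234 -failed    # check for failing drives awaiting replacement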


The following are ten rules provided by Brocade for zoning:
Rule 1: Type of Zoning (Hard, Soft, Hardware Enforced, Soft Porting) – If security is a
priority, then a Hard Zone-based architecture coupled with Hardware Enforcement is
recommended

Rule 2: Use of Aliases – Aliases are optional with zoning. Using aliases should force some
structure when defining your zones. Aliases will also aid future administrators of the zoned
fabric. Structure is the word that comes to mind here.

Rule 3: Does the site need an extra level of security that Secure Fabric OS provides? – Add
Secure Fabric OS into the Zone Architecture if extra security is required.

Rule 4: From where will the fabric be managed? – If a SilkWorm 12000 is part of the fabric,
then the user should use it to administer zoning within the fabric.

Rule 5: Interoperability Fabric – If the fabric includes a SilkWorm 12000 and the user needs to
support a third-party switch product, then only WWN zoning is available; QuickLoop and similar
features cannot be used.

Rule 6: Is the fabric going to have QLFA or QL in it? – If the user is running Brocade Fabric OS
v4.0, then there are a couple of things to consider before creating and setting up QLFA zones:
QuickLoop Zoning
QL/QL zones cannot run on switches running Brocade Fabric OS v4.0. Brocade Fabric
OS v4.0 can still manage (create, remove, update) QL zones on any non-v4.0 switch.
QuickLoop Fabric Assist
Brocade Fabric OS v4.0 cannot have a Fabric Assist host directly connected to it.
However, Brocade Fabric OS v4.0 can still be part of a Fabric Assist zone if a Fabric
Assist host is connected to a non-v4.0 switch.

Rule 7: Testing a (new) zone configuration. – Before implementing a zone, the user should run
the Zone Analyzer and isolate any possible problems. This is especially useful as fabrics
increase in size.

Rule 8: Prep work needed before enabling/changing a zone configuration. – Before enabling or
changing a fabric configuration, the user should verify that no one is issuing I/O in the zone that
will change. Such a change can have a serious impact within the fabric, such as databases breaking
or node panics. The same goes for disks that are mounted: if the user changes a zone while a node
is mounting the storage in question, the storage may “vanish” due to the zone change, which may
cause nodes to panic or applications to break. Changes to the zone should be done during
preventative maintenance; most sites have an allocated time each day to perform maintenance
work. (A CLI sketch of a zone change and the follow-up checks appears after Rule 10.)

Rule 9: Potential post work requirements after enabling/changing a zone configuration. – After
changing or enabling a zone configuration, the user should confirm that nodes and storage are
able to see and access one another. Depending on the platform, the user may need to reboot one
or more nodes in the fabric with the new changes to the zone.

Rule 10: LUN masking in general. – LUN Masking should be used in conjunction with fabric
zoning for maximum effectiveness.
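Putting Rules 2, 8, and 9 together, here is a minimal FOS CLI sketch of an alias-based zone change and the follow-up checks; the alias, zone, configuration names, and WWNs are illustrative only:

alicreate "host1_hba0", "10:00:00:00:c9:12:34:56"    # Rule 2: one alias per device WWN
alicreate "symm_fa1a", "50:06:04:8a:cc:aa:bb:01"
zonecreate "z_host1_symm", "host1_hba0; symm_fa1a"
cfgadd "prod_cfg", "z_host1_symm"
cfgenable "prod_cfg"        # Rule 8: run during the maintenance window, with I/O quiesced
cfgactvshow                 # Rule 9: confirm the effective configuration
nsshow                      # Rule 9: confirm devices are logged in and visible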

An LSAN is a Logical Storage Area Network that spans multiple physical fabrics and allows specified devices from these autonomous fabrics to communicate with each other without merging the physical fabrics. In other words, it is a logical network spanning multiple physical fabrics, in which specified devices communicate with each other through an FC router while the physical fabrics remain separate.
- A LSAN zone is a traditional zone with a special naming convention.
- Zone names must start with “LSAN_” or “lsan_” or “LSan_”
- LSAN zones are architecturally compatible with FOS and M-EOS.
- FC Router uses LSAN zones to determine which devices need to be exported/imported into which routed fabrics.
- LSAN zones must be configured in all fabrics where the shared physical devices exist.
- The router performs zoning enforcement for edge fabrics at the ingress Router EX Port.

LSAN Implementation Rules:

- LSAN zone members must be defined using the device port WWN; zone members, including aliases, need to resolve to WWPNs.
- LSAN zone names on the routed fabrics do not need to be identical, but identical names are recommended for ease of administration and troubleshooting.
- LSAN zones in routed fabrics sharing devices are not required to have identical membership, but the shared devices must exist in both fabrics' LSAN zones.
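Because the same LSAN zone has to exist in every edge fabric that shares the devices, creating one is ordinary zoning with the lsan_ prefix. A sketch using the two WWPNs from the lsanzoneshow output below; the configuration name is illustrative:

# run in each edge fabric that shares the devices
zonecreate "lsan_zone1", "10:00:00:00:98:23:12:11; 10:00:00:00:98:23:ab:cd"
cfgadd "edge_cfg", "lsan_zone1"
cfgenable "edge_cfg"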

Once the LSAN zones are enabled, you can check the status of LSAN zones and their members from the FC Router using the command lsanzoneshow -s.

Router:admin> lsanzoneshow -s
Fabric ID: 1 Zone Name: lsan_zone1
10:00:00:00:98:23:12:11 Exist
10:00:00:00:98:23:ab:cd Imported


Fabric ID: 2 Zone Name: lsan_zone1
10:00:00:00:98:23:12:11 Imported
10:00:00:00:98:23:ab:cd Exist

The output shows which devices exist locally ("Exist") and which are imported ("Imported") in each of the routed fabrics.

PowerPath Migration Enabler is a host-based software product that enables other technologies, such as array-based replication and virtualization, to eliminate application downtime during data migrations or virtualization implementations. PowerPath Migration Enabler allows EMC Open Replicator for Symmetrix and EMC Invista customers to eliminate downtime during data migrations from EMC storage to Symmetrix, and during virtualized deployments to Invista. PowerPath Migration Enabler, which leverages the same underlying technology as PowerPath, keeps arrays in sync during Open Replicator for Symmetrix data migrations, with minimal impact to host resources. It also enables seamless deployment of Invista virtualized environments by encapsulating (bringing under its control) the volumes that will be virtualized. In addition, EMC PowerPath boasts the following benefits:

PowerPath Migration Enabler with Open Replicator for Symmetrix:
- Eliminates planned downtime
- Provides flexibility in time to perform migration
PowerPath Migration Enabler with EMC Invista:
- Eliminates planned downtime
- Eliminates need for data copy and additional storage for data migration
- I/O redirection allows Administrators to “preview” deployment without committing to redirection
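For orientation, a hedged sketch of what an Open Replicator migration looks like from the host with the powermig CLI; the handle number and pseudo-device names are illustrative, and the exact option spelling should be checked against your PowerPath Migration Enabler release:

powermig setup -techType OR -src emcpower10 -tgt emcpower22   # define the migration session
powermig sync -handle 1          # start the bulk copy and keep the devices in sync
powermig query -handle 1         # monitor progress
powermig selectTarget -handle 1  # redirect I/O to the target to "preview" the cutover
powermig commit -handle 1        # make the target permanent
powermig cleanup -handle 1       # remove the session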

EMC SRDF Modes


Conceptually, even operationally, SRDF is very similar to TimeFinder. About the only difference is that SRDF works across Symms, while TimeFinder works internally to one Symm. That difference, inter-Symm vs. intra-Symm, means that SRDF operations can cover quite a bit of ground geographically. With the advent of geographically separated Symms, the integrity of the data from one Symm to the other becomes a concern. EMC has a number of operational modes in which SRDF operates. The choice between these operational modes is a balancing act between how quickly the calling application gets an acknowledgement back versus how sure you need to be that the data has been received on the remote Symm.

Synchronous mode

Synchronous mode basically means that the remote Symm must have the I/O in cache before the calling application receives the acknowledgement. Depending on the distance between Symms, this may have a significant impact on performance, which is the main reason that EMC suggests this setup in a campus (damn near colocated) environment only.

If you're particularly paranoid about ensuring data on one Symm is on the other, you can enable the Domino effect (I think you're supposed to be hearing suspense music in the background right about now...). Basically, the Domino effect ensures that the R1 devices will become "not ready" if the R2 devices can't be reached for any reason, effectively shutting down the filesystem(s) until the problem can be resolved.

Semi-synchronous mode

In semi-synchronous mode, the R2 devices are one (or less) write I/O out of sync with their R1 device counterparts. The application gets the acknowledgement as soon as the first write I/O gets to the local cache. The second I/O isn't acknowledged until the first is in the remote cache. This should speed up the application over the synchronous mode. It does, however, mean that your data might be a bit out of sync with the local symm.

Adaptive Copy-Write Pending

This mode copies data over to the R2 volumes as quickly as it can; however, it doesn't delay the acknowledgement to the application. This mode is useful where some data loss is permissible and local performance is paramount.

There's a configurable skew parameter that lists the maximum allowable dirty tracks. Once that number of pending I/Os is reached, the system switches to the predetermined mode (probably semi-synchronous) until the remote symm catches up. At that point, it switches back to adaptive copy-write pending mode.
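The mode is selected per device group with SYMCLI. A sketch assuming an RDF1 device group named myrdfgrp; the acp_skew syntax in particular varies by Solutions Enabler release, so treat it as an assumption to verify:

symrdf -g myrdfgrp set mode sync       # synchronous
symrdf -g myrdfgrp set domino on       # optional: R1s go not-ready if the R2s are unreachable
symrdf -g myrdfgrp set mode semi       # semi-synchronous
symrdf -g myrdfgrp set mode acp_wp     # adaptive copy - write pending
symrdf -g myrdfgrp set acp_skew 1000   # max dirty tracks before falling back (assumed syntax)
symrdf -g myrdfgrp query               # confirm the mode and outstanding invalid tracks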


