A stripe element size of 128 means that the Clariion will write 128 blocks of data to one physical disk in the RAID Group before moving to the next disk in the RAID Group and writing another 128 blocks to that disk, and so on.
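A minimal Python sketch of that layout, assuming a hypothetical five-disk RAID Group and ignoring parity rotation (the disk count and function name are illustrative, not fixed by the text above):

    STRIPE_ELEMENT = 128   # blocks written to one disk before moving to the next
    DISKS = 5              # hypothetical RAID Group size

    def locate(lba):
        """Map a logical block address to (disk, offset) under striping."""
        element = lba // STRIPE_ELEMENT     # which 128-block chunk this LBA is in
        disk = element % DISKS              # chunks round-robin across the disks
        stripe = element // DISKS           # full stripes already written
        return disk, stripe * STRIPE_ELEMENT + lba % STRIPE_ELEMENT

    print(locate(0))     # (0, 0)   -- blocks 0-127 land on disk 0
    print(locate(128))   # (1, 0)   -- block 128 starts disk 1
    print(locate(640))   # (0, 128) -- wraps back to disk 0, next stripe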
Tier "0" is not new in storage market but for implementation purposes it has been difficult to accommodate because it requires best performance and lowest latency. Enterprise Flash disks (Solid State Disks) capable to meet this requirement. It is possible to get more performance for company most critical applications. The performance can be gained through using Flash drives supported in VMAX and DMX-4 systems. Read More →
EMC has traditionally protected failing drives using Dynamic Spares. A Dynamic Spare takes a copy of the data from a failing drive and acts as a temporary mirror of the data until the drive can be replaced. The data is then copied back to the new drive, at which point the Dynamic Spare returns to the spare drive pool. Two copy processes are required: one to copy data to the Dynamic Spare and one to copy data back to the new drive. The copy process may impact performance and, since the Dynamic Spare takes a mirror position, can affect other dynamic devices such as BCVs.
Permanent Sparing overcomes many of these limitations by copying the data only once, to a drive that replaces the failing drive and takes its original mirror position. Since the Permanent Spare does not take an additional mirror position, it will not affect TimeFinder mirror operations.
Permanent Sparing in some instances uses Dynamic Sparing as an interim step. This will be described together with the requirements for Permanent Sparing.
- Permanent Sparing is supported on all flavours of DMX.
- A Permanent Spare replaces the original failing drive and will take its original mirror position.
- Requires sufficient drives of the same type as the failing drives to be installed and configured as spares.
- Needs to be enabled in the binfile (can be done via Symcli).
- Permanent Sparing will alter the back end bin and cannot be initiated when there is a Configuration Lock on the box.
- When enabled, it reduces the need for a CE to attend site for drive changes, since drives can be replaced in batches.
- A Permanent Spare will follow all configuration rules to ensure both performance and redundancy.
- Enginuity Code level must support the feature.
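To make the difference in copy traffic concrete, here is a minimal Python sketch contrasting the two sparing schemes described above; the device names and functions are hypothetical:

    def dynamic_spare(failing, spare, replacement):
        """Dynamic Sparing: two copy passes; the spare holds a temporary mirror position."""
        copies = [(failing, spare)]          # pass 1: failing drive -> spare
        # ... the failed drive is physically replaced ...
        copies.append((spare, replacement))  # pass 2: spare -> new drive
        return copies                        # the spare then returns to the pool

    def permanent_spare(failing, spare):
        """Permanent Sparing: one copy pass; the spare keeps the original mirror position."""
        return [(failing, spare)]

    print(len(dynamic_spare("dev_1A", "spare_7C", "dev_1A_new")))  # 2 copy passes
    print(len(permanent_spare("dev_1A", "spare_7C")))              # 1 copy pass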
Brocade provides the following ten rules for zoning:
Rule 1: Type of Zoning (Hard, Soft, Hardware Enforced, Soft Porting) – If security is a priority, then a Hard Zone-based architecture coupled with Hardware Enforcement is recommended.
Rule 2: Use of Aliases – Aliases are optional with zoning, but using them forces some structure when defining zones and will aid future administrators of the zoned fabric.
Rule 3: Does the site need the extra level of security that Secure Fabric OS provides? – Add Secure Fabric OS into the zone architecture if extra security is required.
Rule 4: From where will the fabric be managed? – If a SilkWorm 12000 is part of the fabric, then the user should use it to administer zoning within the fabric.
Rule 5: Interoperability Fabric – If the fabric includes a SilkWorm 12000 and the user needs to support a third-party switch product, then only WWN zoning is possible (no QuickLoop, etc.).
Rule 6: Is the fabric going to have QLFA or QL in it? – If the user is running Brocade Fabric OS v4.0, there are a couple of things to consider before creating and setting up QLFA zones:
• QuickLoop Zoning: QL/QL zones cannot run on switches running Brocade Fabric OS v4.0. Brocade Fabric OS v4.0 can still manage (create, remove, update) QL zones on any non-v4.0 switch.
• QuickLoop Fabric Assist: Brocade Fabric OS v4.0 cannot have a Fabric Assist host directly connected to it. However, Brocade Fabric OS v4.0 can still be part of a Fabric Assist zone if the Fabric Assist host is connected to a non-v4.0 switch.
Rule 7: Testing a (new) zone configuration – Before implementing a zone, the user should run the Zone Analyzer and isolate any possible problems. This is especially useful as fabrics increase in size.
Rule 8: Prep work needed before enabling/changing a zone configuration – Before enabling or changing a fabric configuration, the user should verify that no one is issuing I/O in the zone that will change. Changing a zone under active I/O can have a serious impact within the fabric, such as databases breaking or node panics. The same goes for mounted disk(s): if the user changes a zone while a node is mounting the storage in question, the storage may “vanish” due to the zone change, which may cause nodes to panic or applications to break. Changes to zones should be made during preventative maintenance; most sites have an allocated time each day to perform maintenance work.
Rule 9: Potential post work requirements after enabling/changing a zone configuration – After changing or enabling a zone configuration, the user should confirm that nodes and storage can see and access one another. Depending on the platform, the user may need to reboot one or more nodes in the fabric for the zone changes to take effect.
Rule 10: LUN masking in general – LUN masking should be used in conjunction with fabric zoning for maximum effectiveness.
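As an illustration of Rules 2 and 10, here is a toy Python model of WWN zoning with aliases; the alias names, zone names, and WWNs are all made up, and a real fabric would of course be configured through Fabric OS tooling rather than anything like this:

    # Hypothetical aliases (Rule 2): human-readable names for WWNs.
    aliases = {
        "Host1_HBA0": "10:00:00:00:c9:2b:5c:01",
        "Array_SPA0": "50:06:01:60:44:60:08:f2",
        "Array_SPB0": "50:06:01:68:44:60:08:f2",
    }

    # Zones group aliases; members of the same zone may talk to each other.
    zones = {
        "Z_Host1_SPA0": ["Host1_HBA0", "Array_SPA0"],
        "Z_Host1_SPB0": ["Host1_HBA0", "Array_SPB0"],
    }

    def can_see(wwn_a, wwn_b):
        """Two WWNs can communicate only if some zone contains both."""
        by_wwn = {wwn: name for name, wwn in aliases.items()}
        a, b = by_wwn[wwn_a], by_wwn[wwn_b]
        return any(a in members and b in members for members in zones.values())

    # The host sees SPA0, but the two array ports are hidden from each other.
    print(can_see("10:00:00:00:c9:2b:5c:01", "50:06:01:60:44:60:08:f2"))  # True
    print(can_see("50:06:01:60:44:60:08:f2", "50:06:01:68:44:60:08:f2"))  # False

Zoning controls visibility at the fabric level; LUN masking (Rule 10) then restricts which LUNs a visible port may actually access.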
Conceptually, even operationally, SRDF is very similar to TimeFinder. About the only difference is that SRDF works across Symms, while TimeFinder works internally to one Symm. That difference, inter-Symm vs. intra-Symm, means that SRDF operations can cover quite a bit of ground geographically. With the advent of geographically separated Symms, the integrity of the data from one Symm to the other becomes a concern. EMC has a number of operational modes in which SRDF operates. The choice between these modes is a balancing act between how quickly the calling application gets an acknowledgement back and how sure you need to be that the data has been received on the remote Symm.
Synchronous mode
Synchronous mode basically means that the remote symm must have the I/O in cache before the calling application receives the acknowledgement. Depending on the distance between symms, this may have a significant impact on performance, which is the main reason that EMC suggests this setup only for campus (damn near colocated) environments.
If you're particularly paranoid about ensuring that data on one symm is on the other, you can enable the Domino effect (I think you're supposed to be hearing suspense music in the background right about now...). Basically, the Domino effect ensures that the R1 devices will become "not ready" if the R2 devices can't be reached for any reason, effectively shutting down the filesystem(s) until the problem can be resolved.
Semi-synchronous mode
In semi-synchronous mode, the R2 devices are one (or fewer) write I/Os out of sync with their R1 counterparts. The application gets the acknowledgement as soon as the first write I/O reaches the local cache; the second I/O isn't acknowledged until the first is in the remote cache. This should speed up the application relative to synchronous mode. It does, however, mean that the remote copy might be a bit out of sync with the local symm.
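A minimal Python sketch of where the host acknowledgement happens in the two modes described so far; caches are modeled as plain lists and every name is invented:

    def write_synchronous(io, local_cache, remote_cache):
        """Sync: the remote symm must have the I/O in cache before the ack."""
        local_cache.append(io)
        remote_cache.append(io)   # wait for the remote symm first...
        return "ack"              # ...only then acknowledge the application

    def write_semi_synchronous(io, local_cache, remote_cache, in_flight):
        """Semi-sync: ack after the local cache write; at most one I/O in flight."""
        if in_flight:                             # a previous write is pending:
            remote_cache.append(in_flight.pop())  # it must reach the remote cache
        local_cache.append(io)                    # before this one is acknowledged
        in_flight.append(io)                      # ship to the remote in background
        return "ack"

    local, remote, pending = [], [], []
    write_semi_synchronous("io1", local, remote, pending)  # acked right away
    write_semi_synchronous("io2", local, remote, pending)  # waits for io1 remotely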
Adaptive Copy-Write Pending
This mode copies data over to the R2 volumes as quickly as it can but doesn't delay the acknowledgement to the application. This mode is useful where some data loss is permissible and local performance is paramount.
There's a configurable skew parameter that sets the maximum allowable number of dirty tracks. Once that number of pending I/Os is reached, the system switches to the predetermined mode (probably semi-synchronous) until the remote symm catches up, at which point it switches back to adaptive copy-write pending mode.
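That switching behaviour can be sketched as a simple state check in Python; the skew value and names here are illustrative, not actual Symmetrix parameters:

    SKEW = 65_535  # hypothetical max allowable dirty (not-yet-copied) tracks

    def choose_mode(dirty_tracks, current_mode):
        """Fall back when the skew is exceeded; return once caught up."""
        if current_mode == "adaptive_copy_wp" and dirty_tracks > SKEW:
            return "semi_synchronous"    # too far behind: throttle new writes
        if current_mode == "semi_synchronous" and dirty_tracks == 0:
            return "adaptive_copy_wp"    # remote symm has caught up
        return current_mode

    mode = "adaptive_copy_wp"
    mode = choose_mode(70_000, mode)  # over the skew -> "semi_synchronous"
    mode = choose_mode(0, mode)       # caught up     -> "adaptive_copy_wp"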