

The Clariion formats its disks in blocks. Each block written out to the disk is 520 bytes in size. Of those 520 bytes, 512 bytes store the actual data written to the block. The remaining 8 bytes per block are used by the Clariion to store system information, such as a timestamp, parity information, and checksum data.
Element Size – The Element Size of a disk is determined when a LUN is bound to the RAID Group. In previous versions of Navisphere, a user could configure the Element Size from 4 blocks per disk to 256 blocks per disk. Now, the default Element Size in Navisphere is 128. This means that the Clariion writes 128 blocks of data to one physical disk in the RAID Group before moving to the next disk in the RAID Group and writing another 128 blocks to that disk, and so on.
Chunk Size – The Chunk Size is the amount of Data the Clariion writes to a physical disk at a time. The Chunk Size is calculated by multiplying the Element Size by the amount of Data per block written by the Clariion.
128 blocks x 512 bytes of data per block = 65,536 bytes of data per disk, which is equal to 64 KB. So the Chunk Size, the amount of data the Clariion writes to a single disk before writing to the next disk in the RAID Group, is 64 KB.
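To make the arithmetic concrete, here is a minimal sketch in Python using the default values quoted above; the numbers are illustrative and are not read from an array.

    BLOCK_SIZE_ON_DISK = 520   # bytes physically written per block (512 data + 8 system)
    DATA_PER_BLOCK = 512       # bytes of user data per block
    ELEMENT_SIZE = 128         # blocks written to one disk before moving to the next

    chunk_size = ELEMENT_SIZE * DATA_PER_BLOCK
    print(chunk_size)          # 65536 bytes, i.e. 64 KB written per disk per pass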

EMC has traditionally protected failing drives using Dynamic Spares. A Dynamic Spare takes a copy of the data from a failing drive and acts as a temporary mirror of the data until the drive can be replaced. The data is then copied back to the new drive, at which point the Dynamic Spare returns to the spare drive pool. Two copy processes are required: one to copy data to the Dynamic Spare and one to copy data back to the new drive. The copy process may impact performance and, since the Dynamic Spare takes a mirror position, can affect other dynamic devices such as BCVs.


Permanent Sparing overcomes many of these limitations by copying the data only once, to a drive that replaces the failing drive and takes its original mirror position. Since the Permanent Spare does not take an additional mirror position, it will not affect TimeFinder mirror operations.

Permanent Sparing in some instances uses Dynamic Sparing as an interim step. This will be described together with the requirements for Permanent Sparing.


- Permanent Sparing is supported on all flavours of DMX.

- A Permanent Spare replaces the original failing drive and will take its original mirror position.

- Requires sufficient drives of the same type as the failing drives to be installed and configured as Spares.

- Needs to be enabled in the binfile (can be done via Symcli).

- Permanent Sparing will alter the back end bin and cannot be initiated when there is a Configuration Lock on the box.

- When enabled, it reduces the need for a CE to attend site for drive changes, since drives can be replaced in batches.

- A Permanent Spare will follow all configuration rules to ensure both performance and redundancy.

- Enginuity Code level must support the feature.


PowerPath Migration Enabler is a host-based software product that enables other technologies, such as array-based replication and virtualization, to eliminate application downtime during data migrations or virtualization implementations. PowerPath Migration Enabler allows EMC Open Replicator for Symmetrix and EMC Invista customers to eliminate downtime during data migrations from EMC storage to Symmetrix, and during virtualized deployments to Invista. PowerPath Migration Enabler, which leverages the same underlying technology as PowerPath, keeps arrays in sync during Open Replicator for Symmetrix data migrations with minimal impact on host resources. It also enables seamless deployment of Invista virtualized environments by encapsulating (bringing under its control) the volumes that will be virtualized. In addition, EMC PowerPath offers the following benefits:

PowerPath Migration Enabler with Open Replicator for Symmetrix:
• Eliminates planned downtime
• Provides flexibility in time to perform migration
PowerPath Migration Enabler with EMC Invista:
• Eliminates planned downtime
• Eliminates need for data copy and additional storage for data migration
• I/O redirection allows Administrators to “preview” deployment without committing to redirection

EMC SRDF Mode


Conceptually, even operationally, SRDF is very similar to TimeFinder. About the only difference is that SRDF works across Symms, while TimeFinder works internally to one Symm. That difference, inter-Symm vs. intra-Symm, means that SRDF operations can cover quite a bit of ground geographically. With the advent of geographically separated Symms, the integrity of the data from one Symm to the other becomes a concern. EMC has a number of operational modes in which SRDF operates. The choice between these operational modes is a balancing act between how quickly the calling application gets an acknowledgement back versus how sure you need to be that the data has been received on the remote Symm.

Synchronous mode

Synchronous mode basically means that the remote Symm must have the I/O in cache before the calling application receives the acknowledgement. Depending on the distance between Symms, this may have a significant impact on performance, which is the main reason EMC suggests this setup in a campus (damn near colocated) environment only.

If you're particularly paranoid about ensuring data on one Symm is on the other, you can enable the Domino effect (I think you're supposed to be hearing suspense music in the background right about now...). Basically, the domino effect ensures that the R1 devices will become "not ready" if the R2 devices can't be reached for any reason - effectively shutting down the filesystem(s) until the problem can be resolved.

Semi-synchronous mode

In semi-synchronous mode, the R2 devices are one (or less) write I/O out of sync with their R1 device counterparts. The application gets the acknowledgement as soon as the first write I/O gets to the local cache. The second I/O isn't acknowledged until the first is in the remote cache. This should speed up the application over the synchronous mode. It does, however, mean that your data might be a bit out of sync with the local symm.

Adaptive Copy-Write Pending

This mode copies data over to the R2 volumes as quickly as it can, but doesn't delay the acknowledgement to the application. This mode is useful where some data loss is permissible and local performance is paramount.

There's a configurable skew parameter that lists the maximum allowable dirty tracks. Once that number of pending I/Os is reached, the system switches to the predetermined mode (probably semi-synchronous) until the remote symm catches up. At that point, it switches back to adaptive copy-write pending mode.
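As a rough illustration of that skew behaviour, here is a sketch in Python. It is not EMC's implementation, and the skew value is a made-up example.

    MAX_SKEW = 65535  # hypothetical maximum allowable dirty tracks (the configurable skew)

    def choose_mode(dirty_tracks, current_mode):
        # Fall back to the predetermined mode once the skew is exceeded...
        if current_mode == "adaptive_copy_wp" and dirty_tracks > MAX_SKEW:
            return "semi_synchronous"
        # ...and return to adaptive copy once the remote Symm has caught up.
        if current_mode == "semi_synchronous" and dirty_tracks == 0:
            return "adaptive_copy_wp"
        return current_mode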



Flash Drives in DMX


EMC announced Flash drive support in the CX-4 (the new generation CLARiiON) and the DMX-4, introducing support for Tier 0 with Flash drives. Flash drives provide maximum performance for latency-sensitive applications. Flash drives, also referred to as solid state drives (SSDs), contain no moving parts and appear as standard Fibre Channel drives to existing Symmetrix management tools, allowing administrators to manage Tier 0 without special processes or custom tools. Tier 0 Flash storage is ideally suited for applications with high transaction rates and those requiring the fastest possible retrieval and storage of data, such as currency exchange and electronic trading systems, or real-time data feed processing.

A Symmetrix DMX-4 with Flash drives can deliver single-millisecond application response times and up to 30 times more IOPS than traditional 15,000 rpm Fibre Channel disk drives. Additionally, because there are no mechanical components, Flash drives require up to 98 percent less energy per IOPS than traditional disk drives. Database acceleration is one example for Flash drive performance impact. Flash drive storage can be used to accelerate online transaction processing (OLTP), accelerating performance with large indices and frequently accessed database tables. Examples of OLTP applications include Oracle and DB2 databases, and SAP R/3. Flash drives can also improve performance in batch processing and shorten batch processing windows.

Flash drive performance will help any application that needs the lowest latency possible. Examples include

· Algorithmic trading

· Currency exchange and arbitrage

· Trade optimization

· Realtime data/feed processing

· Contextual web advertising

· Other realtime transaction systems

· Data modeling

Flash drives are most beneficial with random read misses (RRM). If the RRM percentage is low, Flash drives may show less benefit since writes and sequential reads/writes already leverage Symmetrix cache to achieve the lowest possible response times. The local EMC SPEED Guru can do a performance analysis of the current workload to determine how the customer may benefit from Flash drives. Write response times of long distance SRDF/S replication could be high relative to response times from Flash drives. Flash drives cannot help with reducing response time due to long distance replication. However, read misses still enjoy low response times.

Flash drives can be used as clone source and target volumes. Flash drives can be used as SNAP source volumes. Virtual LUN Migration supports migrating volumes to and from Flash drives. Flash drives can be used with SRDF/s and SRDF/A. Metavolumes can be configured on Flash drives as long as all of the logicals in the metagroup are on Flash drives.

Limitations and Restrictions of Flash drives:

Due to the new nature of the technology, not all Symmetrix functions are currently supported on Flash drives. The following is a list of the current limitations and restrictions of Flash drives.

• Delta Set Extension and SNAP pools cannot be configured on Flash drives.
• RAID 1 and RAID 6 protection, as well as unprotected volumes, are currently not supported with Flash drives.
• TimeFinder/Mirror is currently not supported with Flash drives.
• iSeries volumes currently cannot be configured on Flash drives.
• Open Replicator of volumes configured on Flash drives is not currently supported.
• Secure Erase of Flash drives is not currently supported.
• Compatible Flash for z/OS and Compatible Native Flash for z/OS are not currently supported.
• TPF is not currently supported.

AX150
Dual storage processor enclosure with Fibre-Channel interface to host and SATA-2 disks.
AX150i
Dual storage processor enclosure with iSCSI interface to host and SATA-2 disks.
AX100
Dual storage processor enclosure with Fibre-Channel interface to host and SATA-1 disks.
AX100SC
Single storage processor enclosure with Fibre-Channel interface to host and SATA-1 disks.
AX100i
Dual storage processor enclosure with iSCSI interface to host and SATA-1 disks.
AX100SCi
Single storage processor enclosure with iSCSI interface to host and SATA-1 disks.
CX3-80
SPE2 - Dual storage processor (SP) enclosure with four Fibre-Channel front-end ports and four back-end ports per SP.
CX3-40
SP3 - Dual storage processor (SP) enclosure with two Fibre Channel front-end ports and two back-end ports per SP.
CX3-40f
SP3 - Dual storage processor (SP) enclosure with four Fibre Channel front -end ports and four back-end ports per SP
CX3-40c
SP3 - Dual storage processor (SP) enclosure with four iSCSI front-end ports, two Fibre Channel front -end ports, and two back-end ports per SP.
CX3-20
SP3 - Dual storage processor (SP) enclosure with two Fibre Channel front-end ports and a single back-end port per SP.
CX3-20f
SP3 - Dual storage processor (SP) enclosure with six Fibre Channel front-end ports, and a single back-end port per SP.
CX3-20c
SP3 - Dual storage processor (SP) enclosure with four iSCSI front-end ports, two Fibre Channel front-end ports, and a single back-end port per SP.
CX600, CX700
SPE - based storage system with model CX600/CX700 SP, Fibre-Channel interface to host, and Fibre Channel disks
CX500, CX400, CX300, CX200
DPE2 - based storage system with model CX500/CX400/CX300/CX200 SP, Fibre-Channel interface to host, and Fibre Channel disks.
CX2000LC
DPE2- based storage system with one model CX200 SP, one power supply (no SPS),Fibre-Channel interface to host, and Fibre Channel disks.
C1000 Series
10-slot storage system with SCSI interface to host and SCSI disks
C1900 Series
Rugged 10-slot storage system with SCSI interface to host and SCSI disks
C2x00 Series
20-slot storage system with SCSI interface to host and SCSI disks
C3x00 Series
30-slot storage system with SCSI or Fibre Channel interface to host and SCSI disks
FC50xx Series
DAE with Fibre Channel interface to host and Fibre Channel disks
FC5000 Series
JBOD with Fibre Channel interface to host and Fibre Channel disks
FC5200/5300 Series
iDAE -based storage system with model 5200 SP, Fibre Channel interface to host, and Fibre channel disks
FC5400/5500 Series
DPE -based storage system with model 5400 SP, Fibre Channel interface to host, and Fibre channel disks
FC5600/5700 Series
DPE -based storage system with model 5600 SP, Fibre Channel interface to host, and Fibre Channel disks
FC4300/4500 Series
DPE -based storage system with either model 4300 SP or model 4500 SP, Fibre Channel interface to host, and Fibre Channel disks
FC4700 Series
DPE -based storage system with model 4700 SP, Fibre Channel interface to host, and Fibre Channel disks
IP4700 Series
Rackmount Network-Attached storage system with 4 Fibre Channel host ports and Fibre Channel disks.




Here we are looking at only three possible ways in which a host can be attached to a Clariion. From talking with customers in class, these seem to be the three most common ways in which the hosts are attached.
The key points of the slide are:
1. The LUN (the disk space created on the Clariion that will eventually be assigned to the host) is owned by one of the Storage Processors, not both.
2. The host needs to be physically connected via fibre, either directly attached or through a switch.




CONFIGURATION ONE:

In Configuration One, we see a host that has a single Host Bus Adapter (HBA), attached to a single switch. From the Switch, the cables run once to SP A, and once to SP B. The reason this host is zoned and cabled to both SPs is in the event of a LUN trespass. In Configuration One, if SP A would go down, reboot, etc...the LUN would trespass to SP B. Because the host is cabled and zoned to SP B, the host would still have access to the LUN via SP B. The problem with this configuration is the list of Single Point(s) of Failure. In the event that you would lose the HBA, the Switch, or a connection between the HBA and the Switch (the fibre, GBIC on the switch, etc...), you lose access to the Clariion, thereby losing access to your LUNs.

CONFIGURATION TWO:

In Configuration Two, we have a host with two Host Bus Adapters. HBA1 is attached to a switch, and from there the host is zoned and cabled to SP B. HBA2 is attached to a separate switch, and from there the host is zoned and cabled to SP A. The path from HBA2 to SP A is shown as the "Active Path" because that is the path data will take from the host to the LUN, as the LUN is owned by SP A. The path from HBA1 to SP B is shown as the "Standby Path" because the LUN doesn't belong to SP B. The only time the host would use the "Standby Path" is in the event of a LUN Trespass. The advantage of Configuration Two over Configuration One is that there is no single point of failure.
Now, let's say we install PowerPath on the host. With PowerPath, the host has the potential to do two things. First, it allows the host to initiate the Trespass of the LUN. With PowerPath on the host, if there is a path failure (HBA gone bad, switch down, etc...), the host will issue the trespass command to the SPs, and the SPs will move the LUN, temporarily, from SP A to SP B. The second advantage of PowerPath on a host, is that it allows the host to 'Load Balance' data from the host. Again, this has nothing to do with load balancing the Clariion SPs. We will get there later. However, in Configuration Two, we only have one connection from the host to SP A. This is the only path the host has and will use to move data for this LUN.

CONFIGURATION THREE:

In Configuration Three, hardware-wise, we have the same as Configuration Two. However, notice that we have a few more cables running from the switches to the Storage Processors. HBA1 is connected to its switch and zoned and cabled to both SP A and SP B. HBA2 is connected to its switch and zoned and cabled to both SP A and SP B. This gives HBA1 and HBA2 an 'Active Path' to SP A, and HBA1 and HBA2 'Standby Paths' to SP B. Because of this, the host can now route data down each active path to the Clariion, giving the host "Load Balancing" capabilities. Also, the only time a LUN should trespass from one SP to another is if there is a Storage Processor failure. If the host were to lose HBA1, it still has HBA2 with an active path to the Clariion. The same goes for a switch failure or connection failure.
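To make the trespass and load-balancing behaviour described in Configurations Two and Three concrete, here is a toy model in Python. It is not PowerPath code; the path names and the round-robin policy are assumptions chosen purely for illustration.

    from dataclasses import dataclass

    @dataclass
    class Path:
        name: str
        up: bool = True

    class HostPaths:
        def __init__(self, active, standby):
            self.active = list(active)      # paths to the SP that currently owns the LUN
            self.standby = list(standby)    # paths to the peer SP
            self._rr = 0

        def pick_path(self):
            alive = [p for p in self.active if p.up]
            if not alive:
                # All active paths gone: trespass the LUN to the peer SP.
                if not any(p.up for p in self.standby):
                    raise IOError("no surviving path to either SP")
                self.active, self.standby = self.standby, self.active
                alive = [p for p in self.active if p.up]
            path = alive[self._rr % len(alive)]   # simple round-robin load balancing
            self._rr += 1
            return path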

The purpose of a MetaLUN is that a Clariion can grow the size of a LUN on the fly. Let's say that a host is running out of space on a LUN. From Navisphere, we can "Expand" a LUN by adding more LUNs to the LUN that the host has access to. To the host, we are not adding more LUNs; all the host is going to see is that the LUN has grown in size. We will explain later how to make the space available to the host. There are two types of MetaLUNs, Concatenated and Striped. Each has its advantages and disadvantages, but the end result, whichever you use, is that you are growing, "expanding", a LUN.

A Concatenated MetaLUN is advantageous because it allows a LUN to be "grown" quickly and the space made available to the host rather quickly as well. The other advantage is that the Component LUNs added to the LUN assigned to the host can be of a different RAID type and of a different size. The host writes to cache on the Storage Processor, and the Storage Processor then flushes out to disk. With a Concatenated MetaLUN, the Clariion only writes to one LUN at a time. The Clariion is going to write to LUN 6 first. Once the Clariion fills LUN 6 with data, it then begins writing to the next LUN in the MetaLUN, which is LUN 23. The Clariion will continue writing to LUN 23 until it is full, then write to LUN 73. Because of this writing process, there is no performance gain; the Clariion is still only writing to one LUN at a time.

A Striped MetaLUN is advantageous because, if set up properly, it can enhance performance as well as protection. Let's look first at how the MetaLUN is set up and written to, and how performance can be gained. With the Striped MetaLUN, the Clariion writes to all of the LUNs that make up the MetaLUN, not just one at a time. The advantage of this is more spindles/disks. The Clariion will stripe the data across all of the LUNs in the MetaLUN, and if the LUNs are on different RAID Groups, on different Buses, this will allow the application to be striped across fifteen (15) disks and, in the example above, three back-end buses of the Clariion. The workload of the application is spread out across the back-end of the Clariion, thereby possibly increasing speed. As illustrated above, the first data stripe (Data Stripe 1) that the Clariion writes out to disk will go across the five disks of RAID Group 5 where LUN 6 lives. The next stripe of data (Data Stripe 2) is striped across the five disks that make up RAID Group 10 where LUN 23 lives. And finally, the third stripe of data (Data Stripe 3) is striped across the five disks that make up RAID Group 20 where LUN 73 lives. The Clariion then starts the process all over again with LUN 6, then LUN 23, then LUN 73. This gives the application 15 disks to be spread across, and three buses.

As for data protection, this would be similar to building a 15-disk RAID group. The problem with a 15-disk RAID group is that if one disk were to fail, it would take a considerable amount of time to rebuild the failed disk from the other 14 disks. Also, if two disks were to fail in this RAID group and it was RAID 5, data would be lost. In the drawing above, each of the LUNs is on a different RAID Group. That means we could lose a disk in RAID Group 5, RAID Group 10, and RAID Group 20 at the same time and still have access to the data. The other advantage of this configuration is that the rebuilds occur within each individual RAID Group. Rebuilding from four disks is going to be much faster than from the 14 disks in a fifteen-disk RAID Group.

The disadvantage of using a Striped MetaLUN is that it takes time to create. When a component LUN is added to the MetaLUN, the Clariion must restripe the data across the existing LUN(s) and the new LUN. This takes time and resources on the Clariion, and there may be a performance impact while a Striped MetaLUN is re-striping the data. Also, the space is not available to the host until the MetaLUN has completed re-striping the data.
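As a rough sketch of the difference in write layout, the following Python fragment maps a host block address to a component LUN for each MetaLUN type. The LUN sizes and the stripe element are hypothetical values chosen for the example, not Navisphere internals.

    LUN_BLOCKS = {"LUN6": 1000000, "LUN23": 1000000, "LUN73": 1000000}
    ORDER = ["LUN6", "LUN23", "LUN73"]
    STRIPE_ELEMENT = 128        # blocks written to one component before moving on

    def concatenated(lba):
        """Fill LUN6 completely, then LUN23, then LUN73."""
        for lun in ORDER:
            if lba < LUN_BLOCKS[lun]:
                return lun, lba
            lba -= LUN_BLOCKS[lun]
        raise ValueError("address beyond metaLUN capacity")

    def striped(lba):
        """Rotate across all three components one stripe element at a time."""
        element, offset = divmod(lba, STRIPE_ELEMENT)
        lun = ORDER[element % len(ORDER)]
        return lun, (element // len(ORDER)) * STRIPE_ELEMENT + offset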

What is a MetaLUN?


A metaLUN is a type of LUN whose maximum capacity can be the combined capacities of all the LUNs that compose it. The metaLUN feature lets you dynamically expand the capacity of a single LUN (base LUN) into a larger unit called a metaLUN. You do this by adding LUNs to the base LUN. You can also add LUNs to a metaLUN to further increase its capacity. Like a LUN, a metaLUN can belong to a Storage Group, and can participate in SnapView, MirrorView and SAN copy sessions. MetaLUNs are supported only on CX-Series storage systems.
A metaLUN may include multiple sets of LUNs and each set of LUNs is called a component. The LUNs within a component are striped together and are independent of other LUNs in the metaLUN. Any data that gets written to a metaLUN component is striped across all the LUNs in the component. The first component of any metaLUN always includes the base LUN. The number of components within a metaLUN and the number of LUNs within a component depend on the storage system type. The following table shows this relationship:
Storage System Type    LUNs Per metaLUN Component    Components Per metaLUN
CX700, CX600           32                            16
CX500, CX400           32                            8
CX300, CX200           16                            8
You can expand a LUN or metaLUN in two ways — stripe expansion or concatenate expansion. A stripe expansion takes the existing data on the LUN or metaLUN you are expanding, and restripes (redistributes) it across the existing LUNs and the new LUNs you are adding. The stripe expansion may take a long time to complete. A concatenate expansion creates a new metaLUN component that includes the new expansion LUNs, and appends this component to the existing LUN or metaLUN as a single, separate, striped component. There is no restriping of data between the original storage and the new LUNs. The concatenate operation completes immediately.
During the expansion process, the host is able to process I/O to the LUN or metaLUN, and access any existing data. It does not, however, have access to any added capacity until the expansion is complete. When you can actually use the increased user capacity of the metaLUN depends on the operating system running on the servers connected to the storage system.

If you open Navisphere Manager, select any frame/array, and click Properties of the array, you will see a Cache tab, which shows the cache configuration. There you need to set up values such as the Low Watermark and High Watermark. Have you ever thought about how the CLARiiON behaves at these percentages? Let's take a close look at the FLUSHING methods the CLARiiON uses.
There are many situations in which the CLARiiON Storage Processor has to flush cache to keep some free space in cache memory. Different CLARiiON series have different cache memory sizes.

There are three levels of flushing:
IDLE FLUSHING (LUN is not busy and user I/O continues) Idle flushing keeps some free space in write cache when I/O activity to a particular LUN is relatively low. If data immediacy were most important, idle flushing would be sufficient. If idle flushing cannot maintain free space, though, watermark flushing will be used.

WATERMARK FLUSHING The array allows the user to set two levels called watermarks: the High Water Mark (HWM) and the Low Water Mark (LWM). The base software tries to keep the number of dirty pages in cache between those two levels. If the number of dirty pages in write cache reaches 100%, forced flushing is used.

FORCED FLUSHING Forced flushes also create space for new I/Os, though they dramatically affect overall performance. When forced flushing takes place, all read and write operations are halted to clear space in the write cache. The time taken for a forced flush is very short (milliseconds), and the array may still deliver acceptable performance, even if the rate of forced flushes is in the 50 per second range.
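A minimal sketch of how the three levels relate to the watermark settings (this is not FLARE code, and the percentages are example values, not defaults):

    LOW_WATERMARK = 60    # example value, percent of dirty pages in write cache
    HIGH_WATERMARK = 80   # example value

    def flush_level(dirty_pct, lun_busy, already_flushing):
        if dirty_pct >= 100:
            return "forced"       # host I/O is held while cache space is cleared
        if dirty_pct >= HIGH_WATERMARK or (already_flushing and dirty_pct > LOW_WATERMARK):
            return "watermark"    # keep flushing until the low watermark is reached
        if not lun_busy and dirty_pct > 0:
            return "idle"         # opportunistic flushing while the LUN is quiet
        return "none"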

Let's discuss one of the most important things in a SAN environment: ZONING. Zoning is the most basic way to restrict which hosts have access to which storage. We will be discussing Zoning in detail.

There are basically two types of Zoning: Hard Zoning and Soft Zoning. But first, let's define what Zoning is.

Zoning is a map of host-to-device and device-to-device connectivity that is overlaid on the storage networking fabric, reducing the risk of unauthorized access. Zoning supports the grouping of hosts, switches, and storage on the SAN, limiting access between members of one zone and resources in another.

Zoning also restricts the damage from unintentional errors that can corrupt storage allocations or destabilize the network. For example, if a Microsoft Windows server is mistakenly connected to a fabric dedicated to UNIX applications, the Windows server will write header information to each visible LUN, corrupting the storage for the UNIX servers. Similarly, Fibre Channel register state change notifications (RSCN) that keep SAN entities apprised of configuration changes, can
sometimes destabilize the fabric. Under certain circumstances, an RSCN storm will overwhelm a
switch’s ability to process configuration changes, affecting SAN performance and availability for
all users. Zoning can limit RSCN messages to the zone affected by the change, improving overall
SAN availability.

By segregating the SAN, zoning protects applications against data corruption, accidental access,
and instability. However, zoning has several drawbacks that constrain large-scale consolidated
infrastructures.

Let's first discuss the types of Zoning and their pros and cons:

As I mentioned earlier, Zoning has two basic types (you could say three, but only two are popular in the industry).

1) Soft Zoning 2) Hard Zoning 3) Broadcast Zoning

Soft Zoning : Soft zoning uses the name server to enforce zoning. The World Wide Name (WWN) of the elements enforces the configuration policy.
Pros:
- Administrators can move devices to different switch ports without manually reconfiguring zoning. This gives major flexibility to the administrator: once you create a zone for a device connected to the switch and allocate storage to the host, you don't need to change the zoning when the device moves to a different switch port.

Cons:
- Devices might be able to spoof the WWN and access otherwise restricted resources.
- Device WWN changes, such as the installation of a new Host Bus Adapter (HBA) card, require
policy modifications.
- Because the switch does not control data transfers, it cannot prevent incompatible HBA
devices from bypassing the Name Server and talking directly to hosts.

Hard Zoning: Hard Zoning uses the physical fabric port number of a switch to create zones and enforce the policy.

Pros:

- This system is easier to create and manage than a long list of element WWNs.
- Switch hardware enforces data transfers and ensures that no traffic goes between
unauthorized zone members.
- Hard zoning provides stronger enforcement of the policy (assuming physical security on the
switch is well established).

Cons:
- Moving devices to different switch ports requires policy modifications.

Broadcast Zoning: Broadcast Zoning has a few unique characteristics:
- Only one broadcast zone is allowed per fabric.
- It isolates broadcast traffic.
- It is hardware-enforced.

If you ask me how to choose the zoning type, it depends on the SAN requirements in your data center environment. Port zoning is more secure, but you have to be sure that device connections are not going to change; otherwise, every time you change a storage allocation you have to modify your zoning.

Soft zoning is generally used in the industry, but as I mentioned, it has several cons. So it is hard to say which one you should always use: analyze your data center environment and use the appropriate zoning.

Broadcast zoning is used in large environments with multiple fabric domains.

Having said that, zoning can be enforced by either port number or WWN, but not both. When both a port number and a WWN specify a zone, it is a software-enforced zone. Hardware-enforced zoning is enforced at the Name Server level and in the ASIC. Each ASIC maintains a list of source port IDs that have permission to access any of the ports on that ASIC. Software-enforced zoning is exclusively enforced through selective information presented to end nodes through the fabric Simple Name Server (SNS).
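As a toy illustration of the difference between WWN-based and port-based zone membership (the WWNs and port numbers below are made up, and real switches enforce this in the name server and ASICs as described above):

    soft_zone = {"10:00:00:00:c9:12:34:56", "50:06:04:82:b8:2f:96:54"}  # member WWNs (made up)
    hard_zone = {(1, 4), (1, 12)}                                       # (domain, port) members (made up)

    def soft_zone_allows(wwn_a, wwn_b):
        return {wwn_a, wwn_b} <= soft_zone    # both WWNs must be members of the zone

    def hard_zone_allows(port_a, port_b):
        return {port_a, port_b} <= hard_zone  # both physical ports must be members of the zone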

If you are familiar with switches, you will have noticed that Cisco has the FCNS database and Brocade has the Name Server. Both serve the same purpose: storing information about ports and other fabric entities. FCNS stands for Fibre Channel Name Server.

There are plenty of features on the switch itself to protect your SAN environment, and each vendor comes with a different security policy. Zoning is the basic mechanism for securing access to your data.

Hope this info is useful for beginners. Please leave a comment if you want to know about specific topics.

I have been receiving mail asking me to write on basic storage topics rather than only EMC. Here is the first basic thing to know about FC technology.

Fibre Channel is simply a medium to connect hosts and shared storage. When we talk about SAN, the first thing that comes to mind is Fibre Channel.

Fibre Channel is a serial data transfer interface intended for connecting shared storage to computers, where the storage is not physically attached to the host.

Why is FC so important in a SAN? Because FC gives you high speed through the following process:

1) Networking and I/O protocols, such as SCSI commands, are mapped to FC constructs.
2) They are encapsulated and transported within FC frames.
3) With this, high-speed transfer of multiple protocols is possible over the same physical interface.

FC operates over copper wire or optical fibre at rates up to 4 Gb/s, and up to 10 Gb/s when used as an ISL (E_Port) on supported switches.
At the same time, latency is kept very low, minimizing the delay between data requests and deliveries. For example, the latency across a typical FC switch is only a few microseconds. It is this combination of high speed and low latency that makes FC an ideal choice for time-sensitive or transactional processing environments.

These attributes also support high scalability, allowing more storage systems and servers to be interconnected. Fibre Channel also supports a variety of topologies: it can operate between two devices in a simple point-to-point mode, in an economical arbitrated loop connecting up to 126 devices, or (most commonly) in a powerful switched fabric providing simultaneous full-speed connections for many thousands of devices. Topologies and cable types can easily be mixed in the same SAN.

FC is the most important element in building a SAN; it gives us the flexibility to use protocols such as FCP, FICON, and IP (iSCSI, FCIP, iFCP), and it uses block-type data transfer.

If we want to define FC: Fibre Channel is a storage area networking technology designed to interconnect hosts and shared storage systems within the enterprise. It is a high-performance, high-cost technology. iSCSI, by contrast, is an IP-based storage networking standard that has been touted for the wide range of choices it offers in both performance and price.

Fibre Channel technology is a block-based networking approach based on ANSI standard X3.230-1994 (ISO 14165-1). It specifies the interconnections and signaling needed to establish a network "fabric" between servers, switches and storage subsystems such as disk arrays or tape libraries. FC can carry virtually any kind of traffic.

However, there are some recognized disadvantages to FC. Fibre Channel has been widely criticized for its expense and complexity. A specialized HBA card is needed for each server, and each HBA must then connect to a corresponding port on a Fibre Channel switch, creating the SAN "fabric." Every combination of HBA and switch port can cost thousands of dollars for the storage organization. This is the primary reason why many organizations connect only large, high-end storage systems to their SAN. Once LUNs are created in storage, they must be zoned and masked to ensure that they are only accessible to the proper servers or applications, often an onerous and error-prone procedure. These processes add complexity and costly management overhead to Fibre Channel SANs.

When running inq or syminq, you'll see a column titled Ser Num. This column has quite a bit of information hiding in it.

An example syminq output is below. Your output will differ slightly as I'm creating a table from a book to show this; I don't currently have access to a system where I can get the actual output just yet.

Device Name       Type   Vendor   Product ID   Rev    Ser Num    Cap(KB)
----------------  -----  -------  -----------  -----  ---------  --------
/dev/dsk/c1t0d0          EMC      SYMMETRIX    5265   73009150    459840
/dev/dsk/c1t4d0   BCV    EMC      SYMMETRIX    5265   73010150    459840
/dev/dsk/c1t5d0   GK     EMC      SYMMETRIX    5265   73019150      2880
/dev/dsk/c2t6d0   GK     EMC      SYMMETRIX    5265   7301A281      2880

Using the first and last serial numbers as examples, the serial number is broken out as follows:

73 Last two digits of the Symmetrix serial number
009 Symmetrix device number
15 Symmetrix director number. If <= 16, using the A processor
0 Port number on the director


73 Last two digits of the Symmetrix serial number
01A Symmetrix device number
28 Symmetrix director number. If > 16, using the B processor on board: (${brd}-16).
0 Port number on the director

So, the first example, device 009 is mapped to director 15, processor A, port 0 while the second example has device 01A mapped to director 12, processor B, port 0.
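A small helper that applies the field layout described above; it is a convenience sketch based on this write-up, not an EMC utility, and it assumes the eight-character Ser Num format shown in the table.

    def decode_ser_num(ser):
        """Decode an 8-character syminq Ser Num string, e.g. '73009150'."""
        symm_last2 = ser[0:2]        # last two digits of the Symmetrix serial number
        device = ser[2:5]            # Symmetrix device number (hex)
        director = int(ser[5:7])     # director number as printed
        port = int(ser[7])           # port number on the director
        if director <= 16:
            processor, board = "A", director
        else:
            processor, board = "B", director - 16
        return {"symm": symm_last2, "device": device, "director": board,
                "processor": processor, "port": port}

    # decode_ser_num("73009150") -> device 009 on director 15, processor A, port 0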



Even if you don't buy any of the EMC software, you can get the inq command from their web site. Understanding the serial numbers will help you get a better understanding of which ports are going to which hosts. Understanding this and documenting it will circumvent hours of rapturous cable tracings.

SYMCLI BASE Commands

symapierr - Used to translate SYMAPI error code numbers into SYMAPI error messages.
symaudit - List records from a symmetrix audit log file.
symbcv - Perform BCV support operations on Symmetrix BCV devices.
symcfg - Discover or display Symmetrix configuration information. Refresh the
host's Symmetrix database file or remove Symmetrix info from the file. Can also
be used to view or release a 'hanging' Symmetrix exclusive lock.
symchg - Monitor changes to Symmetrix devices or to logical objects stored on Symmetrix
devices.
symcli - Provides the version number and a brief description of the commands included in
the Symmetrix Command Line Interface.
symdev - Perform operations on a device given the device's Symmetrix name. Can also be
used to view Symmetrix device locks.
symdg - Perform operations on a device group (dg).
symdisk - Display information about the disks within a Symmetrix.
symdrv - List DRV devices on a Symmetrix.
symevent - Monitor or inspect the history of events within a Symmetrix.
symgate - Perform operations on a gatekeeper device.
symhost - Display host configuration information and performance statistics.
syminq - Issues a SCSI Inquiry command on one or all devices.
symlabel - Perform label support operations on a Symmetrix device.
symld - Perform operations on a device in a device group (dg).
symlmf - Registers SYMAPI license keys.
sympd - Perform operations on a device given the device's physical name.
symstat - Display statistics information about a Symmetrix, a Director, a device group, or a
device.
symreturn - Used for supplying return codes in pre-action and post-action script files.

SYMCLI CONTROL Commands

symacl - Administer symmetrix access control information.
symauth - Administer symmetrix user authorization information.
symcg - Perform operations on a composite group (cg).
symchksum - Administer checksum checks when an Oracle database writes
data files on Symmetrix devices.
symclone - Perform Clone control operations on a device group or on a
device within the device group.
symconfigure - Perform modifications on the Symmetrix configuration.
symconnect - Setup or Modify Symmetrix Connection Security functionality.
symmask - Setup or Modify Symmetrix Device Masking functionality.
symmaskdb - Backup, Restore, Initialize or Show the contents of
the device masking database.
symmir - Perform BCV control operations on a device group or on a
device within the device group.
symoptmz - Perform Symmetrix Optimizer control operations.
symqos - Perform Quality of Service operations on Symmetrix Devices
symrdf - Perform RDF control operations on a device group or on a
device within the device group.
symreplicate - Perform automated, consistent replication of data given
a pre-configured SRDF/Timefinder setup.
symsnap - Perform Symmetrix Snap control operations on a device
group or on devices in a device file.
symstar - Perform SRDF STAR management operations.
symrcopy - Perform Symmetrix Rcopy control operations on devices in
a device file.

SYMCLI SRM(Mapping) Commands

symhostfs - Display information about a host File, Directory,
or host File System.
symioctl - Send IO control commands to a specified application.
symlv - Display information about a volume in Logical Volume
Group (vg).
sympart - Display partition information about a host device.
symrdb - Display information about a third-party Relational
Database.
symrslv - Display detailed Logical to Physical mapping information
about a logical object stored on Symmetrix devices.
symvg - Display information about a Logical Volume Group (vg).

MDS Interoperability Mode Limitations

When a VSAN is configured for the default interoperability mode, the MDS 9000 Family of switches is limited in the following areas when interoperating with non-MDS switches:

• Interop mode only affects the specified VSAN. The MDS 9000 switch can still operate with full functionality in other non-interop mode VSANs. All switches that partake in the interoperable VSAN should have that VSAN set to interop mode, even if they do not have any end devices.

• Domain IDs are restricted to the 97 to 127 range, to accommodate McData's nominal restriction to this same range. Domain IDs can either be set up statically (the MDS 9000 switch will only accept one domain ID; if it does not get that domain ID, it isolates itself from the fabric), or preferred (if the MDS 9000 switch does not get the requested domain ID, it takes any other domain ID).

• TE ports and PortChannels cannot be used to connect an MDS 9000 switch to a non-MDS switch. Only E ports can be used to connect an MDS 9000 switch to a non-MDS switch. However, TE ports and PortChannels can still be used to connect an MDS 9000 switch to other MDS 9000 switches, even when in interop mode.

• Only the active zone set is distributed to other switches.

• In MDS SAN-OS Release 1.3(x), Fibre Channel timers can be set on a per VSAN basis. Modifying the times, however, requires the VSAN to be suspended. Prior to SAN-OS Release 1.3, modifying timers required all VSANs across the switch to be put into the suspended state.

• The MDS 9000 switch still supports the following zoning limits per switch across all VSANs:

– 2000 zones (as of SAN-OS 3.0, 8000 zones)

– 20000 aliases

– 1000 zone sets

– 20000 members

– 8000 LUN members

– 256 LUN members per zone/alias

Brocade Interoperability Mode Limitations

When interoperability mode is set, the Brocade switch has the following limitations:

• All Brocade switches should be in Fabric OS 2.4 or later.

• Interop mode affects the entire switch. All switches in the fabric must have interop mode enabled.

• The msplmgmtdeactivate command must be run prior to connecting the Brocade switch to either an MDS 9000 switch or a McData switch. This command uses Brocade proprietary frames to exchange platform information. The MDS 9000 switch and McData switches do not understand these proprietary frames, and rejection of these frames causes the common E ports to become isolated.

• Enabling interoperability mode is a disruptive process to the entire switch. It requires the switch to be rebooted.

• If there are no zones defined in the effective configuration, the default behavior of the fabric is to allow no traffic to flow. If a device is not in a zone, it is isolated from other devices.

• Zoning can only be done with pWWNs. You cannot zone by port numbers or nWWNs.

• To manage the fabric from a Brocade switch, all Brocade switches must be interconnected. This interconnection facilitates the forwarding of the inactive zone configuration.

• Domain IDs are restricted to the 97 to 127 range to accommodate McData's nominal restriction to this same range.

• Brocade WebTools will show a McData switch or an MDS 9000 switch as an anonymous switch. Only a zoning configuration of the McData switch or the MDS 9000 switch is possible.

• Private loop targets will automatically be registered in the fabric using translative mode.

• Fabric watch is restricted to Brocade switches only.

• The full zone set (configuration) is distributed to all switches in the fabric. However, the full zone set is distributed in a proprietary format, which only Brocade switches accept. Other vendors reject these frames, and accept only the active zone set (configuration).

• The following services are not supported:

– The Alias Server


What are the CLARiiON SAN fan-in and fan-out configuration rules?

Fan-In Rule: A server can be zoned to a maximum of four storage systems.

Fan-Out Rule:

  • For FC5300 with Access Logix software - 1 - 4 servers (eight initiators) to 1 storage system.
  • For FC4500 with Access Logix - 15 servers to 1 storage system; each server with a maximum of one (single) path to an SP.
  • For FC4700 with Base or Access Logix software 8.42.xx or higher - 32 initiators per SP port for a maximum of 128 initiators per FC4700. Each port on each SP supports 32 initiators. Ports 0 and 1 on each SP in an FC4700 handle server connections. Port 1 on each SP in an FC4700 with MirrorView also handles remote mirror connections. In a remote mirror configuration, each path between SP A port 1 on one storage system and SP A port 1 on another storage system counts as one initiator for each port 1. Likewise, each path between SP B port 1 on one storage system and SP B port 1 on another storage system counts as one initiator for each port 1.
  • For FC4700 with Base or Access Logix software 8.41.xx or lower - 15 servers to 1 storage system; each server with a maximum of one (single) path to an SP.
  • For CX200 - 15 initiators per SP, each with a maximum of one (single) path to an SP; maximum of 15 servers.
  • Fan-Out for CX300 - 64 initiators per SP for a maximum of 128 initiators per storage system.
  • For CX400 - 32 initiators per SP port for a maximum of 128 initiators per CX400. Each port on each SP supports 32 initiators. Ports 0 and 1 on each SP in a CX400 handle server connections. Port 1 on each SP in a CX400 with MirrorView also handles remote mirror connections. In a remote mirror configuration, each path between SP A port 1 on one storage system and SP A port 1 on another storage system counts as one initiator for each port 1. Likewise, each path between SP B port 1 on one storage system and SP B port 1 on another storage system counts as one initiator for each port 1.
  • Fan-Out for CX500 - 128 initiators per SP and a maximum of 256 initiators per CX500 available for server connections. Ports 0 and 1 on each SP handle server connections. Port 1 on each SP in a CX500 with MirrorView/A or MirrorView/S enabled also handles remote mirror connections. Each path used in a MirrorView or SAN Copy relationship between two storage systems counts as an initiator for both storage systems.
  • For CX600 - 32 initiators per SP port and maximum of 256 initiators per CX600 available for server connections. Ports 0, 1, 2, and 3 on each SP in any CX600 handle server connections. Port 3 on each SP in a CX600 with MirrorView also handles remote mirror connections. In a remote mirror configuration, each path between SP-A port 3 on one storage system and SP-A port 3 on another storage system counts as one initiator for each port 3. Likewise, each path between SP-B port 3 on one storage system and SP-B port 3 on another storage system counts as one initiator for each port 3.
  • Fan-Out for CX700 - 256 initiators per SP and a maximum of 512 initiators per CX700 available for server connections. Ports 0, 1, 2, and 3 on each SP in any CX700 handle server connections. Port 3 on each SP in a CX700 with MirrorView/A or MirrorView/S enabled also handles remote mirror connections. Each path used in a MirrorView or SAN Copy relationship between two storage systems counts as an initiator for both storage systems.
  • An initiator is any device with access to an SP port. Each port on each SP supports 32 initiators. Check with your support provider to confirm that the above rules are still in effect.

Save Set Staging:

Save set staging is a process of transferring data from one storage medium to another. Staging reduces the time it takes to complete a backup by directing the initial backup to a high performance file type or adv_file device. The data can then be staged to a storage medium, freeing up the disk space. Any volume type, such as Default, Index Archive, or Default Clone, can be staged. Staging is particularly well suited for data that has been backed up on file type or adv_file devices. Staging allows the occupied disk space on file type or adv_file devices to be reclaimed so that the disk space can be used for other purposes. Use staging to move the data to more permanent storage, such as an optical or tape volume, or even another, lower-priority device. Staging also allows data to be moved off the device outside the backup period, ensuring that sufficient disk space is available for the next backup session. Additional licensing may be required.

You can create, edit, and delete staging policies as you can for other NetWorker resources. As part of the client setup, the use of a staging device can be selected for each pool (or set of pools) for backup, archive, and migration. The files are retained for the specified time in the disk staging pool before being moved to a tape device or optical disk. Any number of devices can be in the staging pool, and a save set can be staged as many times as required, for example to disk, to optical disk, to a local tape device, and to a remote tape device. Also, a volume can be staged to a second volume, and then that data on the second volume can be staged back to the first volume.

The staging process is driven by one of the following events:

- As part of an automatic process, such as keeping the save set for 30 days on the staging device before staging the data to the next device.

- As part of an event driven process, such as when available space in the staging pool drops below a set threshold. When this happens, the oldest save sets are moved first, until available space reaches the upper threshold that has been set.

- As part of an administrator initiated process, such as allowing the administrator to either reset the threshold and kick off staging or manually select save sets to stage.

When you enable a staging policy, the NetWorker server creates a clone of the save set you specify on a clone volume of the medium you specify. After the save set is staged, the save set is deleted from the filesystem to free the space.

The NetWorker server tracks the location of the save set in the media database. The retention policy for the save set does not change when the data is staged. If the file type volume is on a storage node that is running NetWorker software 6.1 or earlier, the tape is not automatically marked appendable after the staging operation.
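A minimal sketch of the event-driven staging described above, assuming hypothetical low/high free-space thresholds; NetWorker's actual policy engine is configured through its resources rather than code like this.

    LOW_FREE_PCT = 10    # hypothetical: start staging when free space drops below this
    HIGH_FREE_PCT = 30   # hypothetical: stop once free space recovers to this

    def sets_to_stage(free_pct, save_sets):
        """save_sets: (name, size_pct) tuples ordered oldest first; returns names to stage."""
        if free_pct >= LOW_FREE_PCT:
            return []                    # enough free space, nothing to do yet
        staged = []
        for name, size_pct in save_sets:
            if free_pct >= HIGH_FREE_PCT:
                break
            staged.append(name)          # oldest save sets are moved first
            free_pct += size_pct         # space reclaimed once the set is staged and deleted
        return staged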

There are WWN decoder tools available for EMC arrays, but here I am going to discuss how to decode a WWN manually.
Each Symmetrix SAF port, RAF port, EF FICON port or DAF port (DMX only) has a unique worldwide name (WWN). The WWN is associated with the Tachyon chip on the director. It is intended to remain unique per director so that the director can be accessed on a storage area network. The Symmetrix SAF/RAF/DAF/EF WWN depends on the Symmetrix serial number, the director number, the processor letter, and the port on the processor. When the SAF/RAF/DAF is inserted into the Symmetrix, it discovers the Symmetrix serial number and slot number, and the WWNs are set for the ports on the director.

For Symm 4/4.8/5 (2-port or 4-port) Fibre Channel front-end directors, the WWN breakdown is as follows:

The director WWN (50060482B82F9654) can be broken down (in binary) as follows:

First 28 Bits (from the left, bits 63-36, binary) of WWN are assigned by the IEEE (5006048, the vendor ID for EMC Symmetrix)

5006048    2    B    8    2    F    9    6    5    4
          0010 1011 1000 0010 1111 1001 0110 0101 0100

Bits 35-6 regrouped: 0AE0BE59 hex = 182500953 = Symm S/N

Bits 35 through 6 represent the Symmetrix serial number; the decode starts at bit 6 and works up to bit 35 to create the serial number. This is broken down as illustrated above.

The least significant 6 bits (bits 5 through 0) can be decoded to obtain the Symmetrix director number, processor and port. Bit 5 is used to designate the port on the processor (0 for A, 1 for B). Bit 4, known as the side bit, is used to designate the processor (0 for A, 1 for B). The least significant 4 bits, 3 through 0, represent the Symm slot number.


01 0100 = 14 hex -----> director 5b port A

In review, this WWN represents EMC Symmetrix serial number 182500953, director 5b port A

For the Symm DMX product family (DMX-1/2/3), the WWN breakdown is as follows:

The director WWN (5006048ACCC86A32) can be broken down (in binary) as follows:

Again, like Symm 4/5, the first 28 bits (63-36) are assigned by the IEEE

5006048    A    C    C    C    8    6    A    3    2
          1010 1100 1100 1100 1000 0110 1010 0011 0010

Bits 34-6 regrouped: B3321A8 hex = 187900328 = Symm S/N

Bit 35 is now known as the 'Half' bit and is used to decode which half of the board the processor/port lies on.

Bits 34 through 6 represent the serial number; the decode starts at bit 6 and works up to bit 34 to create the serial number. This is broken down as illustrated above.

In conjunction with bit 35, the last 6 bits of the WWN represent the director number, processor and port. Bit 35, the 'Half' bit, represents either processor A and B, or C and D (0 for A and B, 1 for C and D). Bit 5 again represents the port on the processor (0 for A, 1 for B). Bit 4, the side bit, again represents the processor but with a slight change (if 0 then port A or C, if 1 then port B or D, depending on what the half bit is set to). The last 4 bits, 3 through 0, represent the Symm slot number.

1 | 11 0010 -------> half bit = 1 (processor C or D), port bit = 1 (port B), side bit = 1 (with half = 1, looking at the C and D processors only, side = 1 means processor D)
slot bits 0010 binary = 2 decimal (slot 2, i.e. director 3)

In review, the WWN of 5006048ACCC86A32 represents EMC Symmetrix serial number 187900328, director 3d port B
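Putting the Symm 4/4.8/5 rules together, here is a small Python decoder that follows the bit positions described above. It is a sketch based on this write-up, not an EMC tool, and it does not handle the DMX 'half' bit.

    def decode_symm45_wwn(wwn_hex):
        w = int(wwn_hex, 16)
        vendor = (w >> 36) & 0xFFFFFFF   # bits 63-36: IEEE/vendor prefix (5006048)
        serial = (w >> 6) & 0x3FFFFFFF   # bits 35-6: Symmetrix serial number
        port = (w >> 5) & 0x1            # bit 5: 0 = port A, 1 = port B
        side = (w >> 4) & 0x1            # bit 4 (side bit): 0 = processor A, 1 = processor B
        slot = w & 0xF                   # bits 3-0: slot number
        return {"vendor_prefix": "%07X" % vendor,
                "serial": serial,
                "director": slot + 1,    # slot 4 -> director 5 in the example above
                "processor": "AB"[side],
                "port": "AB"[port]}

    # decode_symm45_wwn("50060482B82F9654")
    #   -> serial 182500953, director 5, processor B, port A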


Generally we never give a thought to the VCMDB database after initializing it the first time. It starts to matter when you mess something up or hit a disaster. This database is extremely important on DMX: if you lose it, you cannot get the DMX configuration back at any cost. So, I am discussing the different types of VCMDB on DMX.

From Enginuity 5771 onward, up to 16,000/64,000 addressable devices are supported, and therefore the Volume Control Manager Database needs to be physically larger. At 5670, per EMC recommendations, CEs were encouraged to create a 96 cylinder (minimum) VCMDB during new installs. This was to cater for future upgrades to 5671.

To summarize the VCMDB types applicable to DMX:

Type 3 - this can cater for 32 fibre or iSCSI initiators per port. Introduced with Enginuity 5669 and requires a 24 cylinder (minimum) VCMDB and Solutions Enabler v5.2.

Type 4 - this can cater for 64 fibre or 128 iSCSI initiators per port. Introduced with Enginuity 5670 and requires a 48 cylinder (minimum) VCMDB and Solutions Enabler v5.3.

Type 5 - this can support 64 fibre or 128 iSCSI initiators per port AND cater for 16,000 devices. Introduced with Enginuity 5671 and requires a 96 cylinder (minimum) VCMDB and Solutions Enabler v6.0. (Note: without a type 5 96cyl VCMDB and SE 6.0 you will be restricted to 8192 logical volumes as in 5670).

Type 6 - this can support 128 fibre or 256 iSCSI initiators per port AND cater for 32,000 devices available on DMX-3 with Enginuity 5771 (at GA release). Currently the Type 6 database (at latest Enginuity 5771 with Solution Enabler 6.0 and above) will cater for 256 fibre or 512 iSCSI initiators and 64,000 logical devices.

What are the requirements for Type 5?

The three requirements for a Type 5 VCM database on DMX (and support for up to 16,000 customer addressable volumes) are a correctly configured 96 cylinder VCMDB device, Enginuity 5671 and Solutions Enabler v6.0 or above. Note that the VCMDB "type" reflects the internal data structure of the Volume Control Manager Database. Therefore a 96 cylinder VCMDB size does NOT mean that you have a Type 5 VCMDB.

Note:
• At 5670 with a 48 cylinder VCMDB it is still type 4.
• At 5670 with a 96 cylinder VCMDB it is still type 4.
• At 5670 with a 96 cylinder VCMDB and SE 6.0 it is still type 4 - do not try to convert the database using the SYMCLI (EMC do not support more than 8192 logical volumes at 5670).
• At 5671 with a 48 cylinder VCMDB and SE 6.0 it is still type 4 - the VCMDB is NOT physically large enough.
• At 5671 with a 96 cylinder VCMDB and SE 5.5 it is still type 4 - the VCMDB is large enough but SE 5.5 does not support the Type 5 database.
• At 5671 with a 96 cylinder VCMDB and SE 6.0 it is a type 5 database - if you have run the “symmaskdb convert -vcm_type 5” command. Be aware that if you convert from a lower type database to a higher type, any hosts running a Solutions Enabler version that does not support the higher VCMDB type will NOT be able to access the "new" database.
• At 5771 (DMX-3) the VCMDB data now resides in the SFS volumes. At 5771 the VCMDB should be configured to the SAME size as a standard FBA gatekeeper (this can be 3 cylinders due to the 64KB track size, but 6 cylinders, as recommended in some guides, is also perfectly acceptable) but it must still be assigned the VCM fibre gatekeeper status. Note that the VCMDB "gatekeeper" on DMX-3 is no longer shown as "write disabled" (it is now a "gatekeeper" rather than a physical volume used for physical storage - the Volume Control Manager data is protected and stored on the internal SFS volumes).
• Note that Enginuity 5771 will ONLY support a Type 6 VCM database (again the data is resident on the SFS volumes). This re-location of the physical database to the SFS volumes caters for the increased host connectivity AND the increase in logical volumes supported with DMX-3.
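The notes above boil down to a small decision table. The following sketch captures it with simplified version handling; it is illustrative only and the argument names are hypothetical.

    def effective_vcmdb_type(enginuity, vcmdb_cyl, se_version, converted_to_type5=False):
        if enginuity >= 5771:
            return 6     # DMX-3: the VCMDB data resides on the SFS volumes
        if enginuity == 5671 and vcmdb_cyl >= 96 and se_version >= 6.0 and converted_to_type5:
            return 5     # only after running "symmaskdb convert -vcm_type 5"
        if enginuity <= 5669:
            return 3     # 24 cylinder VCMDB era
        return 4         # 5670, an undersized VCMDB, or an older SE level stays type 4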
