Here we are looking at three possible ways in which a host can be attached to a Clariion. From talking with customers in class, these seem to be the three most common ways in which hosts are attached.
The key points to the slide are:
1. The LUN (the disk space created on the Clariion that will eventually be assigned to the host) is owned by one of the Storage Processors, not both.
2. The host needs to be physically connected via fibre, either directly attached or through a switch.




CONFIGURATION ONE:

In Configuration One, we see a host with a single Host Bus Adapter (HBA) attached to a single switch. From the switch, one cable runs to SP A and one to SP B. The reason this host is zoned and cabled to both SPs is to handle a LUN trespass. In Configuration One, if SP A goes down, reboots, etc., the LUN trespasses to SP B. Because the host is cabled and zoned to SP B, it still has access to the LUN via SP B. The problem with this configuration is the list of single points of failure. If you lose the HBA, the switch, or a connection between the HBA and the switch (the fibre, the GBIC on the switch, etc.), you lose access to the Clariion, and thereby access to your LUNs.

CONFIGURATION TWO:

In Configuration Two, we have a host with two Host Bus Adapters. HBA1 is attached to a switch and, from there, zoned and cabled to SP B. HBA2 is attached to a separate switch and, from there, zoned and cabled to SP A. The path from HBA2 to SP A is shown as the "Active Path" because that is the path data takes from the host to reach the LUN, which is owned by SP A. The path from HBA1 to SP B is shown as the "Standby Path" because the LUN does not belong to SP B. The only time the host would use the "Standby Path" is in the event of a LUN trespass. The advantage of Configuration Two over Configuration One is that there is no single point of failure.
Now, let's say we install PowerPath on the host. With PowerPath, the host gains two capabilities. First, it allows the host to initiate the trespass of the LUN. With PowerPath on the host, if there is a path failure (a failed HBA, a downed switch, etc.), the host issues the trespass command to the SPs, and the SPs move the LUN, temporarily, from SP A to SP B. The second advantage of PowerPath is that it allows the host to load balance data leaving the host. Again, this has nothing to do with load balancing the Clariion SPs; we will get there later. However, in Configuration Two we have only one connection from the host to SP A, so this is the only path the host has, and will use, to move data for this LUN.
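To make the failover behavior concrete, here is a minimal sketch of the idea in Python. This is not PowerPath's actual code; the class names, the trespass call, and the path labels are all illustrative assumptions.

```python
# Hypothetical sketch of host-initiated trespass (not PowerPath's real code).

class Path:
    def __init__(self, hba, sp, healthy=True):
        self.hba = hba          # e.g. "HBA2"
        self.sp = sp            # e.g. "SP A"
        self.healthy = healthy

class MultipathDevice:
    def __init__(self, lun_id, active, standby):
        self.lun_id = lun_id
        self.active = active    # path to the SP that owns the LUN
        self.standby = standby  # path to the peer SP

    def send_io(self, block):
        if not self.active.healthy:
            # Active path failed: ask the array to trespass the LUN to
            # the peer SP, then promote the standby path.
            self.trespass_to(self.standby.sp)
            self.active, self.standby = self.standby, self.active
        print(f"LUN {self.lun_id}: block {block} via "
              f"{self.active.hba} -> {self.active.sp}")

    def trespass_to(self, sp):
        # On a real host this is a SCSI command sent to the array;
        # here it is just a log line.
        print(f"LUN {self.lun_id}: trespass requested, new owner {sp}")

dev = MultipathDevice(6, Path("HBA2", "SP A"), Path("HBA1", "SP B"))
dev.send_io(100)                # normal I/O down the active path
dev.active.healthy = False      # simulate an HBA or switch failure
dev.send_io(101)                # triggers the trespass, I/O continues via SP B
```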

CONFIGURATION THREE:

In Configuration Three, hardware-wise, we have the same as Configuration Two. However, notice that a few more cables run from the switches to the Storage Processors. HBA1 is attached to its switch and is zoned and cabled to both SP A and SP B; HBA2 is attached to its switch and is likewise zoned and cabled to both SP A and SP B. This gives HBA1 and HBA2 each an "Active Path" to SP A and a "Standby Path" to SP B. Because of this, the host can now route data down both active paths to the Clariion, giving the host "load balancing" capability (see the sketch below). Also, the only time a LUN should trespass from one SP to another is if there is a Storage Processor failure. If the host were to lose HBA1, it still has HBA2 with an active path to the Clariion; the same goes for a switch failure or a connection failure.
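A round-robin rotation is the simplest way to picture that load balancing. The sketch below is illustrative only; PowerPath offers several policies, and the path names are assumptions.

```python
# Minimal round-robin sketch for Configuration Three: two active paths
# to SP A (the LUN owner), two standby paths to SP B.

from itertools import cycle

active_paths = ["HBA1 -> SP A", "HBA2 -> SP A"]   # SP A owns the LUN
standby_paths = ["HBA1 -> SP B", "HBA2 -> SP B"]  # used only after a trespass

def dispatch(io_requests, paths):
    chooser = cycle(paths)                         # alternate across paths
    for io in io_requests:
        print(f"I/O {io} sent via {next(chooser)}")

dispatch(range(4), active_paths)
# I/O 0 sent via HBA1 -> SP A
# I/O 1 sent via HBA2 -> SP A
# I/O 2 sent via HBA1 -> SP A
# I/O 3 sent via HBA2 -> SP A
```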

The purpose of a MetaLUN is to let a Clariion grow the size of a LUN on the fly. Let's say a host is running out of space on a LUN. From Navisphere, we can "expand" that LUN by adding more LUNs to the LUN the host has access to. From the host's point of view, we are not adding more LUNs; all the host sees is that the LUN has grown in size. We will explain later how to make that space available to the host.

There are two types of MetaLUNs, concatenated and striped. Each has its advantages and disadvantages, but the end result, whichever you use, is that you are growing, "expanding," a LUN.

A concatenated MetaLUN is advantageous because it allows a LUN to be grown quickly, with the space made available to the host rather quickly as well. The other advantage is that the component LUNs added to the LUN assigned to the host can be of a different RAID type and a different size. The host writes to cache on the Storage Processor, and the Storage Processor then flushes out to disk. With a concatenated MetaLUN, the Clariion writes to only one LUN at a time. The Clariion writes to LUN 6 first; once it fills LUN 6 with data, it begins writing to the next LUN in the MetaLUN, LUN 23. It continues writing to LUN 23 until that is full, then writes to LUN 73. Because of this writing process, there is no performance gain: the Clariion is still writing to only one LUN at a time.

A striped MetaLUN is advantageous because, if set up properly, it can enhance performance as well as protection. Let's look first at how the MetaLUN is set up and written to, and how performance can be gained. With a striped MetaLUN, the Clariion writes to all of the LUNs that make up the MetaLUN, not just one at a time. The advantage of this is more spindles/disks. The Clariion stripes the data across all of the LUNs in the MetaLUN, and if the LUNs are on different RAID Groups on different buses, this allows the application to be striped across fifteen (15) disks and, in the example above, three back-end buses of the Clariion. The workload of the application is spread out across the back end of the Clariion, thereby possibly increasing speed. As illustrated above, the first data stripe (Data Stripe 1) that the Clariion writes out to disk goes across the five disks of RAID Group 5, where LUN 6 lives. The next stripe (Data Stripe 2) is striped across the five disks that make up RAID Group 10, where LUN 23 lives. And finally, the third stripe (Data Stripe 3) is striped across the five disks that make up RAID Group 20, where LUN 73 lives. Then the Clariion starts the process all over again with LUN 6, then LUN 23, then LUN 73. This gives the application 15 disks, and three buses, to be spread across.

As for data protection, this is similar to building a 15-disk RAID Group, but better. The problem with a 15-disk RAID Group is that if one disk were to fail, it would take a considerable amount of time to rebuild the failed disk from the other 14 disks. Also, if two disks in that RAID Group were to fail, and it was RAID 5, data would be lost. In the drawing above, each of the LUNs is on a different RAID Group. That means we could lose a disk in RAID Group 5, RAID Group 10, and RAID Group 20 at the same time and still have access to the data. The other advantage of this configuration is that each rebuild occurs within its own RAID Group: rebuilding from four disks is going to be much faster than from the 14 disks of a fifteen-disk RAID Group.

The disadvantage of a striped MetaLUN is that it takes time to create. When a component LUN is added to the MetaLUN, the Clariion must restripe the data across the existing LUN(s) and the new LUN. This takes time and Clariion resources, and there may be a performance impact while a striped MetaLUN is restriping the data. Also, the space is not available to the host until the MetaLUN has finished restriping.
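The difference in write placement is easy to see in a few lines of code. This is a toy model, assuming the three example LUNs from the text and an invented stripes-per-LUN figure; it is not how the Clariion actually addresses stripes.

```python
# Toy model: which component LUN receives each data stripe.
component_luns = ["LUN 6", "LUN 23", "LUN 73"]

def concatenated_target(stripe_no, stripes_per_lun=1000):
    # Fill LUN 6 completely, then LUN 23, then LUN 73.
    return component_luns[min(stripe_no // stripes_per_lun,
                              len(component_luns) - 1)]

def striped_target(stripe_no):
    # Rotate every stripe across all component LUNs.
    return component_luns[stripe_no % len(component_luns)]

for s in range(4):
    print(f"stripe {s}: concatenated -> {concatenated_target(s)}, "
          f"striped -> {striped_target(s)}")
# Concatenated keeps writing to LUN 6 until it is full; striped rotates
# LUN 6, LUN 23, LUN 73, LUN 6, ... spreading the work across 15 disks.
```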

What is a MetaLUN?

Posted by Diwakar

A metaLUN is a type of LUN whose maximum capacity can be the combined capacities of all the LUNs that compose it. The metaLUN feature lets you dynamically expand the capacity of a single LUN (the base LUN) into a larger unit called a metaLUN. You do this by adding LUNs to the base LUN. You can also add LUNs to a metaLUN to further increase its capacity. Like a LUN, a metaLUN can belong to a Storage Group and can participate in SnapView, MirrorView, and SAN Copy sessions. MetaLUNs are supported only on CX-series storage systems.
A metaLUN may include multiple sets of LUNs and each set of LUNs is called a component. The LUNs within a component are striped together and are independent of other LUNs in the metaLUN. Any data that gets written to a metaLUN component is striped across all the LUNs in the component. The first component of any metaLUN always includes the base LUN. The number of components within a metaLUN and the number of LUNs within a component depend on the storage system type. The following table shows this relationship:
Storage System Type | LUNs per metaLUN Component | Components per metaLUN
CX700, CX600        | 32                         | 16
CX500, CX400        | 32                         | 8
CX300, CX200        | 16                         | 8
You can expand a LUN or metaLUN in two ways — stripe expansion or concatenate expansion. A stripe expansion takes the existing data on the LUN or metaLUN you are expanding, and restripes (redistributes) it across the existing LUNs and the new LUNs you are adding. The stripe expansion may take a long time to complete. A concatenate expansion creates a new metaLUN component that includes the new expansion LUNs, and appends this component to the existing LUN or metaLUN as a single, separate, striped component. There is no restriping of data between the original storage and the new LUNs. The concatenate operation completes immediately.
During the expansion process, the host is able to process I/O to the LUN or metaLUN, and access any existing data. It does not, however, have access to any added capacity until the expansion is complete. When you can actually use the increased user capacity of the metaLUN depends on the operating system running on the servers connected to the storage system.
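A small sketch of the bookkeeping makes the contrast clear. This is an invented model with made-up capacities; a real array tracks components in its own metadata and performs the restripe in the background.

```python
# Invented model of the two expansion types (capacities are made up).

class MetaLUN:
    def __init__(self, base_lun, capacity_gb):
        self.components = [[base_lun]]    # list of striped components
        self.capacity_gb = capacity_gb

    def concatenate_expand(self, new_luns, added_gb):
        # Append the new LUNs as a separate striped component. No
        # existing data moves, so the operation completes immediately.
        self.components.append(list(new_luns))
        self.capacity_gb += added_gb

    def stripe_expand(self, new_luns, added_gb):
        # Fold the new LUNs into the existing component and restripe.
        # The added capacity is unusable until the restripe finishes.
        self.components[0].extend(new_luns)
        self.restripe()                   # long-running on a real array
        self.capacity_gb += added_gb

    def restripe(self):
        print("redistributing existing data across", self.components[0])

m1 = MetaLUN("LUN 6", 100)
m1.concatenate_expand(["LUN 23"], 100)    # capacity available at once
m2 = MetaLUN("LUN 6", 100)
m2.stripe_expand(["LUN 73"], 100)         # capacity available after restripe
```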

If you open Navisphere Manager, select any frame/array, and click the array's properties, you will see a Cache tab showing the cache configuration. This is where you set values such as the Low Watermark and High Watermark. Have you ever wondered how the CLARiiON behaves at these percentages? Let's take a closer look at the flushing methods the CLARiiON uses:
There are many situations in which the CLARiiON Storage Processor has to flush the cache to keep some free space in cache memory. (Different CLARiiON series have different cache memory sizes.)

There are three levels of flushing:
IDLE FLUSHING (LUN is not busy and user I/O continues)
Idle flushing keeps some free space in write cache when I/O activity to a particular LUN is relatively low. If data immediacy were most important, idle flushing would be sufficient. If idle flushing cannot maintain free space, though, watermark flushing will be used.

WATERMARK FLUSHING
The array allows the user to set two levels called watermarks: the High Water Mark (HWM) and the Low Water Mark (LWM). The base software tries to keep the number of dirty pages in cache between those two levels. If the number of dirty pages in write cache reaches 100%, forced flushing is used.

FORCED FLUSHING
Forced flushes also create space for new I/Os, though they dramatically affect overall performance. When forced flushing takes place, all read and write operations are halted to clear space in the write cache. The time taken for a forced flush is very short (milliseconds), and the array may still deliver acceptable performance, even if the rate of forced flushes is in the 50-per-second range.
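The three levels boil down to a simple threshold check against the watermarks you set on the Cache tab. The sketch below is a simplification with example percentages, not the base software's real algorithm.

```python
# Simplified decision logic for the three flush levels.
LWM, HWM = 60, 80   # example Low/High Watermark settings (percent)

def flush_mode(dirty_pct, lun_busy):
    if dirty_pct >= 100:
        return "forced flush: host I/O paused while cache drains"
    if dirty_pct > HWM:
        return "watermark flush: push dirty pages back toward the LWM"
    if not lun_busy:
        return "idle flush: opportunistic flushing while the LUN is quiet"
    return "no flush needed"

for pct in (30, 85, 100):
    print(f"{pct}% dirty -> {flush_mode(pct, lun_busy=False)}")
```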

We discussed Fibre Channel technology briefly in an earlier post. Here we will discuss FC port addressing and fabric ports. There are certain rules for port addressing, and several different port types are used. Let's summarize the key points for each.

FC Port Addressing:

  1. FC uses a 3-byte address identifier (a small parsing sketch follows this list).
  2. Addresses are dynamically assigned during the login process.
  3. Reserved well-known addresses are used for the Fabric, Alias Server, or Multicast Server: hex 'FFFFF0' to hex 'FFFFFE'.
  4. hex 'FFFFFF' is the broadcast address.
  5. Arbitrated Loop addresses are 1 byte long but still use the 3-byte address identifier.
  6. A global identifier is still required; it is achieved through a fixed 64-bit value called the Name_Identifier, or WWN.
  7. The Name_Identifier is used to identify nodes (Node_Name), a port (Port_Name), and a fabric (Fabric_Name).
  8. It is not used to route frames, but is used in mapping to ULPs.
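To make the 24-bit address concrete, here is a small, hypothetical Python helper that splits an FC_ID into the conventional Domain/Area/Port bytes and flags the well-known addresses. The field names follow common FC usage, not any particular vendor tool.

```python
# Hypothetical helper: decode a 24-bit FC address identifier (FC_ID).
# Byte layout (Domain/Area/Port) follows switched-fabric convention.

WELL_KNOWN = range(0xFFFFF0, 0xFFFFFF)   # Fabric, Alias, Multicast servers
BROADCAST = 0xFFFFFF

def describe_fcid(fcid: int) -> str:
    if fcid == BROADCAST:
        return "broadcast address"
    if fcid in WELL_KNOWN:
        return "reserved well-known address"
    domain = (fcid >> 16) & 0xFF   # switch (domain) ID
    area = (fcid >> 8) & 0xFF      # port group on that switch
    port = fcid & 0xFF             # port / AL_PA (1 byte on a loop)
    return f"domain={domain:02x} area={area:02x} port={port:02x}"

print(describe_fcid(0x010200))   # ordinary N_Port address
print(describe_fcid(0xFFFFFC))   # Directory/Name Server (well-known)
print(describe_fcid(0xFFFFFF))   # broadcast
```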

FC Ports:

  1. N_Port: Any port on a node device (e.g., a disk or a PC) that attaches to the fabric.
  2. Fabric: The entity that interconnects the various N_Ports attached to it and is capable of routing frames.
  3. F_Port: A port on a fabric device that connects to an N_Port.
  4. E_Port: A port on the fabric that connects through a link to another fabric port (an inter-element expansion port).
  5. G_Port: A generic fabric port capable of behaving as either an E_Port or an F_Port. The behavior is determined at login time.
  6. L_Port: An N_Port or an F_Port that contains the Arbitrated Loop functions associated with the Arbitrated Loop topology.
  7. FL_Port: A fabric port that may connect either to an N_Port or to an Arbitrated Loop.
  8. GL_Port: A fabric port that may connect to an N_Port, to an E_Port, or to an Arbitrated Loop.
  9. S_Port: A logical node within the fabric, capable of communicating either with other fabric ports or with N_Ports.

Let's discuss LUNz/LUN_Z at the operating system level, specifically in a CLARiiON environment. First, what is a LUN? A LUN, or Logical Unit Number, is simply a logical slice of disk. The terminology comes from the SCSI-3 standards group; if you want to know more, visit www.t10.org and www.t11.org.

A SCSI-3 (SCC-2) term defined as "the logical unit number that an application client uses to communicate with, configure and determine information about an SCSI storage array and the logical units attached to it. The LUN_Z value shall be zero." In the CLARiiON context, LUNz refers to a fake logical unit zero presented to the host to provide a path for host software to send configuration commands to the array when no physical logical unit zero is available to the host. When Access Logix is used on a CLARiiON array, an agent runs on the host and communicates with the storage system through either LUNz or a storage device. On a CLARiiON array, the LUNz device is replaced once a valid LUN is assigned as a host LUN (HLU) via the Storage Group; the agent then communicates through that storage device. The user will continue, however, to see DGC LUNz in the Device Manager.
LUNz has been implemented on CLARiiON arrays to make arrays visible to the host OS and PowerPath when no LUNs are bound on that array. When using a direct connect configuration, and there is no Navisphere Management station to talk directly to the array over IP, the LUNZ can be used as a pathway for Navisphere CLI to send Bind commands to the array.
LUNz also makes arrays visible to the host OS and PowerPath when the host's initiators have not yet logged in to the Storage Group created for the host. Without LUNz, there would be no device on the host for Navisphere Agent to push the initiator record through to the array, and that record is mandatory for the host to log in to the Storage Group. Once this initiator push is done, the host is displayed as an available host to add to the Storage Group in Navisphere Manager (Navisphere Express).
LUNz should disappear once a LUN zero is bound, or when Storage Group access has been attained. To turn on the LUNz behavior on CLARiiON arrays, you must configure the "arraycommpath" setting.

Let's discuss one of the most important things in a SAN environment: zoning. Zoning is the principal way to restrict which hosts have access to which storage. We will discuss zoning in detail.

There are basically two types of zoning: hard zoning and soft zoning. But let's first define what zoning is.

Zoning is nothing but a map of host-to-device and device-to-device connectivity overlaid on the storage networking fabric, reducing the risk of unauthorized access. Zoning supports the grouping of hosts, switches, and storage on the SAN, limiting access between members of one zone and resources in another.

Zoning also restricts the damage from unintentional errors that can corrupt storage allocations or destabilize the network. For example, if a Microsoft Windows server is mistakenly connected to a fabric dedicated to UNIX applications, the Windows server will write header information to each visible LUN, corrupting the storage for the UNIX servers. Similarly, Fibre Channel registered state change notifications (RSCNs), which keep SAN entities apprised of configuration changes, can sometimes destabilize the fabric. Under certain circumstances, an RSCN storm will overwhelm a switch's ability to process configuration changes, affecting SAN performance and availability for all users. Zoning can limit RSCN messages to the zone affected by the change, improving overall SAN availability.

By segregating the SAN, zoning protects applications against data corruption, accidental access,
and instability. However, zoning has several drawbacks that constrain large-scale consolidated
infrastructures.

Let's first discuss the types of zoning and the pros and cons of each:

As I mentioned earlier, zoning basically comes in two types; you could say three, but only two are popular in the industry.

1) Soft Zoning 2) Hard Zoning 3) Broadcast Zoning

Soft Zoning: Soft zoning uses the name server to enforce zoning. The World Wide Names (WWNs) of the elements define the configuration policy.
Pros:
- Administrators can move devices to different switch ports without manually reconfiguring zoning. This gives the administrator major flexibility: once you create a zone for a particular device and allocate storage to the host, you can move that device to any switch port without changing the zone.

Cons:
- Devices might be able to spoof the WWN and access otherwise restricted resources.
- Device WWN changes, such as the installation of a new Host Bus Adapter (HBA) card, require
policy modifications.
- Because the switch does not control data transfers, it cannot prevent incompatible HBA
devices from bypassing the Name Server and talking directly to hosts.

Hard Zoning: Hard zoning uses the physical fabric port number of a switch to create zones and enforce the policy.

Pros:

- This system is easier to create and manage than a long list of element WWNs.
- Switch hardware enforces data transfers and ensures that no traffic goes between
unauthorized zone members.
- Hard zoning provides stronger enforcement of the policy (assuming physical security on the
switch is well established).

Cons:
- Moving devices to different switch ports requires policy modifications.

Broadcast Zoning: Broadcast zoning has a few unique characteristics:
- Only one broadcast zone is allowed per fabric.
- It isolates broadcast traffic.
- It is hardware-enforced.

If you ask me how to choose a zoning type, it depends on the SAN requirements of your data center environment. Port (hard) zoning is more secure, but you have to be sure the device connections will not change; otherwise, every time a device moves to a different port you have to modify your zoning.

Soft zoning is what the industry generally uses, but as I have mentioned, it has several cons. So it is hard to say which one you should always use: analyze your data center environment and use the appropriate zoning.

Broadcast zoning is used in large environments that span multiple fabric domains.

Having said that, zoning can be hardware-enforced by either port number or WWN, but not by both: when both a port number and a WWN specify a zone, it is a software-enforced zone. Hardware-enforced zoning is enforced at the Name Server level and in the ASIC. Each ASIC maintains a list of source port IDs that have permission to access any of the ports on that ASIC. Software-enforced zoning is exclusively enforced through selective information presented to end nodes through the fabric Simple Name Server (SNS).
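Here is a small illustrative sketch of the difference between the two enforcement models. The WWNs and port numbers are invented; a real switch does this work in its SNS and in the port ASICs, not in host-side code.

```python
# Illustrative contrast of soft vs. hard zoning (made-up WWNs and ports).

# Soft zoning: the name server simply filters what each initiator is told.
soft_zone = {"10:00:00:00:c9:aa:bb:01",   # host HBA WWN
             "50:06:01:60:11:22:33:44"}   # array SP port WWN

def name_server_query(initiator_wwn, all_targets):
    # The initiator only "sees" targets in its zone; nothing stops it
    # from sending frames to an address it learned some other way.
    if initiator_wwn in soft_zone:
        return [t for t in all_targets if t in soft_zone]
    return []

# Hard zoning: the ASIC drops frames between unauthorized ports.
hard_zone_ports = {(1, 4), (4, 1)}        # allowed (source, destination) pairs

def asic_forward(src_port, dst_port):
    return (src_port, dst_port) in hard_zone_ports   # else frame is dropped

targets = ["50:06:01:60:11:22:33:44", "50:06:01:61:99:88:77:66"]
print(name_server_query("10:00:00:00:c9:aa:bb:01", targets))
print(asic_forward(1, 4))   # True: permitted path
print(asic_forward(2, 4))   # False: frame dropped in hardware
```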

If you are familiar with switches, you will have noticed that Cisco has the FCNS database and Brocade has its Name Server. Both serve the same purpose: storing all the information about ports and other fabric entities. FCNS stands for Fibre Channel Name Server.

There are plenty of features on the switch itself to protect your SAN environment, and each vendor ships with different security policies. Zoning is the most basic means of securing access to your data.

I hope this info is useful for beginners. Please leave a comment if you want to know about specific topics.
