I am going to demonstrate the full set of LAB exercises for the CLARiiON. If anybody is interested in a specific LAB exercise, please send me a mail and I will try to help by providing that exercise. There are many exercises, such as:
1) Create RAID Group
2) Bind the LUN
3) Create Storage Group
4) Register the Host
5) Present LUN to Host
6) Create Meta LUN etc.

I will try to cover all of these exercises, plus any extra exercise you need. The Allocation Wizard is a very easy way to allocate storage, provided everything is connected and visible to the CLARiiON.

CLARiiON LAB Session -I

I am going to demonstrate a LAB exercise for allocating storage to a host from a CX array using the Allocation Wizard in Navisphere Manager. I will also demo other methods, such as allocating storage without the wizard, because sometimes the host will not log in to the CX frame. I will discuss the command line as well, for those who are more interested in scripting.
Step 1:
Log in to Navisphere Manager (take the IP address of any SP in your domain and enter it in your browser).



You can see all the CLARiiON arrays listed under each domain.



Step 2: Click Allocation in the menu tree on the left side.



Step 3: Select the host name (the host to which you are going to present the LUN) and click Next.
You can select "Assign LUN to this server" or continue without assigning.



Step 4: Click Next and select the CX frame where you want to create the LUN.



Step 5: Click Next. If you have already created a RAID group, it will be listed here; otherwise you can create a new one by selecting New RAID Group. (I will discuss later how to create the different types of RAID group.)


Step 6: Select the RAID group ID and, depending on the RAID type, select the number of disks; for example, if you are creating RAID 5 (3+1), select 4 disks.
Once you have created the RAID group, it will be listed in the RAID Group dialog box. Click Next and select the number of LUNs you want to create on that RAID group. For example, a RAID group created as 3+1 with 500 GB disks gives you roughly 500 GB x 4 x 70% of usable capacity (parity consumes one disk's worth of space, and formatting adds further overhead). You can then create LUNs of different sizes on the same RAID group, as the sketch below illustrates.
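To make that rough estimate concrete, here is a minimal Python sketch of the arithmetic. The 70% figure is the rule of thumb used above; the overhead factor in the code is an assumed allowance for formatting and metadata, so treat the exact numbers as approximations.

def usable_capacity_gb(disk_size_gb, data_disks, parity_disks, overhead_factor=0.93):
    # Approximate usable capacity of a RAID group.
    # overhead_factor is an assumed allowance for formatting/metadata overhead;
    # the real value varies by array model and FLARE release.
    raw_gb = disk_size_gb * (data_disks + parity_disks)
    data_gb = disk_size_gb * data_disks  # parity consumes one disk's worth in RAID 5 (3+1)
    return raw_gb, data_gb * overhead_factor

raw, usable = usable_capacity_gb(500, data_disks=3, parity_disks=1)
print(f"Raw: {raw} GB, usable (approx): {usable:.0f} GB")  # roughly 1400 GB, close to 500 x 4 x 70%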



Step 7: Once you have selected the number of LUNs and the size of each LUN, you can verify the configuration before you click the Finish button.


Step 8: Once you click the Finish button, you can see the status. The system will create a storage group named after the server (you can change the storage group name later) and add the newly created LUNs to that storage group.


You can verify the entire configuration by clicking the storage group name.
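For readers who prefer the command line mentioned at the start of this session, the same allocation can be scripted. The following is a hedged Python sketch that only prints candidate naviseccli commands (create RAID group, bind LUN, create storage group, connect host, add LUN). The SP address, disk IDs, LUN/RAID group numbers, and storage group name are placeholders, and the exact switches available depend on your Navisphere CLI release, so verify the syntax against your documentation before executing anything.

import subprocess

# Placeholder values -- replace with real ones from your environment.
SP = "10.0.0.1"                                  # IP address of either SP
RG, LUN, HLU = "10", "20", "0"                   # RAID group ID, LUN number, host LUN number
SG, HOST = "SG_host01", "host01"                 # storage group name, registered host name
DISKS = ["0_0_4", "0_0_5", "0_0_6", "0_0_7"]     # bus_enclosure_disk IDs for a 3+1 RAID 5 group

steps = [
    ["createrg", RG, *DISKS],                                             # 1) create the RAID group
    ["bind", "r5", LUN, "-rg", RG, "-cap", "100", "-sq", "gb"],           # 2) bind a 100 GB RAID 5 LUN
    ["storagegroup", "-create", "-gname", SG],                            # 3) create the storage group
    ["storagegroup", "-connecthost", "-host", HOST, "-gname", SG, "-o"],  # 4) connect the registered host
    ["storagegroup", "-addhlu", "-gname", SG, "-hlu", HLU, "-alu", LUN],  # 5) present the LUN to the host
]

for step in steps:
    cmd = ["naviseccli", "-h", SP, *step]
    print(" ".join(cmd))                   # dry run: print each command instead of running it
    # subprocess.run(cmd, check=True)      # uncomment to actually execute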

Understanding RAID Levels

RAID is a redundant array of independent disks (originally, a redundant array of inexpensive disks). RAID is a way of storing the same data in different places (thus, redundantly) on multiple hard disks. By placing data on multiple disks, I/O (input/output) operations can overlap in a balanced way, which improves performance. Although using multiple disks lowers the array's mean time between failures (MTBF), storing data redundantly increases fault tolerance.
A RAID appears to the operating system as a single logical hard disk. RAID employs the technique of disk striping, which involves partitioning each drive's storage space into units ranging from a sector (512 bytes) up to several megabytes. The stripes of all the disks are interleaved and addressed in order.
In a single-user system where large records, such as medical or other scientific images, are stored, the stripes are typically set up to be small (perhaps 512 bytes) so that a single record spans all disks and can be accessed quickly by reading all disks at the same time. In a multi-user system, better performance requires establishing a stripe wide enough to hold the typical or maximum size record. This allows overlapped disk I/O across drives. There are at least nine types of RAID, as well as a non-redundant array (RAID-0).
RAID-0:
This technique has striping but no redundancy of data. It offers the best performance but no fault-tolerance.
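To illustrate how striping interleaves addresses across drives, here is a small, generic Python sketch (not CLARiiON-specific) that maps a logical block number to a disk and an offset in a hypothetical RAID-0 set:

def raid0_location(logical_block, num_disks, blocks_per_stripe_unit):
    # Map a logical block to (disk index, block offset on that disk) for plain RAID-0 striping.
    stripe_unit = logical_block // blocks_per_stripe_unit              # which stripe unit overall
    disk = stripe_unit % num_disks                                     # stripe units rotate across disks
    offset = (stripe_unit // num_disks) * blocks_per_stripe_unit + logical_block % blocks_per_stripe_unit
    return disk, offset

# Example: a 4-disk stripe with 128 blocks (64 KB of 512-byte sectors) per stripe unit
for lb in (0, 127, 128, 512):
    print(lb, raid0_location(lb, num_disks=4, blocks_per_stripe_unit=128))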
RAID-1:
This type is also known as disk mirroring and consists of at least two drives that duplicate the storage of data. There is no striping. Read performance is improved since either disk can be read at the same time. Write performance is the same as for single disk storage. RAID-1 provides the best performance and the best fault-tolerance in a multi-user system.
RAID-2:
This type uses striping across disks with some disks storing error checking and correcting (ECC) information. It has no advantage over RAID-3.
RAID-3:
This type uses striping and dedicates one drive to storing parity information. The embedded error checking (ECC) information is used to detect errors. Data recovery is accomplished by calculating the exclusive OR (XOR) of the information recorded on the other drives. Since an I/O operation addresses all drives at the same time, RAID-3 cannot overlap I/O. For this reason, RAID-3 is best for single-user systems with long record applications.
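The XOR-based recovery described above can be demonstrated in a few lines of Python; this is a generic illustration of the parity principle, not CLARiiON-specific code:

from functools import reduce

def xor_strips(strips):
    # Byte-wise XOR of equally sized strips.
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*strips))

data_strips = [b"AAAA", b"BBBB", b"CCCC", b"DDDD"]   # toy data strips on four drives
parity = xor_strips(data_strips)                      # parity strip on the dedicated parity drive

# Simulate losing drive 2: XOR of the surviving strips plus parity rebuilds the lost strip.
surviving = [s for i, s in enumerate(data_strips) if i != 2] + [parity]
rebuilt = xor_strips(surviving)
assert rebuilt == data_strips[2]
print("Rebuilt strip:", rebuilt)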
RAID-4:
This type uses large stripes, which means you can read records from any single drive. This allows you to take advantage of overlapped I/O for read operations. Since all write operations have to update the parity drive, no I/O overlapping is possible. RAID-4 offers no advantage over RAID-5.
RAID-5:
This type includes a rotating parity array, thus addressing the write limitation in RAID-4. Thus, all read and write operations can be overlapped. RAID-5 stores parity information but not redundant data (but parity information can be used to reconstruct data). RAID-5 requires at least three and usually five disks for the array. It's best for multi-user systems in which performance is not critical or which do few write operations.
RAID-6:
This type is similar to RAID-5 but includes a second parity scheme that is distributed across different drives and thus offers extremely high fault- and drive-failure tolerance.
RAID-7:
This type includes a real-time embedded operating system as a controller, caching via a high-speed bus, and other characteristics of a stand-alone computer. One vendor offers this system.
RAID-10:
Combining RAID-0 and RAID-1 is often referred to as RAID-10, which offers higher performance than RAID-1 but at much higher cost. There are two subtypes: In RAID-0+1, data is organized as stripes across multiple disks, and then the striped disk sets are mirrored. In RAID-1+0, the data is mirrored and the mirrors are striped.
RAID-50 (or RAID-5+0):
This type consists of a series of RAID-5 groups that are striped in RAID-0 fashion to improve RAID-5 performance without reducing data protection.
RAID-53 (or RAID-5+3):
This type uses striping (in RAID-0 style) for RAID-3's virtual disk blocks. This offers higher performance than RAID-3 but at much higher cost.
RAID-S (also known as Parity RAID):
This is an alternate, proprietary method for striped parity RAID from Symmetrix that is no longer in use on current equipment. It appears to be similar to RAID-5 with some performance enhancements as well as the enhancements that come from having a high-speed disk cache on the disk array.

Symmetrix DMX-4

The DMX-4 is a new series in the Symmetrix family, with many new features and high performance based on DMX (Direct Matrix Architecture). I will discuss what the DMX architecture is later.

There are two models in the DMX-4 series: the DMX-4 and the DMX-4 950. The DMX-4 supports full connectivity to open systems hosts and to mainframe hosts via ESCON and FICON.
The DMX-4 950 represents a lower entry point for DMX technology, providing open systems connectivity plus FICON connectivity for mainframe hosts.

The DMX-4 is the world's largest high-end storage array, allowing configurations from 96 to 2,400 drives in a single system. Yes, 2,400 drives!! That means you can have petabytes of storage in one box.

The main features of the DMX-4 are:

Mainframe connectivity
4 Gb/s back-end support
Point-to-point connections
SATA II drive support
Support for Enginuity 5772 (Enginuity is the operating system for the DMX)
Improved RAID 5 performance via RAID XOR calculation in multiple locations
Partial sector read hit improvement
128 TimeFinder/Snap sessions on the same source volume
TimeFinder/Clone create and terminate times improved by up to 10 times
For SRDF, synchronous response time improvement of up to 33 times
Avoidance of COFW (Copy on First Write) for TimeFinder/Clone target devices
Symmetrix Virtual LUN
Clone to a larger target device
RSA technology integration with a new feature called the Symmetrix Audit Log
Improved power efficiency
RAID 6 support
A separate management console, the Symmetrix Management Console

DMX-4 Performance Features:

Data paths: 32-128
Data bandwidth: 32-128 GB/s
Message bandwidth: 4-6.4 GB/s
Global memory: 16-512 GB

DMX-4 Storage Capacity:

The DMX-4 offers 73 GB, 146 GB, 300 GB, and 400 GB FC drives.
It also offers 73 GB and 146 GB Flash drives, plus 500 GB and 1 TB SATA II disk drives.

Storage Protection:

Mirrored (RAID 1) – Not supported with Flash Drives
RAID 10, RAID 1/0 – Not supported with Flash Drives
SRDF
RAID 5(3+1) or RAID 5(7+1)
RAID 6 (6+2) or RAID 6 (14+2) – Not supported with Flash Drives

In a typical customer environment, you carry out day-to-day activities such as allocating storage to hosts running different operating systems. If you do not follow best practice, you will see the hosts facing performance issues, because in a SAN environment there are many things to consider before presenting any SAN storage to any host. So, what is the best practice for DMX/Symmetrix? First, understand the customer's requirements: what applications they are running, what protection level they need (the DMX supports RAID 1/0, RAID 5 (3+1), RAID 5 (7+1), etc.), and whether they use a particular disk alignment. Once you understand the customer environment, you can plan the disk configuration on the back end, that is, on the DMX.
In summary, the decision needs to balance all of the following:

1) The number of physical disk slices
2) The number of meta volume members
3) Channel capacity
4) Host administration required
5) Performance required
6) Suitability for future expansion

As a simple general guide, use the following:
(Ideal) Create 18570-cylinder volumes, with all volumes striped at 1 MB in a host stripe set
(Good) Create 18570-cylinder volumes and create metas with 4-8 members
(OK) Create 18570-cylinder volumes and create metas with 16 members
(Warn) Create 18570-cylinder volumes and create metas with 17+ members


Smaller split sizes can be used for small datasets and combined into meta volumes for extra performance with high-load but small-capacity applications.
Larger split sizes are more suitable for larger datasets. A good tip is to spread the volumes for an application over a minimum of 8 physical disks where possible, with only 1 volume per physical disk. Note that most hosts can only see a maximum of 512 host volumes (1 meta volume = 1 host volume, 1 non-meta volume = 1 host volume) per channel.
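As a back-of-the-envelope aid for the guidance above, here is a small Python sketch that estimates how many 18570-cylinder hypers a given capacity needs and which band of the guide the resulting meta falls into. It assumes the DMX-2 geometry used in the sizing formulas later in this post (15 tracks per cylinder, 64 blocks per track, 512-byte blocks), so treat it as an approximation.

import math

CYL_BYTES_DMX2 = 15 * 64 * 512                       # bytes per cylinder on DMX-2 / older Symmetrix
HYPER_CYL = 18570                                    # the standard split size used in the guide above
HYPER_GB = HYPER_CYL * CYL_BYTES_DMX2 / 1024**3      # roughly 8.5 GB per hyper

def meta_members_needed(required_gb):
    # How many 18570-cylinder hypers are needed, with a rough quality rating.
    members = math.ceil(required_gb / HYPER_GB)
    if members <= 1:
        rating = "single hyper (ideal when host-striped)"
    elif members <= 8:
        rating = "good (4-8 member meta)"
    elif members <= 16:
        rating = "ok (16 member meta)"
    else:
        rating = "warn (17+ members: consider larger splits or host striping)"
    return members, rating

print(meta_members_needed(60))   # e.g. a 60 GB requirement -> 8 members, rated "good"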

Much of the time you end up wondering why you had a 500 GB disk but it filled up without the full capacity being utilized. The answer is that whatever size you buy, you do not get the full amount, because the DMX calculates size on a cylinder basis. Let's understand how the DMX calculates the size of a disk:

1) How do the DMX-2 and older Symmetrix convert cylinders to GB?
GB = (cylinders * 15 * 64 * 512) / (1024 * 1024 * 1024)
For example, 18570 cylinders is about 8.5 GB.

2) How do the DMX-2 and older Symmetrix convert GB to cylinders?
Cylinders = (GB * 1024 * 1024 * 1024) / (15 * 64 * 512)

3) How much capacity do I get from a drive?
This depends on the drive and the splits required.
For example, a 73 GB drive with 8 splits gives 8 x 18570-cylinder volumes = approximately 68 GB.

4) How does the DMX-3 convert cylinders to GB?
GB = (cylinders * 15 * 128 * 512) / (1024 * 1024 * 1024)
For example, 18570 cylinders is about 17.0 GB.

5) How does the DMX-3 convert GB to cylinders?
Cylinders = (GB * 1024 * 1024 * 1024) / (15 * 128 * 512)

6) How much capacity do I get from a drive?
This depends on the drive and the splits required.
For example, a 73 GB drive with 4 splits gives 4 x 18570-cylinder volumes = approximately 68 GB.
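These formulas translate directly into a couple of helper functions. Here is a minimal Python sketch that simply reproduces the arithmetic above (15 tracks per cylinder, 64 blocks per track on the DMX-2 and older Symmetrix, 128 blocks per track on the DMX-3, 512-byte blocks):

def cyl_to_gb(cylinders, blocks_per_track):
    # Convert Symmetrix cylinders to GB (use 64 blocks/track for DMX-2, 128 for DMX-3).
    return cylinders * 15 * blocks_per_track * 512 / 1024**3

def gb_to_cyl(gb, blocks_per_track):
    # Convert GB back to cylinders.
    return gb * 1024**3 / (15 * blocks_per_track * 512)

print(cyl_to_gb(18570, 64))        # DMX-2 / Symmetrix: about 8.5 GB
print(cyl_to_gb(18570, 128))       # DMX-3: about 17.0 GB
print(8 * cyl_to_gb(18570, 64))    # a 73 GB drive split 8 ways: about 68 GB usable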

Now you should understand the formulas for calculating the actual usable size of a disk on any Symmetrix or DMX.



Host Attachment to the CLARiiON: Three Common Configurations

Here we are looking at only three possible ways in which a host can be attached to a Clariion. From talking with customers in class, these seem to be the three most common ways in which the hosts are attached.
The key points to the slide are:
1. The LUN, the disk space created on the Clariion that will eventually be assigned to the host, is owned by one of the Storage Processors, not both.
2. The host needs to be physically connected via fibre, either directly attached or through a switch.




CONFIGURATION ONE:

In Configuration One, we see a host that has a single Host Bus Adapter (HBA) attached to a single switch. From the switch, the cables run once to SP A and once to SP B. The reason this host is zoned and cabled to both SPs is the possibility of a LUN trespass. In Configuration One, if SP A were to go down, reboot, etc., the LUN would trespass to SP B. Because the host is cabled and zoned to SP B, the host would still have access to the LUN via SP B. The problem with this configuration is the list of single points of failure. If you were to lose the HBA, the switch, or a connection between the HBA and the switch (the fibre, the GBIC on the switch, etc.), you would lose access to the Clariion, thereby losing access to your LUNs.

CONFIGURATION TWO:

In Configuration Two, we have a host with two Host Bus Adapters. HBA1 is attached to a switch, and from there the host is zoned and cabled to SP B. HBA2 is attached to a separate switch, and from there the host is zoned and cabled to SP A. The path from HBA2 to SP A is shown as the "Active Path" because that is the path data will take from the host to the LUN, which is owned by SP A. The path from HBA1 to SP B is shown as the "Standby Path" because the LUN doesn't belong to SP B. The only time the host would use the "Standby Path" is in the event of a LUN trespass. The advantage of Configuration Two over Configuration One is that there is no single point of failure.
Now, let's say we install PowerPath on the host. With PowerPath, the host has the potential to do two things. First, it allows the host to initiate the trespass of the LUN. With PowerPath on the host, if there is a path failure (a failed HBA, a switch down, etc.), the host will issue the trespass command to the SPs, and the SPs will move the LUN, temporarily, from SP A to SP B. The second advantage of PowerPath on a host is that it allows the host to load balance the data it sends. Again, this has nothing to do with load balancing the Clariion SPs; we will get there later. However, in Configuration Two, we only have one connection from the host to SP A. This is the only path the host has and will use to move data for this LUN.

CONFIGURATION THREE:

In Configuration Three, the hardware is the same as in Configuration Two. However, notice that we have a few more cables running from the switches to the Storage Processors. HBA1 is connected to its switch and zoned and cabled to SP A and SP B. HBA2 is connected to its switch and zoned and cabled to SP A and SP B. This gives HBA1 and HBA2 an 'Active Path' to SP A, and HBA1 and HBA2 'Standby Paths' to SP B. Because of this, the host can now route data down each active path to the Clariion, giving the host "Load Balancing" capabilities. Also, the only time a LUN should trespass from one SP to another is if there is a Storage Processor failure. If the host were to lose HBA1, it would still have HBA2 with an active path to the Clariion. The same goes for a switch failure or a connection failure.
