3) Virtual Provisioning Phase 2 - Support for MirrorView and SAN Copy replication on thin LUNs has been added.
What is “Tier 0” in Storage Environments?
Tier "0" is not new in storage market but for implementation purposes it has been difficult to accommodate because it requires best performance and lowest latency. Enterprise Flash disks (Solid State Disks) capable to meet this requirement. It is possible to get more performance for company most critical applications. The performance can be gained through using Flash drives supported in VMAX and DMX-4 systems. Read More →
I have already discussed lab exercises for storage administration purposes; now let's talk about something technical. You know that we can create a RAID 5 protected LUN, which means we use striping. So how do you calculate the stripe size of a LUN?
Before calculating the stripe size of data on a CLARiiON, we need to know how many disks make up the RAID Group, as well as the RAID type.
Take a RAID 5 (4 + 1) group as the first example. Of the five disks that make up the RAID Group, four are used to store data, and the remaining disk stores the parity information for the stripe, used to rebuild the data in the event of a disk failure. The CLARiiON formats each disk with an Element Size of 128 blocks, the number of blocks written to one disk before writing/striping to the next disk in the RAID Group, which is equal to a 64 KB chunk of data written per disk.

To determine the Data Stripe Size, we simply multiply the number of data disks in the RAID Group (4) by the amount of data written per disk (64 KB), which gives 256 KB of data per stripe for a RAID 5, five-disk (4 + 1) RAID Group. To get the Element Stripe Size, we multiply the number of data disks (4) by the number of blocks written per disk (128 blocks), giving an Element Stripe Size of 512 blocks.

The next example is another RAID 5 group, but with nine (9) disks in the RAID Group. This is referred to as 8 + 1: again, eight (8) disks hold data, and the remaining disk stores the parity information for the stripe.

To determine the Data Stripe Size, we multiply the number of data disks in the RAID Group (8) by the amount of data written per disk (64 KB), which gives 512 KB of data per stripe for a RAID 5, nine-disk (8 + 1) RAID Group. To get the Element Stripe Size, we multiply the number of data disks (8) by the number of blocks written per disk (128 blocks), giving an Element Stripe Size of 1024 blocks.

In summary: the Data Stripe Size is the amount of data written to a stripe of the RAID Group, and the Element Stripe Size is the number of blocks written to a stripe of the RAID Group.
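The same arithmetic is easy to put into a few lines of code. The Python sketch below simply restates the formulas above (it does not query the array) and assumes the default CLARiiON element size of 128 blocks, i.e. 64 KB per disk:

    # Stripe-size arithmetic for a CLARiiON RAID 5 group, as described above.
    ELEMENT_SIZE_BLOCKS = 128        # blocks written per disk before striping to the next disk
    BLOCK_SIZE_BYTES = 512           # standard block size
    CHUNK_SIZE_KB = ELEMENT_SIZE_BLOCKS * BLOCK_SIZE_BYTES // 1024   # 64 KB per disk

    def raid5_stripe_sizes(total_disks):
        """Return (data_stripe_kb, element_stripe_blocks) for a RAID 5 group.

        One disk's worth of space per stripe holds parity, so the data disks
        are total_disks - 1.
        """
        data_disks = total_disks - 1
        data_stripe_kb = data_disks * CHUNK_SIZE_KB
        element_stripe_blocks = data_disks * ELEMENT_SIZE_BLOCKS
        return data_stripe_kb, element_stripe_blocks

    print(raid5_stripe_sizes(5))   # 4 + 1 group -> (256, 512)
    print(raid5_stripe_sizes(9))   # 8 + 1 group -> (512, 1024)

Running it reproduces the two results worked out above: 256 KB and 512 blocks for the 4 + 1 group, and 512 KB and 1024 blocks for the 8 + 1 group.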
Now let's discuss LUNz/LUN_Z at the operating system level, specifically in a CLARiiON environment. First, what is a LUN? A LUN (Logical Unit Number) is simply a logical slice of disk. The terminology comes from the SCSI-3 standards; if you want to know more, visit www.t10.org and www.t11.org.
LUN_Z is a SCSI-3 (SCC-2) term defined as "the logical unit number that an application client uses to communicate with, configure and determine information about an SCSI storage array and the logical units attached to it. The LUN_Z value shall be zero." In the CLARiiON context, LUNz refers to a fake logical unit zero presented to the host to provide a path for host software to send configuration commands to the array when no physical logical unit zero is available to the host. When Access Logix is used on a CLARiiON array, an agent runs on the host and communicates with the storage system through either LUNz or a storage device. The LUNZ device is replaced once a valid LUN is assigned to a host LUN (HLU) through the Storage Group, and the agent then communicates through that storage device. The user will continue, however, to see the DGC LUNZ device in Device Manager.
LUNz has been implemented on CLARiiON arrays to make the array visible to the host OS and PowerPath when no LUNs are bound on that array. In a direct-connect configuration, where there is no Navisphere management station to talk directly to the array over IP, LUNZ can be used as a pathway for Navisphere CLI to send bind commands to the array.
LUNz also makes the array visible to the host OS and PowerPath when the host's initiators have not yet logged in to the Storage Group created for the host. Without LUNz, there would be no device on the host through which Navisphere Agent could push the initiator record to the array, and this push is required before the host can log in to the Storage Group. Once the initiator push is done, the host will be displayed as an available host to add to the Storage Group in Navisphere Manager (or Navisphere Express).
LUNz should disappear once a LUN zero is bound, or when Storage Group access has been attained. To turn on the LUNz behavior on CLARiiON arrays, you must enable the "arraycommpath" setting.
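As a rough illustration only, the sketch below shows how that setting might be toggled from a management host with Navisphere Secure CLI installed. The naviseccli flags, credentials, and scope value shown here are assumptions to verify against the CLI reference for your FLARE release, not a tested procedure:

    # Illustrative sketch only: enable/disable arraycommpath on a CLARiiON SP
    # by calling Navisphere Secure CLI. The flag names (-user/-password/-scope)
    # and the arraycommpath subcommand should be checked against your
    # naviseccli documentation before use.
    import subprocess

    def set_arraycommpath(sp_address, user, password, enable=True):
        cmd = [
            "naviseccli",
            "-h", sp_address,              # SP management IP or hostname
            "-user", user,
            "-password", password,
            "-scope", "0",                 # assumed global scope
            "arraycommpath", "1" if enable else "0",
        ]
        subprocess.run(cmd, check=True)    # raises CalledProcessError on failure

    # Example call (hypothetical SP address and credentials):
    # set_arraycommpath("10.0.0.10", "admin", "password", enable=True)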