
CLARiiON FLARE release 29 (04.29.000.5.001) introduces support for several new features, as follows:

1) Virtualization-aware Navisphere Manager - Discovery of VMware clients was difficult in earlier releases, but FLARE 29 enables CLARiiON CX4 users and VMware administrators to reduce infrastructure reporting time from hours to minutes. Earlier releases allowed only a single IP address to be assigned to each iSCSI physical port. FLARE 29 adds the ability to define multiple virtual iSCSI ports on each physical port, along with the ability to tag each virtual port with a unique VLAN tag. VLAN tagging has also been added to the single Management Port interface. Note that IP address and VLAN tag assignments should be carefully coordinated with those supporting the network infrastructure where the storage system will operate.

2) Built-in policy-based spin down of idle SATA II drives for CLARiiON CX4 - Lowers power requirements in environments such as test and development, whether physical or virtual. Features include simple "set it and forget it" policy-based management, complete spin down of inactive drives during periods of zero I/O activity, and automatic spin up of drives after the first I/O request is received.

3) Virtual Provisioning Phase 2 - Support for MirrorView and SAN Copy replication on thin LUNs has been added.

4) Search feature – Provides users with the ability to search for a wide variety of objects across their storage systems. Objects can be either logical (e.g., LUNs) or physical (e.g., disks).

5) Replication roles - Three additional roles have been added in Navisphere: "Local Replication Only", "Replication", and "Replication/Recovery".

6) Dedicated VMware software files - VMware software files (e.g., NaviSecCLI, Navisphere Initialization Wizard) are now packaged separately from those for the Linux operating system.

7) Software filename standardization - All CLARiiON software filenames follow a standardized naming convention, beginning with FLARE release 29.

8) Changing SP IP addresses - SP IP addresses can now be changed without rebooting the SP. Only the Management Server needs to be restarted from the Setup page, which results in no storage system downtime.

9) Linux 64-bit server software – Native 64-bit Linux server software files simplify installation by eliminating the need to gather and load 32-bit libraries.

10) Solaris x64 Navisphere Host Agent – Release 29 marks the introduction of Solaris 64-bit Navisphere Host Agent software. This Host Agent is backward compatible with older FLARE releases.

I have discussed lab exercises for storage administration purposes; now let's talk about something more technical. You know that a Raid 5 protected LUN uses striping. So how do you calculate the stripe size of a LUN?


Before calculating the stripe size of data on a CLARiiON, we need to know how many disks make up the Raid Group, as well as the Raid Type.

I have two examples of the Stripe Size of a LUN. The first example is a Raid 5, five disk Raid Group. As I mentioned in an earlier post, this is referred to as 4 + 1. That means that of the five disks that make up the Raid Group, four disks' worth of capacity is used to store the data, and the remaining disk's worth is used to store the parity information for each stripe of data, in the event of a disk failure and rebuild. The CLARiiON formats each disk with an Element Size of 128 blocks, the number of blocks written to a disk before writing/striping to the next disk in the Raid Group. This is equal to a 64 KB chunk of data (128 blocks x 512 bytes per block) written to each disk before moving to the next disk. To determine the Data Stripe Size, we simply multiply the number of data disks in the Raid Group (4) by the amount of data written per disk (64 KB), giving 256 KB of data per stripe for a Raid 5, five disk (4 + 1) Raid Group. To get the Element Stripe Size, we multiply the number of data disks (4) by the number of blocks written per disk (128 blocks), giving an Element Stripe Size of 512 blocks.
The next example is another Raid 5 group, but with nine (9) disks in the Raid Group. This is referred to as 8 + 1. Again, eight (8) disks' worth of capacity is used for data, and the remaining disk's worth stores the parity information for each stripe of data.
To determine the Data Stripe Size, we multiply the number of data disks in the Raid Group (8) by the amount of data written per disk (64 KB), giving 512 KB of data per stripe for a Raid 5, nine disk (8 + 1) Raid Group. To get the Element Stripe Size, we multiply the number of data disks (8) by the number of blocks written per disk (128 blocks), giving an Element Stripe Size of 1024 blocks.
In summary: the Data Stripe Size is the amount of data written across one stripe of the Raid Group, and the Element Stripe Size is the number of blocks written across one stripe of the Raid Group.
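
To make the arithmetic concrete, here is a minimal Python sketch (my own illustration, not an EMC tool) that reproduces both examples. It assumes the CLARiiON default element size of 128 blocks (64 KB at 512 bytes per block) and that RAID 5 consumes the equivalent of one disk for parity:

    BLOCK_SIZE_BYTES = 512        # standard SCSI block size
    ELEMENT_SIZE_BLOCKS = 128     # CLARiiON default element size per disk
    ELEMENT_SIZE_KB = ELEMENT_SIZE_BLOCKS * BLOCK_SIZE_BYTES // 1024  # 64 KB

    def raid5_stripe_sizes(total_disks):
        """Return (data stripe size in KB, element stripe size in blocks)
        for a RAID 5 group; one disk's worth of capacity holds parity."""
        data_disks = total_disks - 1
        data_stripe_kb = data_disks * ELEMENT_SIZE_KB
        element_stripe_blocks = data_disks * ELEMENT_SIZE_BLOCKS
        return data_stripe_kb, element_stripe_blocks

    print(raid5_stripe_sizes(5))   # (256, 512)  -> the 4 + 1 example
    print(raid5_stripe_sizes(9))   # (512, 1024) -> the 8 + 1 example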

Let's discuss LUNz/LUN_Z in the operating system, especially in a CLARiiON environment. First, what is a LUN? A LUN (Logical Unit Number) is simply a logical slice of disk. The terminology comes from the SCSI-3 standards; if you want to know more, visit www.t10.org and www.t11.org.

LUN_Z is a SCSI-3 (SCC-2) term defined as "the logical unit number that an application client uses to communicate with, configure and determine information about an SCSI storage array and the logical units attached to it. The LUN_Z value shall be zero." In the CLARiiON context, LUNz refers to a fake logical unit zero presented to the host to provide a path for host software to send configuration commands to the array when no physical logical unit zero is available to the host. When Access Logix is used on a CLARiiON array, an agent runs on the host and communicates with the storage system through either LUNz or a storage device. On a CLARiiON array, the LUNz device is replaced once a valid LUN is assigned as host LUN (HLU) 0 through the Storage Group; the agent then communicates through that storage device. The user will continue, however, to see the DGC LUNz entry in Device Manager.
LUNz has been implemented on CLARiiON arrays to make arrays visible to the host OS and PowerPath when no LUNs are bound on that array. In a direct connect configuration, where there is no Navisphere management station to talk directly to the array over IP, LUNz can be used as a pathway for Navisphere CLI to send bind commands to the array.
LUNz also makes arrays visible to the host OS and PowerPath when the host's initiators have not yet logged in to the Storage Group created for the host. Without LUNz, there would be no device on the host through which Navisphere Agent could push the initiator record to the array, and this push is mandatory for the host to log in to the Storage Group. Once the initiator push is done, the host is displayed as an available host to add to the Storage Group in Navisphere Manager (or Navisphere Express).
LUNz should disappear once a LUN zero is bound, or once Storage Group access has been attained. To turn on the LUNz behavior on CLARiiON arrays, you must enable the arraycommpath setting.
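
For reference, arraycommpath is set per SP with Navisphere CLI. A hedged sketch of the commands (syntax from memory; verify against the CLI reference for your FLARE release, and note that changing failover-related settings can affect host access):

    naviseccli -h <SP_IP_address> arraycommpath 1     (enable the LUNz behavior)
    naviseccli -h <SP_IP_address> arraycommpath       (display the current setting)

Here <SP_IP_address> is a placeholder for the IP address of the SP you manage.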
