
When connecting to an ESX host for the first time, rpowermt prompts the administrator for the username and password of that ESX host. Each new ESX host managed by rpowermt generates a prompt for a username and password. The administrator is prompted for a lockbox password only once; rpowermt then securely stores the following information in a lockbox (an encrypted file):


- ESX host being accessed
- Username and password of the ESX host being accessed


Storing the hosts and passwords in a lockbox enables them to be maintained across system reboots. If a lockbox is copied from one rpowermt server to another, the user is prompted to enter the lockbox password again.


PowerPath/VE for vSphere rpowermt commands do not require root access to run the executable; however, the ESX root password is required as an entry in the lockbox. Access to the rpowermt executable is based on the native access controls of the server (Windows or RHEL) where rpowermt is installed.
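For example, a first query of an ESX host from the rpowermt server might look like the following (a minimal sketch; the host address is a placeholder, and on first contact rpowermt prompts for the ESX credentials and the lockbox password):

rpowermt display dev=all host=<ESX-host-IP-address>
rpowermt check_registration host=<ESX-host-IP-address>

The first command lists the PowerPath/VE devices and their paths on that host; the second reports the PowerPath/VE license state.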

PowerPath/VE 5.4 and supported ESX/ESXi versions

EMC PowerPath/VE 5.4:
- ESX 4.0: Yes - vCLI install only
- ESXi 4.0: Yes - vCLI install only
- ESX 4.0 U1/U2: Yes - vCLI install only
- ESXi 4.0 U1/U2: Yes - vCLI install only
- ESX 4.1: No
- ESXi 4.1: No

EMC PowerPath/VE 5.4 SP1:
- ESX 4.0: Yes - vCLI install only
- ESXi 4.0: Yes - vCLI install only
- ESX 4.0 U1/U2: Yes - Update Manager and vCLI install
- ESXi 4.0 U1/U2: Yes - Update Manager and vCLI install
- ESX 4.1: Yes - vCLI install only
- ESXi 4.1: No


Procedures:

Note: Check the ESM (EMC Support Matrix) for PowerPath/VE prerequisites before installing or upgrading.


1) vSphere hosts must be part of a DRS cluster.


2) vSphere hosts must have shared storage. If there is no shared storage, create it.


3) Place the first vSphere host in Maintenance Mode. This forces VMs to fail over to other cluster members using vMotion.


4) Install PowerPath/VE on the first vSphere host using vCLI.


Using the VMware vSphere CLI (vCLI) on the management server, install PowerPath/VE on the host that is in Maintenance Mode with the following command:



vihostupdate --server <server-IP-address> --install --bundle=/<path>/EMCPower.VMWARE.5.4.bxxx.zip


Note: Use vihostupdate.pl on Windows.



Once the command completes, verify that the package is installed with the following command:



vihostupdate --server <server-IP-address> --query

5) If necessary, make changes to the first vSphere host's claim rules (see the example commands after this procedure).


6) Exit Maintenance Mode.


7) Reboot the first vSphere host. (Wait for the host to come back online before proceeding to the next step.)


8) Place the second vSphere host in Maintenance Mode. This forces VMs to fail over to other cluster members using vMotion.


9) Install PowerPath/VE on the second vSphere host using vCLI.


10)   If necessary, make changes to the second vSphere host’s claim rules.


11)   Exit Maintenance Mode.


12)   Reboot the second vSphere host.


13)   Perform the same operations on the remaining hosts in the cluster.


14)   After PowerPath/VE installation has completed for every node in the cluster, rebalance the VMs across the cluster.
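For the claim-rule checks in steps 5 and 10, and to confirm multipathing after each reboot, commands along the following lines can be run from the vCLI/rpowermt server (a sketch for ESX/ESXi 4.x; the claim-rule namespace changed in later esxcli releases, and credentials are prompted for or passed with additional options):

esxcli --server <server-IP-address> corestorage claimrule list
rpowermt display dev=all host=<server-IP-address>

The claim-rule list should show PowerPath claiming the EMC devices, and rpowermt display should show multiple live paths per device.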


PowerPath Migration Enabler (PPME) is a host-based migration tool from EMC that allows you to migrate data between storage systems with little or no interruption to data access. The tool can be used in conjunction with other underlying technologies such as EMC Invista and Open Replicator. PPME uses the PowerPath filter drivers to provide non-disruptive or minimally disruptive migrations. Only specific host platforms are supported by PPME; check the EMC Support Matrix for supported host systems. PPME supports pseudo-to-pseudo, native-to-native, and native-to-pseudo device migrations.

Consider the following when designing and configuring PPME:

- Remote devices do not have to be the same RAID type or meta-configuration.
- Target devices must be the same size as or larger than the source (control) device.
- Target directors act as initiators in the SAN.
- Contrary to the recommendations for Open Replicator, the source device remains online during the "hot pull."
- The two storage systems involved in the migration must be connected directly or through a switch, and they must be able to communicate.
- Every port on the target array that allows access to the target device must also have access to the source device through at least one port on the source array. This can run counter to some established zoning policies.
- Since PPME with Open Replicator uses FA resources, determine whether this utility will be used in a production environment. In addition, consider FA bandwidth assessments so that appropriate throttling parameters (that is, pace or ceiling) can be set.
- The powermig throttle parameter sets the pace of an individual migration by using the pace parameter of Open Replicator:
  - A lower throttle value makes the migration faster, but may impact application I/O performance.
  - A higher throttle value makes the migration slower.
  - The default is five (the midpoint).
- When setting a ceiling to limit Open Replicator throughput for a director/port:
  - The ceiling value is set as a percentage of a director/port's total capacity.
  - The ceiling can be set for a given director, port, director and port, or all directors and ports in the Symmetrix array.
  - To set ceiling values, you must use symrcopy set ceiling directly (powermig does not provide a way to do this).
- Once the hot pull has completed, the source device can be removed or reused.
- Do not forget to clean up the zoning once you have completed migration activities.
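As a rough sketch of the resulting command flow with Open Replicator (the device names and session handle are illustrative, and exact option spellings vary by PPME version), a migration is driven end to end with powermig:

powermig setup -techType OR -src emcpower10 -tgt emcpower22
powermig sync -handle 1
powermig query -handle 1
powermig throttle -handle 1 -throttleValue 3
powermig selectTarget -handle 1
powermig commit -handle 1
powermig cleanup -handle 1

Here setup creates the migration session, sync starts the copy and keeps source and target in sync, throttle adjusts the Open Replicator pace, selectTarget switches I/O to the target, and commit/cleanup finalize the migration and release the source.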
Hope this is useful for migration planning or for selecting migration tools. I will try to explain topics such as PPME with Open Replicator and Solutions Enabler in detail in coming posts.

EMC has introduced the PowerPath Configuration Checker (PPCC) tool for customers, and I thought I would share some details about it. It is very useful for anyone using PowerPath as host failover software. It checks the existing configuration against the EMC Support Matrix and gives you a detailed report on whether your configuration follows EMC support guidelines. The tool is currently available for the Windows operating system.

It performs the following checks:

- OS version verification
- Machine architecture as per the ESM (EMC Support Matrix)
- PowerPath version
- PowerPath eFix
- PowerPath license
- License policy
- I/O timeout
- EOL and EOSL (End of Life and End of Service Life)
- HBA model
- HBA driver
- HBA firmware
- Symmetrix microcode
- Symmetrix model
- CLARiiON failover
- CLARiiON FLARE code
- CLARiiON model
- Veritas DMP version
- powermt custom


PowerPath Configuration Checker (PPCC) is a software program that verifies that a host is configured with the hardware and software required for PowerPath multipathing features (failover and load-balancing functions, licensing, and policies).

PPCC can facilitate:
1) Successful PowerPath deployments prior to and after a PowerPath installation.
2) Customer self-service for:
• Planning installations on hosts where PowerPath is not installed.
• Upgrading an existing installation.
• Troubleshooting, for example, after configuration changes are made on a host that includes PowerPath, such as the installation of new software.

PPCC supports the following user tasks:

Planning — This task applies to a host on which PowerPath has never been installed or is not currently installed. PPCC can identify the software that needs to be installed to support a specific version of PowerPath. For example, PPCC can identify the HBA and driver version that can be installed to support a specific version of PowerPath.

Upgrade — This task applies to a host on which some version of PowerPath is installed. An upgrade (or downgrade) to a different version is required. PPCC can identify components of a configuration that need to change when a different version of PowerPath is to be installed. For example, PPCC can identify the
need to change the Storage OS version.

Diagnostic — This task applies to a host on which some version of PowerPath is installed or on which configuration changes have been made to PowerPath, to the host OS, and/or to other software on the host. This is the PPCC default mode.

For all of the listed tasks, PPCC can identify what changes to make to the PowerPath configuration to ensure continued support for failover and load balancing. Similarly, if PowerPath does not appear to be operating correctly, running EMC Reports and PPCC can assist with configuration problem analysis.

PowerPath Migration Enabler is a host-based software product that enables other technologies, such as array-based replication and virtualization, to eliminate application downtime during data migrations or virtualization implementations. PowerPath Migration Enabler allows EMC Open Replicator for Symmetrix and EMC Invista customers to eliminate downtime during data migrations from EMC storage to Symmetrix, and during virtualized deployments to Invista. PowerPath Migration Enabler, which leverages the same underlying technology as PowerPath, keeps arrays in sync during Open Replicator for Symmetrix data migrations with minimal impact to host resources. It also enables seamless deployment of Invista virtualized environments by encapsulating (bringing under its control) the volumes that will be virtualized. In addition, PowerPath Migration Enabler offers the following benefits:

PowerPath Migration Enabler with Open Replicator for Symmetrix:
- Eliminates planned downtime
- Provides flexibility in the time to perform the migration

PowerPath Migration Enabler with EMC Invista:
- Eliminates planned downtime
- Eliminates the need for a data copy and additional storage for the data migration
- I/O redirection allows administrators to "preview" a deployment without committing to redirection

Let's first discuss what PowerPath software is. If you are familiar with EMC products, then you are most likely already using EMC PowerPath.
For those who are new to the storage world, it will be interesting to learn about this product, as there are only a few products in this category from other vendors, such as DMP, PV-Links, and LVM. This software is one of the most robust compared to the others, which is the reason EMC generates significant revenue from this product.

EMC PowerPath is host/server-based failover software. What does failover mean here? Failover can happen at many levels: server, HBA, fabric, and so on. If you have a fully licensed package in your environment, you get all of its capabilities. Most importantly, this software has features such as dynamic I/O load balancing and automatic failure detection, which are missing in other products. In short, EMC PowerPath lets you build a highly available configuration. The EMC PowerPath slogan is "set it and forget it."

EMC PowerPath features a driver residing on the host above the HBA device layer. This transparent component allows PowerPath to create virtual (power) devices that provide failure-resistant and load-balanced paths to storage systems. An application needs only to reference a virtual device while PowerPath manages path allocation to the storage system.
With PowerPath, the route between server and storage system can take a complex path. One PowerPath device can include as many as 32 physical I/O paths (16 for CLARiiON), with each path presented to the operating system under a different name.
In most cases, the application must be reconfigured to use pseudo devices; otherwise, PowerPath load balancing and path failover functionality will not be available.




The following describes whether applications need to be reconfigured to use pseudo devices:
1) Windows: No (applications do not need to be reconfigured to use pseudo devices).
2) AIX: No for LVM; yes if the application does not use LVM.
3) HP-UX: No.
4) Solaris: Yes, including file system mount tables and volume managers.
5) Linux: Same as Solaris.
If you attach a new LUN to a host, PowerPath automatically detects that LUN (provided it has been presented correctly) and creates device names such as emcpower1c, emcpower2c, and so on. When you run a command such as
#powermt display dev=all
you will see device entries like emcpowerN.
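As a quick sketch of bringing that new LUN under PowerPath control on a Unix host (powermt config is not available on every platform, and an OS-level rescan may be needed first):

#powermt config
#powermt display dev=all
#powermt save

powermt config adds the newly detected paths to PowerPath, and powermt save stores the configuration so it persists across reboots.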
Hope this helps you understand why PowerPath uses pseudo devices.

What are the differences between failover modes on a CLARiiON array?

A CLARiiON array is an active/passive device and uses a LUN ownership model. In other words, when a LUN is bound it has a default owner, either SP-A or SP-B. I/O requests traveling to a port on SP-A can only reach LUNs owned by SP-A, and I/O requests traveling to a port on SP-B can only reach LUNs owned by SP-B. Different failover methods are necessary because in certain situations a host will need to access a LUN through the non-owning SP.

The following failover modes apply:

Failover Mode 0 – LUN-Based Trespass Mode

This failover mode is the default and works in conjunction with the Auto-trespass feature. Auto-trespass is a mode of operation that is set on a LUN-by-LUN basis. If Auto-trespass is enabled on the LUN, the non-owning SP will report that the LUN exists and is available for access. The LUN will trespass to the SP where the I/O request is sent. Every time the LUN is trespassed, a Unit Attention message is recorded. If Auto-trespass is disabled, the non-owning SP will report that the LUN exists but is not available for access. If an I/O request is sent to the non-owning SP, it is rejected and the LUN's ownership will not change.
Note: The combination of Failover Mode 0 and Auto-Trespass can be dangerous if the host is sending I/O requests to both SP-A and SP-B because the LUN will need to trespass to fulfill each request. This combination is most commonly seen on an HP-UX server using PV-Links. The Auto-trespass feature is enabled through the Initiator Type setting of HP-AutoTrespass. A host with no failover software should use the combination of Failover Mode 0 and Auto-trespass disabled.

Failover Mode 1 – Passive Not Ready Mode

In this mode of operation, the non-owning SP will report that all non-owned LUNs exist and are available for access. Any I/O request that is made to the non-owning SP will be rejected. A Test Unit Ready (TUR) command sent to the non-owning SP will return with a status of device not ready. This mode is similar to Failover Mode 0 with Auto-trespass disabled. Note: This mode is most commonly used with PowerPath. To a host without PowerPath that is configured with Failover Mode 1, every passive path zoned (for example, a path to SP-B for a LUN owned by SP-A) will appear to the server as Not Ready. This will show up as offline errors on a Solaris server, SC_DISK_ERR2 errors with sense bytes 0102, 0700, and 0403 on an AIX server, or buffer I/O errors on a Linux server. If PowerPath is installed, these types of messages should not occur.

Failover Mode 2 – DMP Mode

In this mode of operation, the non-owning SP will report that all non-owned LUNs exist and are available for access. This is similar to Failover Mode 0 with Auto-trespass enabled. Any I/O requests made to the non-owning SP will cause the LUN to be trespassed to the SP that is receiving the request. The difference between this mode and Auto-trespass mode is that Unit Attention messages are suppressed. Note: This mode is used for some Veritas DMP configurations on some operating systems. Because of the similarities to Auto-trespass, this mode has been known to cause "trespass storms." If a server runs a script that probes all paths to the CLARiiON (for instance, format on a Solaris server), the LUN will trespass to the non-owning SP when the I/O request is sent there. If this occurs for multiple LUNs, a significant amount of trespassing will occur.

Failover Mode 3 – Passive Always Ready Mode

In this mode of operation, the non-owning SP will report that all non-owned LUNs exist and are available for access. Any I/O requests sent to the non-owning SP will be rejected. This is similar to Failover Mode 1. However, any Test Unit Ready command sent from the server will return with a success message, even to the non-owning SP. Note: This mode is only used on AIX servers under very specific configuration parameters and has been developed to better handle a CLARiiON non-disruptive upgrade (NDU) when AIX servers are attached.

DMP With CLARiiON:-

CLARiiON arrays are active-passive devices that allow only one path at a time to be used for I/O. The path that is used for I/O is called the active or primary path. An alternate path (or secondary path) is configured for use in the event that the primary path fails. If the primary path to the array is lost, DMP automatically routes I/O over the secondary path or other available primary paths.

For active/passive disk arrays, VxVM uses the available primary path as long as it is accessible. DMP shifts I/O to the secondary path only when the primary path fails. This is called the "failover" or "standby" mode of operation for I/O. To avoid the continuous transfer of LUN ownership from one controller to the other, which results in a severe slowdown of I/O, do not access any LUN on anything other than the primary path (which could be any of the four available paths on FC4700 and CX-series arrays).

Note: DMP does not perform load balancing across paths for active-passive disk arrays.

DMP failover functionality is supported, but you should prevent any scripts or processes from using the passive paths to the CLARiiON array. This prevents DMP from causing unwanted LUN trespasses.
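To see how DMP has grouped the primary and secondary paths before chasing trespasses, standard VxVM commands can be used (a sketch; the controller name is illustrative):

#vxdmpadm listenclosure all
#vxdmpadm getsubpaths ctlr=c2

The subpath listing shows which paths DMP treats as primary and which as secondary for each CLARiiON LUN.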

To view potential trespasses, look at the ktrace (kt_std) information from SPcollect; messages similar to the following can be seen occurring with regularity.

09:07:31.995 412 820f6440 LUSM Enter LU 34 state=LU_SHUTDOWN_TRESPASS
09:07:35.970 203 820f6440 LUSM Enter LU 79 state=LU_SHUTDOWN_TRESPASS
09:07:40.028 297 820f6440 LUSM Enter LU 13 state=LU_SHUTDOWN_TRESPASS
09:07:42.840 7 820f6440 LUSM Enter LU 57 state=LU_SHUTDOWN_TRESPASS

The "Enter LU ##" is the decimal array LUN number one would see in the Navisphere Manager browser. When the messages occur, there will be no 606 trespass messages in the SP event logs. This is an indication that thetrespasses are the 'masked out' DMP trespass messages. Executing I/Os to the /dev/dsk device entry will cause this to happen.

Using the SPcollect SP_navi_getall.txt file, check the storage group listing to find out which hosts these LUNs belong to. Then obtain an EMCGrab/EMCReport from the affected hosts and look for a host-based process that could be sending I/O down the 'passive' path. Those I/Os can be caused by performance scripts, format or devfsadm commands being run, or even host monitoring software that polls all device paths.
One workaround is to install and configure EMC PowerPath. PowerPath disables the auto-trespass mode and is designed to handle I/O requests properly so that the passive path is not used unless required. This requires changing the host registration parameter "failover mode" to '1'. This failover mode is termed "explicit mode," and it resolves the type of trespass issues noted above.

Setting Failover Values for Initiators Connected to a Specific Storage System:

Navisphere Manager lets you edit or add storage system failover values for any or all of the HBA initiators that are connected to a storage system and displayed in the Connectivity Status dialog box for that storage system.

1. In the Enterprise Storage dialog box, navigate to the icon for the storage system whose failover properties you want to add or edit.
2. Right-click the storage system icon, and click Connectivity Status.
3. In the Connectivity Status dialog box, click Group Edit to open the Group Edit Initiators dialog box.
4. Select the initiators whose New Initiator Information values you want to add or change, and then add or edit the values in Initiator Type, ArrayCommPath and Failover Mode.
5. Click OK to save the settings and close the dialog box.
Navisphere updates the initiator records for the selected initiators, and registers any unregistered initiators.
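The same initiator settings can also be applied from the command line with Navisphere CLI (a sketch; the SP address and host name are placeholders, and option behavior can vary by FLARE release):

naviseccli -h <SP-IP-address> storagegroup -sethost -host server1 -failovermode 1 -arraycommpath 1

This sets Failover Mode 1 and enables ArrayCommPath for all initiators registered to that host.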

Background Verify and Trespassing

Background Verify must be run by the SP that currently owns the LUN. Trespassing is a means of transferring current ownership of a LUN from one SP to the other. Therefore, aborting a Background Verify is part of the trespass operation – it is a necessary step.

SUN/SOLARIS
___________
To display what HBAs are installed:

#prtdiag -v
#dmesg
#cat /var/adm/messages | grep -i wwn | more


To set the configuration you must carry out the following:

-changes to the /etc/system file
-HBA driver modifications
-Persistent binding (HBA and SD driver config file)
-EMC recommended changes
-Install the Sun StorEdge SAN Foundation package


Changes to /etc/system (only the 'set ...' text goes into the file; the label before the colon is just a description):

SCSI throttle: set sd:sd_max_throttle=20
Enable wide SCSI: set scsi_options=0x7F8
SCSI I/O timeout value (with PowerPath): set sd:sd_io_time=0x3c
SCSI I/O timeout value (without PowerPath): set sd:sd_io_time=0x78

Changes to HBA driver (/kernel/drv/lpfc.conf):

fcp-bind-WWNN=16
automap=2
fcp-on=1
lun-queue-depth=20
tgt-queue-depth=512
no-device-delay=1 (without PP/DMP) 0 (with PP/DMP)
xmt-que-size=256
scan-down=0
linkdown-tmo=0 (without PP/DMP) 60 (with PP/DMP)

Persistent Binding
Both the lpfc.conf and sd.conf files need to be updated. The general format is:
name="sd" parent="lpfc" target="X" lun="Y" hba="lpfcZ"
X is the target number that corresponds to the fcp_bind_WWID entry lpfcZtX.
Y is the LUN number that corresponds to the Symmetrix volume mapping on the Symmetrix port WWN, or to the HLU on the CLARiiON.
Z is the lpfc driver instance number that corresponds to the fcp_bind_WWID entry lpfcZtX.
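As an illustrative sketch only (the WWPN, target, and LUN numbers below are hypothetical, and the syntax differs between Emulex driver releases), a matching pair of binding entries might look like this:

In /kernel/drv/lpfc.conf:
fcp-bind-WWPN="50060482cafd9876:lpfc0t0";

In /kernel/drv/sd.conf:
name="sd" parent="lpfc" target="0" lun="1" hba="lpfc0";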

To discover the SAN devices
#disk;devlinks;devalias (solaris 2.6)
#devfsadm (solaris 2.8)
#/usr/sbin/update_drv -f sd (solaris 2.9 >)

Windows
To display what HBAs are installed, use the "Device Manager" admin tool.
To set the configuration you must carry out the following:
#Registry edits
#EMC recommended changes
#Install the Emulex elxcfg utility

Arbitrated loop without powerpath/ATF:-

InitLinkFlags=0x00000000 (arbitrated loop, auto-link speed)
WaitReady=45
LinkDown=45
TranslateQueueFull=1

Arbitrated loop with powerpath/ATF:

InitLinkFlags=0x00000000 (arbitrated loop, auto-link speed)
WaitReady=10
LinkDown=10

Fabric without powerpath/ATF:

InitLinkFlags=0x00000002 (fabric, auto-link speed)
WaitReady=45
LinkDown=45
TranslateQueueFull=1

Fabric with powerpath/ATF:

InitLinkFlags=0x00000002 (fabric, auto-link speed)
WaitReady=10
LinkDown=10

Modifying the EMC environment :
In the shortcut for elxcfg, add the "--emc" option to the target.

To discover the SAN devices
Control Panel -> Administrative Tools -> Computer Management -> Disk Management -> (top menu) Action -> Rescan Disks

HP

To display what HBAs are installed:

#/opt/fcms/bin/fcmsutil /dev/td# (A5158A HBA)
#/opt/fc/bin/fcutil /dev/fcs# (A6685A HBA)

On an HP system there is no additional software to install. The HP Volume Set Addressing setting must be enabled on the SAN; you can check this with the following command.

#symcfg -sid <SymmID> list -FA all (confirm that Volume Set Addressing is set to Yes)

To discover the SAN devices:
#ioscan -fnC disk (scans hardware busses for devices according to class)
#insf -e (install special device files)

AIX

To display what HBAs are installed:
#lscfg
#lscfg -v -l fcs*


To set the configuration you must carry out the following:
#List HBA WWN and entry on system
#Determine code level of OS and HBA
#Download and install EMC ODM support fileset

#Run /usr/lpp/Symmetrix/bin/emc_cfgmgr (Symmetrix) or /usr/lpp/emc/CLARiiON/bin/emc_cfgmgr (CLARiiON)

To discover the SAN devices
#/usr/lpp/EMC/Symmetrix/bin/emc_cfgmgr -v
If the above does not work, reboot the server.

Hope this serves as useful reference documentation for novice users.

LUN Management


LUN Basics

Simply stated, a LUN is a logical entity that converts raw physical disk space into logical storage space that a host server's operating system can access and use. Any computer user recognizes the logical drive letter that has been carved out of their disk drive. For example, a computer may boot from the C: drive and access file data from a different D: drive. LUNs do the same basic job. "LUNs differentiate between different chunks of disk space. A LUN is part of the address of the storage that you're presenting to a [host] server."

LUNs are created as a fundamental part of the storage provisioning process using software tools that typically accompany the particular storage platform. However, there is not a 1-to-1 ratio between drives and LUNs. Numerous LUNs can easily be carved out of a single disk drive. For example, a 500 GB drive can be partitioned into one 200 GB LUN and one 300 GB LUN, which would appear as two unique drives to the host server. Conversely, storage administrators can employ Logical Volume Manager software to combine multiple LUNs into a larger volume. Veritas Volume Manager from Symantec Corp. is just one example of this software. In actual practice, disks are first gathered into a RAID group for larger capacity and redundancy (e.g., RAID-50), and then LUNs are carved from that RAID group.

LUNs are often referred to as logical "volumes," reflecting the traditional use of "drive volume letters," such as volume C: or volume F: on your computer. But some experts warn against mixing the two terms, noting that the term "volume" is often used to denote the large volume created when multiple LUNs are combined with volume manager software. In this context, a volume may actually involve numerous LUNs and can potentially confuse storage allocation. "The 'volume' is a piece of a volume group, and the volume group is composed of multiple LUNs."
Once created, LUNs can also be shared between multiple servers. For example, a LUN might be shared between an active and standby server. If the active server fails, the standby server can immediately take over. However, it can be catastrophic for multiple servers to access the same LUN simultaneously without a means of coordinating changed blocks to ensure data integrity. Clustering software, such as a clustered volume manager, a clustered file system, a clustered application or a network file system using NFS or CIFS, is needed to coordinate data changes.

SAN zoning and masking

LUNs are the basic vehicle for delivering storage, but provisioning SAN storage isn't just a matter of creating LUNs or volumes; the SAN fabric itself must be configured so that disks and their LUNs are matched to the appropriate servers. Proper configuration helps to manage storage traffic and maintain SAN security by preventing any server from accessing any LUN.
Zoning makes it possible for devices within a Fibre Channel network to see each other. By limiting the visibility of end devices, servers (hosts) can only see and access storage devices that are placed into the same zone. In more practical terms, zoning allows certain servers to see one or more ports on a disk array. Bandwidth, and thus minimum service levels, can be reserved by dedicating certain ports to a zone or by isolating incompatible ports from one another.
Consequently, zoning is an important element of SAN security and high-availability SAN design. Zoning can typically be broken down into hard and soft zoning. With hard zoning, each device is assigned to a zone, and that assignment can never change. In soft zoning, the device assignments can be changed by the network administrator.
LUN masking adds granularity to this concept. Just because you zone a server and disk together doesn't mean that the server should be able to see all of the LUNs on that disk. Once the SAN is zoned, LUNs are masked so that each host server can only see specific LUNs. For example, suppose that a disk has two LUNs, LUN_A and LUN_B. If we zoned two servers to that disk, both servers would see both LUNs. However, we can use LUN masking to allow one server to see only LUN_A and mask the other server to see only LUN_B. Port-based LUN masking is granular to the storage array port, so any disks on a given port will be accessible to any servers on that port. Server-based LUN masking is a bit more granular where a server will see only the LUNs assigned to it, regardless of the other disks or servers connected.
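As a sketch of what this looks like on a CLARiiON (the storage group name, host name, HLU/ALU numbers, and SP address are placeholders), masking is typically done by placing LUNs and hosts into a storage group with Navisphere CLI:

naviseccli -h <SP-IP-address> storagegroup -create -gname SG_Server1
naviseccli -h <SP-IP-address> storagegroup -addhlu -gname SG_Server1 -hlu 0 -alu 25
naviseccli -h <SP-IP-address> storagegroup -connecthost -host server1 -gname SG_Server1

Only hosts connected to the storage group see array LUN 25, and they see it as host LUN 0.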

LUN scaling and performance
LUNs are based on disks, so LUN performance and reliability will vary for the same reasons. For example, a LUN carved from a Fibre Channel 15K rpm disk will perform far better than a LUN of the same size taken from a 7,200 rpm SATA disk. This is also true of LUNs based on RAID arrays, where the mirroring of a RAID-1 group may offer significantly different performance than the parity protection of a RAID-5 or RAID-6/dual parity (DP) group. Proper RAID group configuration will have a profound impact on LUN performance.
An organization may utilize hundreds or even thousands of LUNs, so the choice of storage resources has important implications for the storage administrator. Not only is it necessary to supply an application with adequate capacity (in gigabytes), but the LUN must also be drawn from disk storage with suitable characteristics. "We go through a qualification process to understand the requirements of the application that will be using the LUNs for performance, availability and cost." For example, a LUN for a mission-critical database application might be taken from a RAID-1/0 group using Tier-1 storage, while a LUN slated for a virtual tape library (VTL) or archive application would probably work with a RAID-6 group using Tier-2 or Tier-3 storage.

LUN management tools
A large enterprise array may host more than 10,000 LUNs, so software tools are absolutely vital for efficient LUN creation, manipulation and reporting. Fortunately, management tools are readily available, and almost every storage vendor provides some type of management software to accompany products ranging from direct-attached storage (DAS) devices to large enterprise arrays.
Administrators can typically opt for vendor-specific or heterogeneous tools. A data center with one storage array or a single-vendor shop would probably do well with the indigenous LUN management tool that accompanied their storage system. Multivendor shops should at least consider heterogeneous tools that allow LUN management across all of the storage platforms. Mack uses EMC ControlCenter for LUN masking and mapping, which is just one of several different heterogeneous tools available in the marketplace. While good heterogeneous tools are available, he advises caution when selecting a multiplatform tool. "Sometimes, if the tool is written by a particular vendor, it will manage 'their' LUNs the best," he says. "LUNs from the other vendors can take the back seat -- the management may not be as well integrated."
In addition to vendor support, a LUN management tool should support the entire storage provisioning process. Features should include mapping to specific array ports and masking specific host bus adapters (HBA), along with comprehensive reporting. The LUN management tool should also be able to reclaim storage that is no longer needed. Although a few LUN management products support autonomous provisioning, experts see some reluctance toward automation. "It's hard to do capacity planning when you don't have any checks and balances over provisioning," Mack says, also noting that automation can circumvent strict change control processes in an IT organization.

LUNs at work

Significant storage growth means more LUNs, which must be created and managed efficiently while minimizing errors, reining in costs and maintaining security. For Thomas Weisel Partners LLC, an investment firm based in San Francisco, storage demands have simply exploded to 80 terabytes (TB) today -- up from about 8 TB just two years ago. Storage continues to flood the organization's data center at about 2 TB to 3 TB each month depending on projects and priorities.
This aggressive growth pushed the company out of a Hitachi Data Systems (HDS) storage array and into a 3PARdata Inc. S400 system. LUN deployment starts by analyzing realistic space and performance requirements for an application. "Is it something that needs a lot of fast access, like a database or something that just needs a file share?" asks Kevin Fiore, director of engineering services at Thomas Weisel. Once requirements are evaluated, a change ticket is generated and a storage administrator provisions the resources from a RAID-5 or RAID-1 group depending on the application. Fiore emphasizes the importance of provisioning efficiency, noting that the S400's internal management tools can provision storage in just a few clicks.
Fiore also notes the importance of versatility in LUN management tools and the ability to move data. "Dynamic optimization allows me to move LUNs between disk sets," he says. Virtualization has also played an important role in LUN management. VMware has allowed Fiore to consolidate about 50 servers enterprise-wide, along with the corresponding reduction in space, power and cooling. This lets the organization manage more storage with less hardware.
LUNs getting large
As organizations deal with spiraling storage volumes, experts suggest that efficiency-enhancing features, such as automation, will become more important in future LUN management. Experts also note that virtualization and virtual environments will play a greater role in tomorrow's LUN management. For example, it's becoming more common to provision very large chunks of storage (500 GB to 1 TB or more) to virtual machines. "You might provision a few terabytes to a cluster of VMware servers, and then that storage will be provisioned out over time."
