
Registering Fibre Channel HBAs or iSCSI NICs with the storage system enables the storage system to see them. To register HBAs or NICs with the storage system, start or restart the Navisphere Agent on the host.
Microsoft Windows
To register the host’s HBAs with the storage system, start the Navisphere Agent as follows:
1. On the Windows host, right-click My Computer and select Manage.
2. Click Services and Applications and then click Services.
3. Find the EMC Navisphere Agent service.
4. If the service is already started, stop it.
5. Start the EMC Navisphere Agent service.

AIX
To register the host’s HBAs with the storage system, on the AIX host, stop and start the Navisphere Agent. For example:

# rc.agent stop
# rc.agent start

HP-UX

To register the host’s HBAs with the storage system, on the HP-UX host, stop and start the Navisphere Agent. For example:

# /sbin/init.d/agent stop
# /sbin/init.d/agent start

Linux
To register the host’s HBAs with the storage system, on the Linux host, stop and start the Navisphere Agent. For example:

# /etc/init.d/naviagent stop
# /etc/init.d/naviagent start

NetWare
To register the host’s HBAs with the storage system, on the NetWare host, restart the Navisphere Agent. In the NetWare server console screen, enter:

sys:\emc\agent\navagent.nlm -f sys:\emc\agent\agent.cfg

Solaris
To register the host’s HBAs with the storage system, on the Solaris host, stop and start the Navisphere Agent. For example:

# /etc/init.d/agent stop
# /etc/init.d/agent start

VMware ESX server 2.5.0 and later
To register the host’s HBAs with the storage system, on the VMware host, stop and start the Navisphere Agent. For example:

# /etc/init.d/agent stop
# /etc/init.d/agent start
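To confirm that registration took effect, you can query the agent from a management host with classic Navisphere CLI. This is a hedged sketch, assuming navicli is installed and can reach the SP; the SP address below is a placeholder and exact switches vary by FLARE/CLI release:

# navicli -h 10.1.1.50 getagent
# navicli -h 10.1.1.50 port -list -hba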

I have been receiving mail asking me to write about basic storage topics rather than only EMC. Here is the first basic thing to know about FC technology.

Fibre Channel is essentially just a medium to connect hosts and shared storage. When we talk about SANs, the first thing that comes to mind is Fibre Channel.

Fibre Channel is a serial data transfer interface intended for connecting shared storage to computers where the storage is not physically attached to the host.

Why is FC so important in a SAN? Because FC delivers high speed through the following process:

1) Networking and I/O protocols, such as SCSI commands, are mapped to FC constructs.
2) These are encapsulated and transported within FC frames.
3) This makes high-speed transfer of multiple protocols possible over the same physical interface.

FC operates over copper wire or optical fiber at rates up to 4 Gb/s, and up to 10 Gb/s when used as an ISL (E_Port) on supported switches.
At the same time, latency is kept very low, minimizing the delay between data requests and deliveries. For example, the latency across a typical FC switch is only a few microseconds. It is this combination of high speed and low latency that makes FC an ideal choice for time-sensitive or transactional processing environments.

These attributes also support high scalability, allowing more storage systems and servers to be interconnected. Fibre Channel also supports a variety of topologies: it can operate between two devices in a simple point-to-point mode, in an economical arbitrated loop connecting up to 126 devices, or (most commonly) in a powerful switched fabric providing simultaneous full-speed connections for many thousands of devices. Topologies and cable types can easily be mixed in the same SAN.

FC is the most important element in building a SAN; it gives us the flexibility to use protocols such as FCP, FICON, and IP-based options (iSCSI, FCIP, iFCP), and it uses block-type data transfer.

If we want to define FC: Fibre Channel is a storage area networking technology designed to interconnect hosts and shared storage systems within the enterprise. It is a high-performance, high-cost technology. iSCSI, by contrast, is an IP-based storage networking standard that has been touted for the wide range of choices it offers in both performance and price.

Fibre Channel technology is a block-based networking approach based on ANSI standard X3.230-1994 (ISO 14165-1). It specifies the interconnections and signaling needed to establish a network "fabric" between servers, switches and storage subsystems such as disk arrays or tape libraries. FC can carry virtually any kind of traffic.

However, there are some recognized disadvantages to FC. Fibre Channel has been widely criticized for its expense and complexity. A specialized HBA card is needed for each server, and each HBA must then connect to a corresponding port on a Fibre Channel switch, creating the SAN "fabric." Every combination of HBA and switch port can cost thousands of dollars for the storage organization. This is the primary reason many organizations connect only large, high-end storage systems to their SAN. Once LUNs are created in storage, they must be zoned and masked to ensure that they are accessible only to the proper servers or applications, often an onerous and error-prone procedure. These processes add complexity and costly management overhead to Fibre Channel SANs.

Vendor World Wide Names (WWN):


Twenty-four bits of the sixty-four-bit "World Wide Name" must be unique for every vendor. Below is a partial listing of those vendors most familiar to EMC with regard to Symmetrix Fibre Channel connectivity.

To decode an HBA WWN, issue the 8F command to view the WWN in the FA login table. The 24-bit IEEE vendor code (OUI) is embedded in the World Wide Name; for the common NAA format 5 WWNs it is the six hex digits following the leading "5". Note that if there is a switch connected between the FA and the host bus adapter, the name and fabric servers of the switch will also log in to the FA. Those WWNs can be decoded in the same way as the HBA WWNs.

In the following example the unique vendor code is 0060B0 (taken from the port WWN 50060B0000014932), which indicates that the attached HBA was supplied by Hewlett-Packard.

UTILITY 8F -- SCSI Adapter utility : TIME: APR/23/01 01:23:30
------------------------------------

HARD LOOP ID : 000 (ALPA=EF) LINK STATE : ONLINE: LOOP
CHIP TYPE/REV: 00/00 Q RECS TOTAL: 3449 CREDIT: 0 RCV BUFF SZ: 2048

IF FLAGS : 01/ TAGD/NO LINK/NO SYNC/NO WIDE/NO NEGO/NO SOFT/NO ENVT/NO CYLN
IF FLAGS1: 08/NO PBAY/NO H300/NO RORD/ CMSN/NO QERR/NO DQRS/NO DULT/NO SUNP
IF FLAGS2: 00/NO SMNS/NO DFDC/NO DMNQ/NO NFNG/NO ABSY/NO SQNT/NO NRSB/NO SVAS
IF FLAGS3: 00/NO SCI3/NO ..../NO ..../NO ..../NO ..../NO ..../NO ..../NO ....

FC FLAGS : 57/ ARRY/ VOLS/ HDAD/NO HDNP/ GTLO/NO PTOP/ WWN /NO VSA
FC FLAGS1: 00/NO VCM /NO CLS2/NO OVMS/NO ..../NO ..../NO ..../NO ..../NO ....
FC FLAGS2: 00/NO ..../NO ..../NO ..../NO ..../NO ..../NO ..../NO ..../NO ....
FC FLAGS3: 00/NO ..../NO ..../NO ..../NO ..../NO ..../NO ..../NO ..../NO ....

HOST SID PORT NAME (WWN) NODE NAME RCV BUF CREDIT CLASS
-----------------------------------------------------------------------------
000001 50060B0000014932 50060B0000014933 992 EE 4 3
PRLI REQ: IFN RXD
DONE.
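As a convenience, the vendor code can be pulled out of a WWN with a small script. This is a minimal sketch, assuming the two common NAA layouts (format 1, e.g. Emulex 10:00:00:00:C9:..., and formats 5/6, where the OUI follows the leading nibble); other NAA formats place the OUI differently:

#!/bin/bash
# wwn_oui.sh -- extract the 24-bit IEEE vendor code (OUI) from an FC WWN.
# Sketch only: handles NAA formats 1, 5, and 6; other formats differ.
wwn=$(echo "$1" | tr -d ':' | tr 'a-f' 'A-F')   # strip colons, uppercase
case "$wwn" in
  1*)    oui=${wwn:4:6} ;;   # NAA 1: 1000 + OUI (10000000C9... -> 0000C9, Emulex)
  5*|6*) oui=${wwn:1:6} ;;   # NAA 5/6: OUI follows the first nibble (50060B0... -> 0060B0, HP)
  *)     echo "unhandled NAA format" >&2; exit 1 ;;
esac
echo "OUI: $oui"

For example, "./wwn_oui.sh 50:06:0B:00:00:01:49:32" prints "OUI: 0060B0", which the table below resolves to Hewlett-Packard.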

The following are common HBA vendor codes:

Refer to the open systems host matrix if you need to know whether these HBAs are supported for specific hosts.

00-00-D1 (hex) ADAPTEC INCORPORATED
0000D1 (base 16) ADAPTEC INCORPORATED

00-30-D3 (hex) Agilent Technologies
0030D3 (base 16) Agilent Technologies

00-60-69 (hex) BROCADE COMMUNICATIONS SYSTEMS
006069 (base 16) BROCADE COMMUNICATIONS SYSTEMS

00-02-A5 (hex) Compaq Computer Corporation
0002A5 (base 16) Compaq Computer Corporation

00-60-48 (hex) EMC CORPORATION
006048 (base 16) EMC CORPORATION

00-00-C9 (hex) EMULEX CORPORATION
0000C9 (base 16) EMULEX CORPORATION

00-E0-24 (hex) GADZOOX NETWORKS
00E024 (base 16) GADZOOX NETWORKS

00-60-B0 (hex) HEWLETT-PACKARD CO.
0060B0 (base 16) HEWLETT-PACKARD CO.

00-50-76 (hex) IBM
005076 (base 16) IBM

00-E0-69 (hex) JAYCOR NETWORKS, INC.
00E069 (base 16) JAYCOR NETWORKS, INC.

08-00-88 (hex) MCDATA CORPORATION
080088 (base 16) MCDATA CORPORATION

08-00-0E (hex) NCR CORPORATION
08000E (base 16) NCR CORPORATION

00-E0-8B (hex) QLOGIC CORP.
00E08B (base 16) QLOGIC CORP.

00-00-6B (hex) SILICON GRAPHICS INC./MIPS
00006B (base 16) SILICON GRAPHICS INC./MIPS
00-10-9B (hex) VIXEL CORPORATION
00109B (base 16) VIXEL CORPORATION
This information will help you identify the vendor of a particular HBA from its WWN.

Brocade Switches:
How to merge two switches with different active zone sets.

Merging Two B-series Directors and/or Switches with Different Active Zoning Configurations
Before Beginning: The following procedure is disruptive to fabric traffic.
--It requires disabling the switch and removing the effective zoning configuration at one step. Removing this configuration will stop the data flow. Since this step in the procedure takes only a few moments to complete, data flow should resume as soon as the new configuration is activated.
To evaluate the impact on your OS platforms and applications, refer to the ESN Topology Guide for OS platform timeout recommendations, as well as the actual configuration files of the servers, to identify their current timeout settings.

Supported Director and Switch Types
The following information on fabric merging applies to the following EMC Director and Switch types:
ED-12000B
DS-32B2
DS-16B2
DS-16B
DS-8B
NOTE: This also applies to similar OEM versions of these switch types. See the ESM for the latest switch firmware qualification prior to merging non-EMC Directors and/or Switches into an EMC SAN.

Host Requirements:
A host computer with an FTP service is required.

Merging

1. Log into the first switch via telnet or WebTools
a. Known as “sw01” for this example
b. For DS-16Bs, DS-8Bs, and comparable switch models running firmware 2.5.0d and above, default access zoning must be set to “ALLACCESS”
NOTE: This is an offline command that will interrupt data flow.
1. Issue switchdisable command
2. Issue configure command
3. Enter “y” when prompted for “Zoning Operation parameters”
4. Enter “1” when prompted for “Default Access”
5. Enter “n” for all other parameters
6. Issue switchenable command
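For reference, the default-access change in step 1b looks roughly like the following console session (the exact configure prompts vary by firmware version):

sw01:admin> switchdisable
sw01:admin> configure
(answer "y" at "Zoning Operation parameters", "1" at "Default Access", and "n" for all other parameters)
sw01:admin> switchenable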
2. Upload the first switch (or one switch of a multi-switch fabric) configuration to a host using FTP
a. Use configupload command or use WebTools
b. Name the file “sw01_config.txt”
1. All zoning and configuration data for this switch will be located in this file.
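Example configupload Dialog on sw01 (prompts vary by firmware version; the FTP server address and credentials below are placeholders):

sw01:admin> configupload
Server Name or IP Address [host]: 10.1.1.5
User Name [user]: ftpuser
File Name [config.txt]: sw01_config.txt
Password: ********
upload complete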
3. Log into the second switch via telnet or WebTools
a. Known as “sw02” for this example
b. For DS-16Bs, DS-8Bs, and comparable switch models running firmware 2.5.0d and above, default access zoning must be set to “ALLACCESS”
NOTE: This is an offline command that will interrupt data flow.
1. Issue switchdisable command
2. Issue configure command
3. Enter “y” when prompted for “Zoning Operation parameters”
4. Enter “1” when prompted for “Default Access”
5. Enter “n” for all other parameters
6. Issue switchenable command
4. Upload the switch configuration to a host using FTP
a. Use configupload command or use WebTools
b. Name the file “sw02_config.txt”
1. All zoning and configuration data for this switch will be located in this file.
5. Open both the “sw01_config.txt” and “sw02_config.txt” files in a plain-text editor (e.g., vi, emacs, or Notepad)
a. The uploaded configuration contains a list of switches in the fabric, list of ISLs, list of ports, name server data, and zoning information.
b. For the purposes of merging, one need only be concerned with the zoning section of the uploaded configuration, which may be found at the end of the file. It contains zones, aliases, and defined and effective configurations.

Example sw01_config.txt Zoning Section
[Zoning]
cfg.cfg_1:zone_1
zone.zone_1:10:00:00:08:00:00:00:01
alias.HBA1:10:00:00:08:00:00:00:01
enable:cfg_1
Example sw02_config.txt Zoning Section
[Zoning]
cfg.cfg_2:zone_2
zone.zone_2:10:00:00:00:09:00:00:02
alias.HBA2:10:00:00:00:09:00:00:02
enable:cfg_2


6. Make a copy of “sw01_config.txt” and rename the copy as “configmerge.txt”
7. Copy aliases from “sw02_config.txt”
a. Highlight and copy the alias data
8. Paste aliases from “sw02_config.txt” to “configmerge.txt”
a. Paste under existing alias data in “configmerge.txt”
9. Copy zones from “sw02_config.txt”
a. Highlight and copy the zone data
10. Paste zones from “sw02_config.txt” to “configmerge.txt”
a. Paste under existing zone data in “configmerge.txt”
11. Copy zone names from “cfg.cfg” line of “[Zoning]” section from “sw02_config.txt” to “configmerge.txt”
a. Append the zone name(s) to the “cfg.cfg” line after the existing zones, separating each zone with a semicolon
b. The last zone name will not be followed by a semicolon

Example Configmerge.txt Zoning Section After Paste from sw02_config.txt
[Zoning]
cfg.cfg_1:zone_1;zone_2
zone.zone_1:10:00:00:08:00:00:00:01
zone.zone_2:10:00:00:00:09:00:00:02
alias.HBA1:10:00:00:08:00:00:00:01
alias.HBA2:10:00:00:00:09:00:00:02
enable:cfg_1


NOTE: The zone_2 and HBA2 lines above illustrate the additions from “sw02_config.txt”
12. Save changes to “configmerge.txt”
13. Download “configmerge.txt” to sw01
a. Use configdownload command or use WebTools
1. If using configdownload command, the switch must be manually disabled before downloading commences. Use the switchdisable command. After completion, the switch must be manually enabled. Use the switchenable command.
2. Using WebTools automatically disables and re-enables the switch.
b. After downloading, the newly merged configuration is automatically the effective configuration because it is already specified in the “[Zoning]” section as the enabled configuration.
14. Issue cfgsave command on sw01
a. Saves the configuration to flash
15. Issue cfgshow command to see defined and effective zoning configurations
Example Output of cfgshow Command on sw01 After Configmerge.txt is Downloaded

Defined configuration:
cfg: cfg_1 zone_1; zone_2
zone: zone_1 10:00:00:08:00:00:00:01
zone: zone_2 10:00:00:00:09:00:00:02
alias: HBA1 10:00:00:08:00:00:00:01
alias: HBA2 10:00:00:00:09:00:00:02
Effective configuration:
cfg: cfg_1
zone: zone_1 Protocol:ALL 10:00:00:08:00:00:00:01
zone: zone_2 Protocol:ALL 10:00:00:00:09:00:00:02


16. On sw02, issue the following commands to remove both defined and effective zoning configurations
a. cfgdisable
b. cfgclear
c. cfgsave
17. Issue cfgshow command to see defined and effective zoning configurations
Example Output of “cfgshow” Command on Second Switch After Removing the Configuration
Defined configuration:
no configuration defined
Effective configuration:
no configuration in effect
18. Connect the switches via a fiber optic cable to the ports chosen to be E_ports.
a. sw02 will inherit the zoning data from sw01 when they exchange fabric parameters.
NOTE: Be sure to check that both switches have unique Domain IDs, and that fabric parameters such as ED_TOV, RA_TOV, Data Field Size, and Core Switch PID are identical.
19. Issue cfgshow command on second switch to see defined and effective zoning configurations.
Example Output of cfgshow Command on sw02 After Fabric Merge

Defined configuration:
cfg: cfg_1 zone_1; zone_2
zone: zone_1 10:00:00:08:00:00:00:01
zone: zone_2 10:00:00:00:09:00:00:02
alias: HBA1 10:00:00:08:00:00:00:01
alias: HBA2 10:00:00:00:09:00:00:02
Effective configuration:
cfg: cfg_1
zone: zone_1 Protocol:ALL 10:00:00:08:00:00:00:01
zone: zone_2 Protocol:ALL 10:00:00:00:09:00:00:02


NOTE: Zoning configurations on both switches are now identical.
20. Issue switchshow and fabricshow commands to verify a successful fabric merge

Hope this info will help you replace a switch in your environment or merge two fabrics.

Let's first discuss what PowerPath software is. If you are familiar with EMC products, then you will definitely be using EMC PowerPath software.
For those who are new to the storage world, it will be interesting to learn about this product, as there are only a few products in this category from other vendors, such as DMP, PV-Links, and LVM. This software is one of the most robust compared to the others, which is the reason EMC generates significant revenue from this product.

EMC PowerPath is host/server-based failover software. What does failover mean here? The failing component can be anything: a server, an HBA, a fabric, and so on. If you have the fully licensed package in your environment, you get the complete capability set. Last but not least, this software has strong features like dynamic I/O load balancing and automatic failure detection, which are missing in other products. In short, EMC PowerPath lets you build an HA configuration. The EMC PowerPath slogan is "set it and forget it."

EMC PowerPath features a driver residing on the host above the HBA device layer. This transparent component allows PowerPath to create virtual (power) devices that provide failure-resistant and load-balanced paths to storage systems. An application needs only to reference a virtual device while PowerPath manages path allocation to the storage system.
With PowerPath, the route between server and storage system can assume a complex path. One PowerPath device can include as many as 32 physical I/O paths (16 for CLARiiON), with each path presented to the operating system under a different name.
In most cases, the application must be reconfigured to use pseudo devices; otherwise PowerPath load balancing and path failover functionality will not be available.




The following describes whether applications need to be reconfigured to use pseudo devices:
1) Windows: No. (Applications do not need to be reconfigured to use pseudo devices.)
2) AIX: No for LVM; yes, if the application does not use LVM.
3) HP-UX: No.
4) Solaris: Yes, including filesystem mount tables and volume managers.
5) Linux: Same as Solaris.
If you attach a new LUN to a host, PowerPath automatically detects that LUN (provided it has been exposed correctly) and creates device names like emcpower1c, emcpower2c, and so on. When you run the following command at the CLI:
#powermt display dev=all
you will see device entries like emcpowerN (see the illustrative output below).
Hope this helps you understand why PowerPath uses pseudo devices.
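For illustration, powermt display dev=all output looks roughly like this for one pseudo device (the layout, IDs, and path names below are placeholders and vary by platform and PowerPath version):

Pseudo name=emcpower1c
Symmetrix ID=000190100123
Logical device ID=0123
state=alive; policy=SymmOpt; priority=0; queued-IOs=0
==============================================================================
---------------- Host ---------------   - Stor -  -- I/O Path -  -- Stats ---
### HW Path                 I/O Paths   Interf.   Mode    State  Q-IOs Errors
==============================================================================
3072 pci@1f,0/fibre-channel@4 c3t1d2s0  FA  7aA   active  alive      0      0
3073 pci@1f,0/fibre-channel@5 c4t1d2s0  FA  8aA   active  alive      0      0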

SUN/SOLARIS
___________
To display which HBAs are installed:

#prtdiag -v
#dmesg
#cat /var/adm/messages | grep -i wwn | more


To set the configuration you must carry out the following:

-changes to the /etc/system file
-HBA driver modifications
-Persistent binding (HBA and SD driver config file)
-EMC recommended changes
-Install the Sun StorEdge SAN Foundation package


Changes to /etc/system (add these "set" lines to the file):

set sd:sd_max_throttle=20 (SCSI throttle)
set scsi_options=0x7F8 (enable wide SCSI)
set sd:sd_io_time=0x3c (SCSI I/O timeout value, with PowerPath)
set sd:sd_io_time=0x78 (SCSI I/O timeout value, without PowerPath)

Changes to HBA driver (/kernel/drv/lpfc.conf):

fcp-bind-WWNN=16
automap=2
fcp-on=1
lun-queue-depth=20
tgt-queue-depth=512
no-device-delay=1 (without PP/DMP) 0 (with PP/DMP)
xmt-que-size=256
scan-down=0
linkdown-tmo=0 (without PP/DMP) 60 (with PP/DMP)

Persistent Binding
Both the lpfc.conf and sd.conf files need to be updated. The general format is:
name="sd" parent="lpfc" target="X" lun="Y" hba="lpfcZ"
where:
X is the target number that corresponds to the fcp-bind-WWN entry lpfcZtX.
Y is the LUN number, corresponding to the Symmetrix volume mapping on the Symmetrix port WWN, or to the HLU on the CLARiiON.
Z is the lpfc driver instance number that corresponds to the fcp-bind-WWN entry lpfcZtX.
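For example, a hypothetical binding for LUN 0 of target 1 on driver instance lpfc0 would appear in sd.conf as:

name="sd" parent="lpfc" target="1" lun="0" hba="lpfc0"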

To discover the SAN devices:
#drvconfig; disks; devlinks (Solaris 2.6)
#devfsadm (Solaris 2.8)
#/usr/sbin/update_drv -f sd (Solaris 2.9 and later)

Windows
To display which HBAs are installed, use the "Device Manager" admin tool.
To set the configuration you must carry out the following:
#Registry edits
#EMC recommended changes
#Install the Emulex elxcfg utility

Arbitrated loop without powerpath/ATF:-

InitLinkFlags=0x00000000 (arbitrated loop, auto-link speed)
WaitReady=45
LinkDown=45
TranslateQueueFull=1

Arbitrated loop with powerpath/ATF:

InitLinkFlags=0x00000000 (arbitrated loop, auto-link speed)
WaitReady=10
LinkDown=10

Fabric without powerpath/ATF:

InitLinkFlags=0x00000002 (fabric, auto-link speed)
WaitReady=45
LinkDown=45
TranslateQueueFull=1

Fabric with powerpath/ATF:

InitLinkFlags=0x00000002 (fabric, auto-link speed)
WaitReady=10
LinkDown=10

Modifying the EMC environment :
In the shortcut for the elxcfg utility, add the "--emc" option to the target field.

To discover the SAN devices
Control Panel -> Admin Tools -> Computer Management -> select Disk Management -> (top menu) Action -> Rescan Disks

HP

To display which HBAs are installed:

#/opt/fcms/bin/fcmsutil /dev/td# (A5158A HBA)
#/opt/fc/bin/fcutil /dev/fcs# (A6685A HBA)

On an HP system there is no additional software to install. The HP system's Volume Set Addressing setting must be enabled on the SAN; you can check this with the following command.

#symcfg -sid -FA all list (confirm that the volume set addressing is set to yes)

To discover the SAN devices:
#ioscan -fnC disk (scans hardware busses for devices according to class)
#insf -e (install special device files)

AIX

To display which HBAs are installed:
#lscfg
#lscfg -v -l fcs*


To set the configuration you must carry out the following:
#List HBA WWN and entry on system
#Determine code level of OS and HBA
#Download and install EMC ODM support fileset

#run /usr/lpp/Symmetrix/bin/emc_cfgmgr (Symmetrix)
#or /usr/lpp/emc/CLARiiON/bin/emc_cfgmgr (CLARiiON)

To discover the SAN devices
#/usr/lpp/EMC/Symmetrix/bin/emc_cfgmgr -v
If the above does not work, reboot the server.

Hope this will serve as useful documented info for novice users.

iSCSI details

Posted by Diwakar

I tried to collect some good information on iSCSI driver details, which was requested by a reader. Hope this helps with your iSCSI queries. Leave a comment if it is useful; I will try to write an iSCSI overview in the next entry. Happy reading!

The iSCSI driver provides a transport for SCSI requests and responses to storage devices via an IP network instead of using a direct attached SCSI bus channel or an FC connection. The SN 5400 Series Storage Router, in turn, transports these SCSI requests and responses received via the IP network between it and the storage devices attached to it. Once the iSCSI driver is installed, the host will proceed with a discovery process for storage devices as follows:
1. The iSCSI driver requests available targets through the SendTargets discovery mechanism as configured in the /etc/iscsi.conf configuration file.
2. Each iSCSI target sends available iSCSI target names to the iSCSI driver.
3. The iSCSI driver discovery daemon process looks up each discovered target
in the /etc/iscsi.bindings file. If an entry exists in the file for the target, the corresponding SCSI target ID is assigned to the target. If no entry exists for the target, the smallest available SCSI target ID is assigned and an entry is written to the /etc/iscsi.bindings file. The driver then sends a login request to the iSCSI target.
4. The iSCSI target accepts the login and sends target identifiers.
5. The iSCSI driver queries the targets for device information.
6. The targets respond with the device information.
7. The iSCSI driver creates a table of available target devices.
Once the table is completed, the iSCSI targets are available for use by the
host using all the same commands and utilities as a direct attached (e.g., via
a SCSI bus) storage device.
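A minimal /etc/iscsi.conf entry to drive this discovery sequence looks like the following sketch (the storage router IP address is a placeholder; the standard iSCSI port is 3260):

DiscoveryAddress=10.1.1.20:3260

After saving the file, start the driver with /etc/init.d/iscsi start and the discovery steps above run automatically.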


- All Linux kernels released on or before Feb 4, 2002 have a known bug in the buffer and page cache design. When any writes to a buffered block device fail, it is possible for the unwritten data to be discarded from the caches, even though the data was never written to disk. Any future reads will get the prior contents of the disk, and it is possible for applications to get no errors reported.
This occurs because block I/O write failures from the buffer cache simply mark the buffer invalid when the write fails. This leaves the buffer marked clean and invalid, and it may be
discarded from the cache at any time. Any future read either finds no existing buffer or finds the invalid buffer, so the read will fetch old data from disk and place it in the cache. If the fsync(2) function initiated the write, an error may be returned. If memory pressure on the cache initiated the write, the unwritten buffer may be discarded before fsync(2) is ever called, and in that case fsync will be unaware of the data loss, and will incorrectly report success. There is currently no reliable way for an application to ensure that data written to buffered block devices has actually been written to disk. Buffered data may be lost whenever a buffered
block I/O device fails a write. The iSCSI driver attempts to avoid this problem by retrying disk
commands for many types of failures. The MinDiskCommandTimeout defaults to "infinite", which disables the command timeout, allowing commands to be retried forever if the storage device is unreachable or unresponsive.
- All Linux kernels up to and including 2.4.20 have a bug in the SCSI device initialization code. If kernel memory is low, the initialization code can fail to allocate command blocks needed for proper operation, but will do nothing to prevent I/O from being queued to the non-functional device. If a process queues an I/O request to a SCSI device that has no command blocks allocated, that process will block forever in the kernel, never exiting and ignoring all signals sent to it while blocked. If the LUN probes initiated by the iSCSI driver are blocked forever by this problem, it will not be possible to stop or unload the iSCSI driver, since the driver code will still
be in use. In addition, any other LUN probes initiated by the iSCSI driver will also block, since any other probes will lock waiting for the probe currently in progress to finish. When the failure to allocate command blocks occurs, the kernel will log a message similar to the following:
***************************************************************
kernel: scsi_build_commandblocks: want=12, space for=0 blocks
In some cases, the following message will also be logged:
kernel: scan_scsis: DANGER, no command blocks
***************************************************************
- Linux kernels 2.2.16 through 2.2.20 and 2.4.0 through 2.4.18 are known to have a problem in the SCSI error recovery process. In some cases, a successful device reset may be ignored and the SCSI layer will continue on to the later stages of the error recovery process. The problem occurs when multiple SCSI commands for a particular device are queued in the low-level SCSI driver when a device reset occurs. Even if the low-level driver correctly reports that all the commands for the device have been completed by the reset, Linux will assume only one command has been completed and continue the error recovery process. (If only one command has timed out or failed, Linux will correctly terminate the error recovery process following
the device reset.) This action is undesirable because the later stages of error recovery may send other types of resets, which can affect other SCSI initiators using the same target or other targets on the same bus. It is also undesirable because there are more serious bugs in the later stages of the Linux SCSI error recovery process. The Linux iSCSI driver now attempts to avoid this problem by replacing the usual error recovery handler for SCSI commands that timeout or fail.
- Linux kernels 2.2.16 through 2.2.20 and 2.4.0 through 2.4.2 may take SCSI devices offline after Linux issues a reset as part of the error recovery process. Taking a device offline causes all I/O to the device to fail until the HBA driver is reloaded. After the error recovery process does a reset, it sends a SCSI Test Unit Ready command to check if the SCSI target is operational
again. If this command returns SCSI sense data, instead of correctly retrying the command, Linux will treat it as a fatal error, and immediately take the SCSI device offline.

The Test Unit Ready will almost always be returned with sense data because most targets return a deferred error in the sense data of the first command received after a reset. This is a way of telling the initiator that a reset has occurred. Therefore, the affected Linux kernel versions almost always take a SCSI device offline after a reset occurs.
This bug is fixed in Linux kernels 2.4.3 and later. The Linux iSCSI driver now attempts to avoid this problem by replacing the usual error recovery handler for SCSI commands that timeout or fail.
- Linux kernels 2.2.16 through 2.2.21 and 2.4.0 through 2.4.20 appear to have problems when SCSI commands to disk devices are completed with a check condition/unit attention containing deferred sense data. This can result in applications receiving I/O errors, short reads or short writes. The Linux SCSI code may deal with the error by giving up reading or writing the first buffer head of a command, and retrying the remainder of the I/O.
The Linux iSCSI driver attempts to avoid this problem by translating deferred sense data to current sense data for commands sent to disk devices.
- Linux kernels 2.2.16 through 2.2.21 and 2.4.0 through 2.4.20 may crash on a NULL pointer if a SCSI device is taken offline while one of the Linux kernel's I/O daemons (e.g. kpiod, kflushd, etc.) is trying to do I/O to the SCSI device. The exact cause of this problem is still being investigated.
Note that some of the other bugs in the Linux kernel's error recovery handling may result in a SCSI device being taken offline, thus triggering this bug and resulting in a Linux kernel crash.
- Linux kernels 2.2.16 through 2.2.21 running on uniprocessors may hang if a SCSI disk device node is opened while the Linux SCSI device structure for that node is still being initialized.
This occurs because the sd driver which controls SCSI disks will loop forever waiting for a device busy flag to be cleared at a certain point in the open routine for the disk device. Since this particular loop will never yield control of the processor, the process initializing the SCSI disk device is not allowed to run, and the initialization process can never clear the device busy flag which the sd driver is constantly checking.
A similar problem exists in the SCSI generic driver in some 2.4 kernel versions. The sg driver may crash on a bad pointer if a /dev/sg* device is opened while it is being
initialized.
- Linux kernels prior to 2.4.20-8 (Red Hat 9 distribution) had a rarely occurring data corruption problem. The affected data can be buffer cache data as well as raw I/O data. The problem occurs when the iSCSI driver sends the I/O request down to TCP. The Linux iSCSI driver handles this problem by copying the incoming I/O buffer temporarily into an internal buffer and then sending the copied data down to TCP. This way the iSCSI driver keeps the original data intact. In case the sent data gets corrupted (detected by turning on CRC), the driver repeats the foregoing process.
The iSCSI Driver Version 3.2.1 for Linux is compatible with SN 5400 Series Storage Routers running software version 3.x or greater. It is not compatible with SN 5400 Series Storage Routers running software versions 1.x or 2.x.
===============================================================================
CONFIGURING AND USING THE DRIVER
===============================================================================
This section describes a number of topics related to configuring and using the iSCSI Driver for Linux. The topics covered include:
Starting and Stopping the iSCSI driver
Rebooting Linux
Device Names
Auto-Mounting Filesystems
Log Messages
Dynamic Driver Reconfiguration
Target Portal Failover
iSCSI HBA Status
Using Multipath I/O Software
Making Storage Configuration Changes
Target and LUN Discovery Limits
Dynamic Target And LUN Discovery
Persistent Target Binding
Target Authentication
Editing The iscsi.conf File
iSCSI Commands and Utilities
Driver File Listing
--------------------------------------
STARTING AND STOPPING THE iSCSI DRIVER
--------------------------------------
To manually start the iSCSI driver enter:
/etc/init.d/iscsi start
The iSCSI initialization will report information on each detected
device to the console or in dmesg(8) output. For example:

********************************************************************
Vendor: SEAGATE Model: ST39103FC Rev: 0002
Type: Direct-Access ANSI SCSI revision: 02
Detected scsi disk sda at scsi0, channel 0, id 0, lun 0
SCSI device sda: hdwr sector= 512 bytes.
Sectors= 17783240 [8683 MB] [8.7 GB]
sda: sda1
********************************************************************
The directory /proc/scsi/iscsi will contain a file (the controller
number) that contains information about the iSCSI devices.

To see the iscsi devices currently available on this system, use the utility:
/usr/local/sbin/iscsi-ls -l
If there are problems loading the iSCSI kernel module, diagnostic information will be placed in /var/log/iscsi.log.
To manually stop the iSCSI driver enter:
/etc/init.d/iscsi stop
When the driver is stopped, the init.d script will attempt to kill all processes using iSCSI devices by first sending them "SIGTERM" and then by sending any remaining processes "SIGKILL". The init.d script will then unmount all iSCSI devices in /etc/fstab.iscsi and kill the iSCSI daemon, terminating all connections to iSCSI devices. It is important to note that the init.d script may not be able to successfully unmount filesystems if they are in use by processes that can't be killed. It is recommended that you manually stop all applications using the filesystem on iSCSI devices before stopping the driver. Filesystems not listed in /etc/fstab.iscsi will not be unmounted by the script and should be manually unmounted prior to a system shutdown.
It is very important to unmount all filesystems on iSCSI devices before stopping the iSCSI driver. If the iSCSI driver is stopped while iSCSI devices are mounted, buffered writes may not be committed to disk and file system corruption may occur.
---------------
REBOOTING LINUX
---------------
The Linux "reboot" command should not be used to reboot the system while iSCSI devices are mounted or being used since the "reboot" command will not execute the iSCSI shutdown script in /etc/rc6.d/ and file system corruption may occur. To safely reboot a Linux system, enter the
following command:
/sbin/shutdown -r now
All iSCSI devices should be unmounted prior to a system shutdown or reboot.
------------
DEVICE NAMES
------------
Because Linux assigns SCSI device nodes dynamically whenever a SCSI logical unit is detected, the mapping from device nodes (e.g., /dev/sda or /dev/sdb) to iSCSI targets and logical units may vary.
Variations in process scheduling and network delay may result in iSCSI targets being mapped to different SCSI device nodes every time the driver is started. Because of this variability, configuring applications or operating system utilities to use the standard SCSI device nodes to access iSCSI devices may result in SCSI commands being sent to the wrong target or logical unit.
To provide a more reliable namespace, the iSCSI driver scans the system to determine the mapping from SCSI device nodes to iSCSI targets, and then creates a tree of directories and symbolic links under /dev/iscsi to make it easier to use a particular iSCSI target's logical units.
Under /dev/iscsi, there will be a directory tree containing subdirectories for each iSCSI bus number, each target id number on the bus, and each logical unit number for each target. For
example, the whole disk device for bus 0, target id 0, LUN 0 would be /dev/iscsi/bus0/target0/lun0/disk.
In each logical unit directory there will be a symbolic link for each SCSI device node that may be connected to that particular logical unit. These symbolic links are modeled after the Linux
devfs naming convention.
The symbolic link 'disk' will map to the whole-disk SCSI device node
(e.g., /dev/sda, /dev/sdb, etc.).
The symbolic links 'part1' through 'part15' will map to each
partition of that SCSI disk (e.g., /dev/sda1, /dev/sda15, etc.).
Note that these links will exist regardless of the number of disk partitions. Opening the partition devices will result in an error if the partition does not actually exist on the disk.
The symbolic link 'mt' will map to the auto-rewind SCSI tape device node for this LUN (e.g., /dev/st0), if any. Additional links for 'mtl', 'mtm', and 'mta' will map to the other auto-rewind devices (e.g., /dev/st0l, /dev/st0m, /dev/st0a), regardless of whether these
device nodes actually exist or could be opened. The symbolic link 'mtn' will map to the no-rewind SCSI tape device node for this LUN (e.g., /dev/nst0), if any. Additional links for 'mtln', 'mtmn', and 'mtan' will map to the other no-rewind devices (e.g., /dev/nst0l, /dev/nst0m, /dev/nst0a), regardless of whether those device nodes actually exist or could be opened. The symbolic link 'cd' will map to the SCSI cdrom device node for this LUN (e.g., /dev/scd0), if any.
The symbolic link 'generic' will map to the SCSI generic device
node for this LUN (e.g., /dev/sg0), if any.
Because the symlink creation process must open all of the SCSI
device nodes in /dev in order to determine which nodes map to
iSCSI devices, you may see many modprobe messages logged to syslog
indicating that modprobe could not find a driver for a particular
combination of major and minor numbers. This is harmless, and can
be ignored. The messages occur when Linux is unable to find a
driver to associate with a SCSI device node that the iSCSI daemon
is opening as part of its symlink creation process. To prevent
these messages, the SCSI device nodes with no associated high-level
SCSI driver can be removed.
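For example, for the disk shown in the startup output earlier (bus 0, target 0, LUN 0 detected as sda), the tree would contain symlinks along these lines:

/dev/iscsi/bus0/target0/lun0/disk -> /dev/sda
/dev/iscsi/bus0/target0/lun0/part1 -> /dev/sda1
/dev/iscsi/bus0/target0/lun0/generic -> /dev/sg0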
-------------------------
AUTO-MOUNTING FILESYSTEMS
-------------------------
Filesystems installed on iSCSI devices cannot be automatically mounted at
system reboot due to the fact that the IP network is not yet configured at
mount time. However, the driver provides a method to auto-mount these
filesystems as soon as the iSCSI devices become available (i.e., after the IP
network is configured).
To auto-mount a filesystem installed on an iSCSI device, follow these steps:
1. List the iSCSI partitions to be automatically mounted in
/etc/fstab.iscsi which has the same format as /etc/fstab. The
/etc/fstab.iscsi file will not be overwritten when the driver is
installed nor will removing the current version of the driver delete
/etc/fstab.iscsi. It is left untouched during an install.
2. For each filesystem on each iscsi device(s), enter the logical volume on
which the filesystem resides. The mount points must exist for the
filesystems to be mounted. For example, the following /etc/fstab.iscsi
entries will mount the two iSCSI devices specified (sda and sdb):
*************************************************************************
#device mount FS mount backup fsck
#to mount point type options frequency pass
/dev/sda /mnt/t0 ext2 defaults 0 0
/dev/sdb /mnt/t1 ext2 defaults 0 0
*************************************************************************
3. Upon a system restart, the iSCSI startup script invokes the
iscsi-mountall script, which tries to mount the iSCSI devices listed in
the /etc/fstab.iscsi file. iscsi-mountall tries to mount the iSCSI devices
for "NUM_RETRIES" (default value 10) number of times, at an interval of
"SLEEP_INTERVAL" seconds (default value 1) between each attempt, giving
the driver the time to establish a connection with an iSCSI target.
The value of these parameters can be changed in the iscsi-mountall script
if the devices are not getting configured in the system within the
default time periods.
Due to variable network delays, targets may not always become available in the
same order from one boot to the next. Thus, the order in which iSCSI devices
are mounted may vary and may not match the order the devices are listed in
/etc/fstab.iscsi. You should not assume mounts of iSCSI devices will occur in
any particular order.
Because of the variability of the mapping between SCSI device nodes
and iSCSI targets, instead of directly mounting SCSI device nodes,
it is recommended to either mount the /dev/iscsi tree symlinks,
mount filesystem UUIDs or labels (see man pages for mke2fs, mount,
and fstab), or use logical volume management (see Linux LVM) to
avoid mounting the wrong device due to device name changes resulting
from iSCSI target configuration changes or network delays.
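For instance, an /etc/fstab.iscsi entry written against the persistent symlink tree instead of a raw device node might look like this (the mount point and filesystem type are illustrative):

/dev/iscsi/bus0/target0/lun0/part1 /mnt/t0 ext2 defaults 0 0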
------------
LOG MESSAGES
------------
The iSCSI driver contains components in the kernel and user level.
The log messages from these components are sent to syslog. Based on the
syslogd configuration on the Linux host, the messages will be sent to the
appropriate destination. For example, if /etc/syslog.conf has the following
entry:

*.info /var/log/messages
then all log messages of level 'info' or higher will be sent to
/var/log/messages.

If /etc/syslog.conf has the following entry:
*.info;kern.none /var/log/messages
then all log messages (except kernel messages) of level info or higher
will be sent to /var/log/messages.
If /etc/syslog.conf has the following entry:
kern.* /dev/console
then all kernel messages will be sent to the console.
All messages from the iSCSI driver when loading the iSCSI kernel
module will be placed in /var/log/iscsi.log.
The user can also use dmesg(8) to view the log messages.
------------------------------
DYNAMIC DRIVER RECONFIGURATION
------------------------------
Configuration changes can be made to the iSCSI driver without having to stop
it or reboot the host system. To dynamically change the configuration of the
driver, follow the steps below:
1. Edit /etc/iscsi.conf with the desired configuration changes.
2. Enter the following command:
/etc/init.d/iscsi reload
This will cause the iSCSI daemon to re-read /etc/iscsi.conf file and to
create any new DiscoveryAddress connections it finds. Those discovery
sessions will then discover targets and create new target connections.
Note that any configuration changes will not affect existing target sessions.
For example, removal of a DiscoveryAddress entry from /etc/iscsi.conf
will not cause the removal of sessions to targets discovered through this
DiscoveryAddress, but it will cause the removal of the discovery session
corresponding to the deleted DiscoveryAddress.
----------------------
TARGET PORTAL FAILOVER
----------------------
Some SN 5400 Series Storage Routers have multiple Gigabit Ethernet ports.
Those systems may be configured to allow iSCSI target access via multiple
paths. When the iSCSI driver discovers targets through a multi-port SN 5400
Series system, it also discovers all the IP addresses that can be used to
reach each of those targets.
When an existing target connection fails, the iSCSI driver will attempt to
connect to that target using the next available IP address. You can also
choose a preferred portal to which the iSCSI driver should attempt to connect
to when the iSCSI driver is started or whenever automatic portal failover
occurs. This is significant in a situation when you want the connection
to the targets to be made through a faster network portal (for example, when
the I/Os are going through a Gigabit Ethernet interface and you do not
prefer the connection to failover to a slower network interface).
The preference for portal failover can be specified through the
"PreferredPortal" or "PreferredSubnet" parameter in /etc/iscsi.conf.
If this preference is set, then on any subsequent failover the driver will
first try to failover to the preferred portal or preferred subnet whichever
is specified in the conf file. If both preferred portal and preferred subnet
entries are present in the conf file then the preferred portal takes
precedence. If the preferred portal or preferred subnet is unreachable,
then the driver will continuously rotate through the list of available
portals until it finds one that is active.
The Portal Failover feature is turned on by default and the whole process of
failover occurs automatically. You can choose to turn off portal failover
by disabling the portal failover parameter in /etc/iscsi.conf.
If a target advertises more than one network portal, you can manually
switch portals by writing to the HBA's special file in /proc/scsi/iscsi/.
For example, if a target advertises two network portals:
10.77.13.248:3260 and 192.168.250.248:3260.
If the device is configured with targetId as 0, busId as 0, HBA's host
number is 3 and you want to switch the target from
10.77.13.248 to 192.168.250.248, use the following command:
echo "target 0 0 address 192.168.250.248" > /proc/scsi/iscsi/3
Where the syntax is:
echo "target <targetId> <busId> address <ip-address>" >
/proc/scsi/iscsi/<host-number>
The host system must have multiple network interfaces to effectively
utilize this failover feature.
----------------
iSCSI HBA STATUS
----------------
The directory /proc/scsi/iscsi will contain a special file that can be
used to get status from your iSCSI HBA. The name of the file will
be the iSCSI HBA's host number, which is assigned to the driver
by Linux.

When the file is read, it will show the driver's version number,
followed by a list all iSCSI targets and LUNs the driver has found
and can use.
Each line will show the iSCSI bus number, target id number, and
logical unit number, as well as the IP address, TCP port, and
iSCSI TargetName. If an iSCSI session exists, but no LUNs have
yet been found for a target, the LUN number field will contain a
question mark. If a TCP connection is not currently established,
the IP address and port number will both appear as question marks.
----------------------------
USING MULTIPATH I/O SOFTWARE
----------------------------
If a third-party multipath I/O software application is being used in
conjunction with the iSCSI driver (e.g., HP Secure Path), it may be
necessary to modify the configuration of the driver to allow the
multi-pathing software to operate more efficiently. If you are using
a multipath I/O application, you may need to set the "ConnFailTimeout"
parameter of the iSCSI driver to a smaller value so that SCSI commands
will fail more quickly when an iSCSI network connection drops allowing
the multipath application to try a different path for access to the
storage device. Also, you may need to set the "MaxDiskCommandTimeout"
to a smaller value (e.g., 5 or 10 seconds), so that SCSI commands to
unreachable or unresponsive devices will fail more quickly and the
multipath software will know to try a different path to the storage device.
Multipath support in the iSCSI driver can be turned on by setting
Multipath=<"yes" or "portal" or "portalgroup"> in /etc/iscsi.conf.
If Multipath=<"yes" or "portal">, then the discovered targets that
are configured to allow access via multiple paths will have a separate
iSCSI session created for each path (i.e., iSCSI portal). The target
portal failover feature should not be used if Multipath=<"yes" or "portal">
since multiple sessions will be established with all available paths.
------------------------------------
MAKING STORAGE CONFIGURATION CHANGES
------------------------------------
Making changes to your storage configuration, including adding or
removing targets or LUNs, remapping targets, or modifying target
access, may change how the devices are presented to the host operating
system. This may require corresponding changes in the iSCSI driver
configuration and the /etc/fstab.iscsi file.
It is important to understand the ramifications of SCSI routing
service configuration changes on the hosts accessing the associated
storage devices. For example, changing the instance configuration
may change the device presentation to the host's iSCSI driver,
effectively changing the name or number assigned to the device
by the host operating system. Certain configuration changes,
such as adding or deleting targets, adding or deleting LUNs
within a particular target, or adding or deleting entire instances
may change the order of the devices presented to the host.
Even if the host is only associated with one SCSI routing
service instance, the device order could make a difference.
Typically, the host operating system assigns drive identifications
in the order they are received based on certain criteria. Changing
the order of the storage device discovery may result in a changed
drive identification. Applications running on the host may require
modifications to appropriately access the current drives.
If an entire SCSI routing service instance is removed, or there
are no targets available for the host, the host's iSCSI driver
configuration file must be updated to remove the appropriate
reference before restarting the iSCSI driver. If a host's iSCSI
configuration file contains an IP address of a SCSI routing
service instance that does not exist, or has no targets available
for the host, the iSCSI driver will not complete a login and
will keep on trying to discover targets associated with this SCSI
routing service instance.
In general, the following steps are normally required when reconfiguring
iSCSI storage:
1. Unmount any filesystems and stop any applications using iSCSI
devices.
2. Stop the iSCSI driver by entering:
/etc/init.d/iscsi stop
3. Make the appropriate changes to the iSCSI driver
configuration file. Remove any references to iSCSI
DiscoveryAddresses that have been removed, or that
no longer have valid targets for this host.
4. Modify /etc/fstab.iscsi and application configurations as
appropriate.
5. Restart the iSCSI driver by entering:
/etc/init.d/iscsi start
Failure to appropriately update the iSCSI configuration using
the above procedure may result in a situation that prevents
the host from accessing iSCSI storage resources.
-------------------------------
TARGET AND LUN DISCOVERY LIMITS
-------------------------------
The bus ID and target ID are assigned by the iSCSI initiator driver
whereas the lun ID is assigned by the iSCSI target. The driver provides
access to a maximum of 256 bus IDs with each bus supporting 256 targets
and each target capable of supporting 256 LUNs. Any discovered iSCSI
device will be allocated the next available target ID on bus 0.
If a target ID would exceed 256 on bus 0, the next available target ID on bus 1
will be allocated. A bus ID > 256 or a LUN ID > 256 will be ignored
by the driver and will not be configured in the system.
--------------------------------
DYNAMIC TARGET AND LUN DISCOVERY
--------------------------------
When using iSCSI targets that support long-lived iSCSI discovery sessions,
such as the Cisco 5400 Series, the driver will keep a discovery session
open waiting for change notifications from the target. When a notification
is received, the driver will rediscover targets, add any new targets, and
activate LUNs on all targets.
If a new LUN is dynamically added to an existing target on a SCSI routing
instance with which the driver has established a connection, then the driver
does not automatically activate the new LUN. The user can manually activate
the new LUN by executing the following command:
echo "scsi add-single-device " >
/proc/scsi/scsi
where;
HBA#: is the controller number present under /proc/scsi/iscsi/
bus-id: is the bus number present on controller .
target-id: is the target ID present on ,.
LUN: new LUN added dynamically to the target.
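Continuing the earlier example (HBA host number 3, bus 0, target 0), activating a newly added LUN 1 would look like:

echo "scsi add-single-device 3 0 0 1" > /proc/scsi/scsi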

-------------------------
PERSISTENT TARGET BINDING
-------------------------
This feature ensures that the same iSCSI bus and target id number are used
for every iSCSI session to a particular iSCSI TargetName, and a Linux SCSI
target always maps to the same physical storage device from one reboot to
the next.
This feature ensures that the SCSI numbers in the device symlinks described
above will always map to the same iSCSI target.
Note that because of the way Linux dynamically allocates SCSI device nodes
as SCSI devices are found, the driver does not and cannot ensure that any
particular SCSI device node (e.g., /dev/sda) will always map to the same
iSCSI TargetName. The symlinks described in the section on Device Names are
intended to provide a persistent device mapping for use by applications and
fstab files, and should be used instead of direct references to particular
SCSI device nodes.
The file /etc/iscsi.bindings is used by the iSCSI daemon to store bindings of
iSCSI target names to SCSI target IDs. If the file doesn't exist,
it will be created when the driver is started. If an entry exists for a
discovered target, the Linux target ID from the entry is assigned to the
target. If no entry exists for a discovered target, an entry is written to
the file. Each line of the file contains the following fields:
BusId TargetId TargetName
An example file would be:
*****************************************************************************
0 0 iqn.1987-05.com.cisco.00.7e9d6f942e45736be69cb65c4c22e54c.disk_one
0 1 iqn.1987-05.com.cisco.00.4d678bd82965df7765c788f3199ac15f.disk_two
0 2 iqn.1987-05.com.cisco.00.789ac4483ac9114bc6583b1c8a332d1e.disk_three
*****************************************************************************
Note that the /etc/iscsi.bindings file will permanently contain entries
for all iSCSI targets ever logged into from this host. If a target is
no longer available to a host you can manually edit the file and remove
entries so the obsolete target no longer consumes a SCSI target ID.
If you know the iSCSI target name of a target in advance, and you want
it to be assigned a particular SCSI target ID, you can add an entry
manually. You should stop the iSCSI driver before editing the
/etc/iscsi.bindings file. Be careful to keep an entire entry on a single
line, with only whitespace characters between the three fields. Do not
use a target ID number that already exists in the file.
*****************************************************************************
NOTE: iSCSI driver versions prior to 3.2 used the file /var/iscsi/bindings
instead of /etc/iscsi.bindings. The first time you start the new driver
version, it will change the location and the name of the bindings file
to /etc/iscsi.bindings
*****************************************************************************
---------------------
TARGET AUTHENTICATION
---------------------
The CHAP authentication mechanism provides for two way authentication between
the target and the initiator. The authentication feature on the SN 5400
system has to be enabled to make use of this feature. The username and
password for both initiator side and target side authentication needs to be
listed in /etc/iscsi.conf. The username and password can be specified as
global values or can be made specific to the target address. Please refer to
the Editing The iscsi.conf File section of this document for a more detailed
description of these parameters.
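As a rough sketch, per-target CHAP settings in /etc/iscsi.conf could look like the following (the usernames and secrets are placeholders; see the iscsi.conf man page for the exact scoping rules):

DiscoveryAddress=10.1.1.20
OutgoingUsername=initiator_user
OutgoingPassword=initiator_secret
IncomingUsername=target_user
IncomingPassword=target_secret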
---------------------------
EDITING THE ISCSI.CONF FILE
---------------------------
The /etc/iscsi.conf file is used to control the operation of the iSCSI driver
by allowing the user to configure the values for a number of programmable
parameters. These parameters can be setup to apply to specific configuration
types or they can be setup to apply globally. The configuration types that are
supported are:
- DiscoveryAddress = SCSI routing instance IP address with format a.b.c.d or
a.b.c.d:n, or a hostname.
- TargetName = Target name in 'iqn' or 'eui' format
eg: TargetName = iqn.1987-05.com.cisco:00.0d1d898e8d66.t0
- TargetIPAddress = Target IP address with format a.b.c.d/n
- Subnet = Network portal IP address with format a.b.c.d/n or a.b.c.d&hex
- Address = Network portal IP address with format a.b.c.d/32
The complete list of parameters that can be applied either globally or to the
configuration types listed above are shown below. Not all parameters are
applicable to all configuration types.
- Username = CHAP username used for initiator authentication by the target.
- OutgoingUsername = <>
- Password = CHAP password used for initiator authentication by the target.
- OutgoingPassword = <>
- IncomingUsername = CHAP username for target authentication by the initiator.
- IncomingPassword = CHAP password for target authentication by the initiator.
- HeaderDigest = Type of header digest support the initiator is requesting
of the target.
- DataDigest = Type of data digest support the initiator is requesting of
the target.
- PortalFailover = Enabling/disabling of target portal failover feature.
- PreferredSubnet = IP address of the subnet that should be used for a
portal failover.
- PreferredPortal = IP address of the portal that should be used for a
portal failover.
- Multipath = Enabling/disabling of multipathing feature.
- LoginTimeout = Time interval to wait for a response to a login request to
be received from a target before failing a connection
attempt.
- AuthTimeout = Time interval to wait for a response to a login request
containing authentication information to be received from a
target before failing a connection attempt.
- IdleTimeout = Time interval to wait on a connection with no traffic before
sending out a ping.
- PingTimeout = Time interval to wait for a ping response after a ping is
sent before failing a connection.
- ConnFailTimeout = Time interval to wait before failing SCSI commands back
to an application for unsuccessful commands.
- AbortTimeout = Time interval to wait for an abort command to complete
before declaring the abort command failed.
- ResetTimeout = Time interval to wait for a reset command to complete
before declaring the reset command failed.
- InitialR2T = Enabling/disabling of R2T flow control with the target.
- MaxRecvDataSegmentLength = Maximum number of bytes that the initiator can
receive in an iSCSI PDU.
- FirstBurstLength = Maximum number of bytes of unsolicited data the
initiator is allowed to send.
- MaxBurstLength = Maximum number of bytes for the SCSI payload negotiated
by initiator.
- TCPWindowSize = Maximum number of bytes that can be sent over a TCP
connection by the initiator before receiving an
acknowledgement from the target.
- Continuous = Enabling/disabling the discovery session to be kept alive.
A detailed description for each of these parameters is included in both the
man page and the included sample iscsi.conf file. Please consult these sources
for examples and more detailed programming instructions.
----------------------------
iSCSI COMMANDS AND UTILITIES
----------------------------
This section gives a description of all the commands and utilities available
with the iSCSI driver.
- "iscsi-ls" lists information about the iSCSI devices available to the
driver. Please refer to the man page for more information.
-------------------

You have two fabrics running off of two switches, and you'd like to make them one fabric. How do you do that? For the most part, it's simply a matter of connecting the two switches via E_Ports.

Before doing that, however, realize there are several factors that can prevent them from merging:

  1. Incompatible operating parameters such as RA_TOV and ED_TOV
  2. Duplicate domain IDs.
  3. Incompatible zoning configurations
  4. No principal switch (priority set to 255 on all switches)
  5. No response from the switch (hello sent every 30 seconds)

To avoid the issues above:

  1. Check IPs on all Service Processors and switches; deconflict as necessary.
  2. Ensure that all switches have unique domain ids.
  3. Ensure that operating parameters are the same.
  4. Ensure there aren't any zoning conflicts in the fabric (port zones, etc).

Once that's done:

  1. Physically link the switches
  2. View the active zone set to verify the merge happened.
  3. Save the active zone set
  4. Activate the new zone set.
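On B-series switches, the checks above map to a few familiar commands (a quick sketch; output formats vary by firmware):

switchshow (switch state and Domain ID)
configshow (fabric parameters such as ED_TOV and RA_TOV)
fabricshow (fabric membership and the principal switch)
cfgshow (defined and effective zoning configurations)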

About Me

Sr. Solutions Architect; Expertise: - Cloud Design & Architect - Data Center Consolidation - DC/Storage Virtualization - Technology Refresh - Data Migration - SAN Refresh - Data Center Architecture More info:- diwakar@emcstorageinfo.com
Blog Disclaimer: “The opinions expressed here are my personal opinions. Content published here is not read or approved in advance by EMC and does not necessarily reflect the views and opinions of EMC.”
EMC Storage Product Knowledge Sharing