
Let's discuss one of the most important things in a SAN environment: ZONING. Zoning is the primary fabric-level way to restrict which hosts can access which storage. We will discuss zoning in detail.

There are basically two types of zoning: hard zoning and soft zoning. But first, what is zoning?

Zoning is simply a map of host-to-device and device-to-device connectivity that is overlaid on the storage networking fabric, reducing the risk of unauthorized access. Zoning supports the grouping of hosts, switches, and storage on the SAN, limiting access between members of one zone and resources in another.

Zoning also restricts the damage from unintentional errors that can corrupt storage allocations or destabilize the network. For example, if a Microsoft Windows server is mistakenly connected to a fabric dedicated to UNIX applications, the Windows server will write header information to each visible LUN, corrupting the storage for the UNIX servers. Similarly, Fibre Channel registered state change notifications (RSCNs), which keep SAN entities apprised of configuration changes, can sometimes destabilize the fabric. Under certain circumstances, an RSCN storm will overwhelm a switch's ability to process configuration changes, affecting SAN performance and availability for all users. Zoning can limit RSCN messages to the zone affected by the change, improving overall SAN availability.

By segregating the SAN, zoning protects applications against data corruption, accidental access,
and instability. However, zoning has several drawbacks that constrain large-scale consolidated
infrastructures.

Let's first discuss the types of zoning and their pros and cons:

As mentioned earlier, zoning basically has two types (you could say three, but only two are popular in the industry).

1) Soft Zoning 2) Hard Zoning 3) Broadcast Zoning

Soft Zoning: Soft zoning uses the name server to enforce zoning; the World Wide Names (WWNs) of the elements define the configuration policy.
Pros:
- Administrators can move devices to different switch ports without manually reconfiguring
zoning, which gives the administrator major flexibility. Once you have created a zone for a device and allocated storage to the host, you can re-cable that device to any switch port without changing the zoning configuration.

Cons:
- Devices might be able to spoof the WWN and access otherwise restricted resources.
- Device WWN changes, such as the installation of a new Host Bus Adapter (HBA) card, require
policy modifications.
- Because the switch does not control data transfers, it cannot prevent incompatible HBA
devices from bypassing the Name Server and talking directly to hosts.
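
To make this concrete, here is a minimal sketch of WWN-based (soft) zoning on a Brocade switch. The alias names, zone name, configuration name, and WWNs below are made up, and exact syntax can vary slightly between Fabric OS versions:

alicreate "Host1_HBA0", "10:00:00:00:c9:aa:bb:01"      (alias for the host HBA WWN)
alicreate "ArrayPort_0", "50:06:04:82:cc:dd:ee:01"      (alias for the storage array port WWN)
zonecreate "z_host1_array", "Host1_HBA0; ArrayPort_0"   (zone containing both aliases)
cfgcreate "cfg_prod", "z_host1_array"                   (configuration containing the zone)
cfgenable "cfg_prod"                                    (activate the configuration)
cfgsave                                                 (save it to flash)

Because the members are WWNs, the host and array can be re-cabled to different switch ports and the zone still applies.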

Hard Zoning: Hard zoning uses the physical fabric port number of a switch to create zones and enforce the policy.

Pros:

- This system is easier to create and manage than a long list of element WWNs.
- Switch hardware enforces data transfers and ensures that no traffic goes between
unauthorized zone members.
- Hard zoning provides stronger enforcement of the policy (assuming physical security on the
switch is well established).

Cons:
- Moving devices to different switch ports requires policy modifications.
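
For comparison, here is a minimal sketch of port-based (hard) zoning on a Brocade switch using domain,port members. The domain, ports, and names below are made up:

zonecreate "z_port_based", "1,4; 1,15"   (members are domain 1 port 4 and domain 1 port 15)
cfgcreate "cfg_ports", "z_port_based"
cfgenable "cfg_ports"
cfgsave

Here the zone follows the physical switch ports, so if a device is re-cabled to a different port the zone definition must be edited.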

Broadcast Zoning: Broadcast zoning has a few unique characteristics:
- Only one broadcast zone is allowed per fabric.
- It isolates broadcast traffic.
- It is hardware-enforced.

If you ask me how to choose a zoning type, it depends on the SAN requirements of your data center environment. Port zoning is more secure, but you must be sure that device connections will not change; otherwise, every time a device moves to a different port you have to modify your zoning.

Soft zoning is what is generally used in the industry, but as mentioned, it has several cons. It is hard to say which one you should always use, so analyze your data center environment and choose the appropriate zoning.

Broadcast zoning is used in large environments with multiple fabric domains.

Having said that, a zone can be enforced in hardware on either port numbers or WWNs, but not both: when both port numbers and WWNs are used to specify a zone, it becomes a software-enforced zone. Hardware-enforced zoning is enforced at the Name Server level and in the ASIC. Each ASIC maintains a list of source port IDs that have permission to access any of the ports on that ASIC. Software-enforced zoning is enforced exclusively through the selective information presented to end nodes by the fabric Simple Name Server (SNS).

If you have worked with switches, you will have noticed that Cisco has the FCNS database and Brocade has the Name Server. Both serve the same purpose: storing information about the ports and devices logged into the fabric. FCNS stands for Fibre Channel Name Server.
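
For example, you can query the name server on each platform with the commands below; the output format varies by Fabric OS / SAN-OS version, so treat this only as a pointer:

On a Brocade switch:
nsshow         (devices logged in to the local switch's name server)
nsallshow      (24-bit port IDs known across the fabric)

On a Cisco MDS switch:
show fcns database    (FCIDs, WWNs, and device types registered with the fabric name server)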

There are plenty of features on the switch itself to protect your SAN environment, and each vendor ships a different set of security policies. Zoning is the most basic mechanism for securing access to your data.

Hope this info is useful for beginners. Please leave a comment if you want to know about specific topics.

Brocade Switches:
How to merge two switches with different active zone sets.

Merging Two B-series Directors and/or Switches with Different Active Zoning Configurations
Before Beginning: The following procedure is disruptive to fabric traffic.
- It requires disabling each switch and removing the effective zoning configuration at one step. Removing this configuration will stop the data flow; since this step takes only a few moments to complete, data flow should resume as soon as the new configuration is activated.
To evaluate the impact on OS platforms and applications, please refer to the ESN Topology Guide for OS platform timeout recommendations, as well as the actual configuration files of the servers, to identify their current timeout settings.

Supported Director and Switch Types
The following information on fabric merging applies to the following EMC Director and Switch types:
ED-12000B
DS-32B2
DS-16B2
DS-16B
DS-8B
NOTE: This also applies to similar OEM versions of these switch types. See the ESM for the latest switch firmware qualification prior to merging non-EMC Directors and/or Switches into an EMC SAN.

Host Requirements:
A host computer with an FTP service is required.

Merging

1. Log into the first switch via telnet or WebTools
a. Known as “sw01” for this example
b. For DS-16Bs, DS-8Bs, and comparable switch models running firmware 2.5.0d and above, default access zoning must be set to “ALLACCESS”
NOTE: This is an offline command that will interrupt data flow.
1. Issue switchdisable command
2. Issue configure command
3. Enter “y” when prompted for “Zoning Operation parameters”
4. Enter “1” when prompted for “Default Access”
5. Enter “n” for all other parameters
6. Issue switchenable command
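Example CLI Session for Setting Default Access on sw01
NOTE: This is only an illustrative sketch; the exact prompt wording differs between firmware versions.
sw01:admin> switchdisable
sw01:admin> configure
  Fabric parameters (yes, y, no, n): [no] n
  ...
  Zoning Operation parameters (yes, y, no, n): [no] y
    Default Access (0..1): [0] 1
  ...
sw01:admin> switchenable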
2. Upload the first switch (or one switch of a multi-switch fabric) configuration to a host using FTP
a. Use configupload command or use WebTools
b. Name the file “sw01_config.txt”
1. All zoning and configuration data for this switch will be located in this file.
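Example configupload Session on sw01
NOTE: This is only an illustrative sketch; the prompts differ between firmware versions, and the FTP host, user name, and password shown here are made up.
sw01:admin> configupload
Server Name or IP Address [host]: 10.10.10.5
User Name [user]: ftpuser
File Name [config.txt]: sw01_config.txt
Password: ********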
3. Log into the second switch via telnet or WebTools
a. Known as “sw02” for this example
b. For DS-16Bs, DS-8Bs, and comparable switch models running firmware 2.5.0d and above, default access zoning must be set to “ALLACCESS”
NOTE: This is an offline command that will interrupt data flow.
1. Issue switchdisable command
2. Issue configure command
3. Enter “y” when prompted for “Zoning Operation parameters”
4. Enter “1” when prompted for “Default Access”
5. Enter “n” for all other parameters
6. Issue switchenable command
4. Upload the switch configuration to a host using FTP
a. Use configupload command or use WebTools
b. Name the file “sw02_config.txt”
1. All zoning and configuration data for this switch will be located in this file.
5. Open both the “sw01_config.txt” and “sw02_config.txt” files in a text editor (e.g., Microsoft Word, vi, emacs)
a. The uploaded configuration contains a list of switches in the fabric, list of ISLs, list of ports, name server data, and zoning information.
b. For the purposes of merging, one need only be concerned with the zoning section of the uploaded configuration, which may be found at the end of the file. It contains zones, aliases, and defined and effective configurations.

Example sw01_config.txt Zoning Section
[Zoning]
cfg.cfg_1:zone_1
zone.zone_1:10:00:00:08:00:00:00:01
alias.HBA1:10:00:00:08:00:00:00:01
enable:cfg_1
Example sw02_config.txt Zoning Section
[Zoning]
cfg.cfg_2:zone_2
zone.zone_2:10:00:00:00:09:00:00:02
alias.HBA2:10:00:00:00:09:00:00:02
enable:cfg_2


6. Make a copy of “sw01_config.txt” and rename the copy as “configmerge.txt”
7. Copy aliases from “sw02_config.txt”
a. Highlight and copy the alias data
8. Paste aliases from “sw02_config.txt” to “configmerge.txt”
a. Paste under existing alias data in “configmerge.txt”
9. Copy zones from “sw02_config.txt”
a. Highlight and copy the zone data
10. Paste zones from “sw02_config.txt” to “configmerge.txt”
a. Paste under existing zone data in “configmerge.txt”
11. Copy the zone name(s) from the “cfg.cfg” line of the “[Zoning]” section in “sw02_config.txt” to the “cfg.cfg” line in “configmerge.txt”
a. Append the zone name(s) after the existing zones, separating each zone with a semicolon
b. The last zone name is not followed by a semicolon

Example Configmerge.txt Zoning Section After Paste from sw02_config.txt
[Zoning]
cfg.cfg_1:zone_1;zone_2
zone.zone_1:10:00:00:08:00:00:00:01
zone.zone_2:10:00:00:00:09:00:00:02
alias.HBA1:10:00:00:08:00:00:00:01
alias.HBA2:10:00:00:00:09:00:00:02
enable:cfg_1


NOTE: The additions from “sw02_config.txt” are “zone_2” on the “cfg.cfg_1” line, the “zone.zone_2” line, and the “alias.HBA2” line.
12. Save changes to “configmerge.txt”
13. Download “configmerge.txt” to sw01
a. Use configdownload command or use WebTools
1. If using configdownload command, the switch must be manually disabled before downloading commences. Use the switchdisable command. After completion, the switch must be manually enabled. Use the switchenable command.
2. Using WebTools automatically disables and re-enables the switch.
b. After downloading, the newly merged configuration is automatically the effective configuration because it is already specified in the “[Zoning]” section as the enabled configuration.
14. Issue cfgsave command on sw01
a. Saves the configuration to flash
15. Issue cfgshow command to see defined and effective zoning configurations
Example Output of cfgshow Command on sw01 After Configmerge.txt is Downloaded

Defined configuration:
cfg: cfg_1 zone_1; zone_2
zone: zone_1 10:00:00:08:00:00:00:01
zone: zone_2 10:00:00:00:09:00:00:02
alias: HBA1 10:00:00:08:00:00:00:01
alias: HBA2 10:00:00:00:09:00:00:02
Effective configuration:
cfg: cfg_1
zone: zone_1 Protocol:ALL 10:00:00:08:00:00:00:01
zone: zone_2 Protocol:ALL 10:00:00:00:09:00:00:02


16. On sw02, issue the following commands to remove both defined and effective zoning configurations
a. cfgdisable
b. cfgclear
c. cfgsave
17. Issue cfgshow command to see defined and effective zoning configurations
Example Output of “cfgshow” Command on Second Switch After Removing the Configuration
Defined configuration:
no configuration defined
Effective configuration:
no configuration in effect
18. Connect the switches via a fiber optic cable to the ports chosen to be E_ports.
a. sw02 will inherit the zoning data from sw01 when they exchange fabric parameters.
NOTE: Be sure to check that both switches have unique Domain IDs, and that fabric parameters such as E_D_TOV, R_A_TOV, Data Field Size, and Core Switch PID are identical.
19. Issue cfgshow command on second switch to see defined and effective zoning configurations.
Example Output of cfgshow Command on sw02 After Fabric Merge

Defined configuration:
cfg: cfg_1 zone_1; zone_2
zone: zone_1 10:00:00:08:00:00:00:01
zone: zone_2 10:00:00:00:09:00:00:02
alias: HBA1 10:00:00:08:00:00:00:01
alias: HBA2 10:00:00:00:09:00:00:02
Effective configuration:
cfg: cfg_1
zone: zone_1 Protocol:ALL 10:00:00:08:00:00:00:01
zone: zone_2 Protocol:ALL 10:00:00:00:09:00:00:02


NOTE: Zoning configurations on both switches are now identical.
20. Issue switchshow and fabricshow commands to verify a successful fabric merge
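Example Verification on sw01 (illustrative only; output columns vary by firmware version)
sw01:admin> fabricshow    (both switches, sw01 and sw02, should be listed in a single fabric)
sw01:admin> switchshow    (the ISL port should report an E-Port connection to the other switch)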

Hope this info helps you replace a switch in your environment or merge fabrics.

There are different types of SAN, such as IP SAN, NAS over SAN, and so on. Here we will discuss Fibre Channel SAN. It gives you more options to manage storage and minimize downtime, which in turn reduces cost for the company.

In traditional storage environments, physical interfaces to storage consisted of parallel SCSI channels supporting a small number of SCSI devices. Fibre Channel provides a means to implement robust storage area networks that may consist of hundreds of devices. Fibre Channel storage area networks support high-bandwidth storage traffic on the order of 100 MB/s, and enhancements to the Fibre Channel standard will support even higher bandwidth in the near future.

Depending on the implementation, several different components can be used to build a Fibre Channel storage area network. The Fibre Channel SAN consists of components such as storage subsystems, storage devices, and server systems that are attached to a Fibre Channel network using Fibre Channel adapters. Fibre Channel networks in turn may be composed of many different types of interconnect entities. Examples of interconnect entities are switches, hubs, and bridges.

There are various types of SAN implementations, so let's discuss a little about the physical view and the logical view of a SAN.

The physical view allows the physical components of a SAN to be identified and the associated
physical topology between them to be understood. Similarly, the logical view allows the relationships and associations between SAN entities to be identified and understood.

Physical View

From a physical standpoint, a SAN environment typically consists of four major classes of components. These four classes are:
· End-user platforms such as desktops and/or thin clients;
· Server systems;
· Storage devices and storage subsystems;
· Interconnect entities.
Typically, network facilities based on traditional LAN and WAN technology provide connectivity between end-user platforms and server system components. However in some cases, end-user platforms may be attached to the Fibre Channel network and may access storage devices directly. Server system components in a SAN environment can exist independently or as a cluster. As processing requirements continue to increase, computing clusters are becoming more prevalent.

We have just used a new term: cluster. Clustering is a big topic in itself, but let's get a brief idea of it here. A cluster is defined as a group of independent computers managed as a single system for higher availability, easier manageability, and greater scalability. Server system components are
interconnected using specialized cluster interconnects or open clustering technologies such as the Fibre Channel - Virtual Interface mapping. Storage subsystems are connected to server systems, to end–user platforms, and to each other using the facilities of a Fibre Channel network. The Fibre Channel network is made up of various interconnect entities that may include switches, hubs, and bridges.





Logical View

From a logical perspective, a SAN environment consists of SAN components and resources, as well as their relationships, dependencies and other associations. Relationships, dependencies, and associations between SAN components are not necessarily constrained by physical connectivity. For example, a SAN relationship may be established between a client and a
group of storage devices that are not physically co-located. Logical relationships play a key role in the management of SAN environments. Some key relationships in the SAN environment are identified below:


· Between storage subsystems and interconnect entities;
· Between storage subsystems;
· Between server systems and storage subsystems (including adapters);
· Between server systems and end-user components;
· Between storage and end-user components;
· Between server systems.


As a specific example, one type of relationship is the concept of a logical entity group. In this case, server system components and storage components are logically classified as connected components because they are both attached to the Fibre Channel network. A logical entity group forms a private virtual network or zone within the SAN environment with a specific set of
connected entities as members. Communication within each zone is restricted to its members.
In another example, where a Fibre Channel network is implemented using a switched fabric, the Fibre Channel network may further still be broken down into logically independent sections called sub-fabrics for each possible combination of data rate and class of service. Sub-fabrics are again divided into regions and extended-regions based on compatible service parameters.
Regions and extended regions can also be divided into partitions called zones for administrative purposes.

LUN Management

Posted by Diwakar

LUN Basics

Simply stated, a LUN is a logical entity that converts raw physical disk space into logical storage space that a host server's operating system can access and use. Any computer user recognizes the logical drive letter that has been carved out of their disk drive. For example, a computer may boot from the C: drive and access file data from a different D: drive. LUNs do the same basic job: LUNs differentiate between different chunks of disk space. A LUN is part of the address of the storage that you're presenting to a [host] server.

LUNs are created as a fundamental part of the storage provisioning process using software tools that typically accompany the particular storage platform. However, there is not a 1-to-1 ratio between drives and LUNs. Numerous LUNs can easily be carved out of a single disk drive. For example, a 500 GB drive can be partitioned into one 200 GB LUN and one 300 GB LUN, which would appear as two unique drives to the host server. Conversely, storage administrators can employ Logical Volume Manager software to combine multiple LUNs into a larger volume. Veritas Volume Manager from Symantec Corp. is just one example of this software. In actual practice, disks are first gathered into a RAID group for larger capacity and redundancy (e.g., RAID-50), and then LUNs are carved from that RAID group.
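
To make the "combine multiple LUNs into a larger volume" idea concrete, here is a rough sketch using Linux LVM rather than Veritas Volume Manager; the device names and sizes are made up:

pvcreate /dev/sdb /dev/sdc             (mark two LUNs as LVM physical volumes)
vgcreate datavg /dev/sdb /dev/sdc      (combine them into one volume group)
lvcreate -L 400G -n datalv datavg      (carve a 400 GB logical volume from the group)
mkfs.ext4 /dev/datavg/datalv           (create a filesystem on the new volume)

The host now sees a single 400 GB volume even though it is backed by two separate LUNs.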

LUNs are often referred to as logical "volumes," reflecting the traditional use of "drive volume letters," such as volume C: or volume F: on your computer. But some experts warn against mixing the two terms, noting that the term "volume" is often used to denote the large volume created when multiple LUNs are combined with volume manager software. In this context, a volume may actually involve numerous LUNs and can potentially confuse storage allocation: the "volume" is a piece of a volume group, and the volume group is composed of multiple LUNs.
Once created, LUNs can also be shared between multiple servers. For example, a LUN might be shared between an active and standby server. If the active server fails, the standby server can immediately take over. However, it can be catastrophic for multiple servers to access the same LUN simultaneously without a means of coordinating changed blocks to ensure data integrity. Clustering software, such as a clustered volume manager, a clustered file system, a clustered application or a network file system using NFS or CIFS, is needed to coordinate data changes.

SAN zoning and masking

LUNs are the basic vehicle for delivering storage, but provisioning SAN storage isn't just a matter of creating LUNs or volumes; the SAN fabric itself must be configured so that disks and their LUNs are matched to the appropriate servers. Proper configuration helps to manage storage traffic and maintain SAN security by preventing any server from accessing any LUN.
Zoning controls which devices within a Fibre Channel network can see each other. By limiting the visibility of end devices, servers (hosts) can only see and access storage devices that are placed into the same zone. In more practical terms, zoning allows certain servers to see one or more ports on a disk array. Bandwidth, and thus minimum service levels, can be reserved by dedicating certain ports to a zone or by isolating incompatible ports from one another.
Consequently, zoning is an important element of SAN security and high-availability SAN design. Zoning can typically be broken down into hard and soft zoning. With hard zoning, each device is assigned to a zone, and that assignment can never change. In soft zoning, the device assignments can be changed by the network administrator.
LUN masking adds granularity to this concept. Just because you zone a server and disk together doesn't mean that the server should be able to see all of the LUNs on that disk. Once the SAN is zoned, LUNs are masked so that each host server can only see specific LUNs. For example, suppose that a disk has two LUNs, LUN_A and LUN_B. If we zoned two servers to that disk, both servers would see both LUNs. However, we can use LUN masking to allow one server to see only LUN_A and mask the other server to see only LUN_B. Port-based LUN masking is granular to the storage array port, so any disks on a given port will be accessible to any servers on that port. Server-based LUN masking is a bit more granular where a server will see only the LUNs assigned to it, regardless of the other disks or servers connected.

LUN scaling and performance
LUNs are based on disks, so LUN performance and reliability will vary for the same reasons. For example, a LUN carved from a Fibre Channel 15K rpm disk will perform far better than a LUN of the same size taken from a 7,200 rpm SATA disk. This is also true of LUNs based on RAID arrays, where the striping of a RAID-0 group may offer significantly different performance than the parity protection of a RAID-5 or RAID-6/dual parity (DP) group. Proper RAID group configuration will have a profound impact on LUN performance.
An organization may utilize hundreds or even thousands of LUNs, so the choice of storage resources has important implications for the storage administrator. Not only is it necessary to supply an application with adequate capacity (in gigabytes), but the LUN must also be drawn from disk storage with suitable characteristics: "We go through a qualification process to understand the requirements of the application that will be using the LUNs for performance, availability and cost." For example, a LUN for a mission-critical database application might be taken from a RAID-0 group using Tier-1 storage, while a LUN slated for a virtual tape library (VTL) or archive application would probably work with a RAID-6 group using Tier-2 or Tier-3 storage.

LUN management tools
A large enterprise array may host more than 10,000 LUNs, so software tools are absolutely vital for efficient LUN creation, manipulation and reporting. Fortunately, management tools are readily available, and almost every storage vendor provides some type of management software to accompany products ranging from direct-attached storage (DAS) devices to large enterprise arrays.
Administrators can typically opt for vendor-specific or heterogeneous tools. A data center with one storage array or a single-vendor shop would probably do well with the indigenous LUN management tool that accompanied their storage system. Multivendor shops should at least consider heterogeneous tools that allow LUN management across all of the storage platforms. One storage administrator, Mack, uses EMC ControlCenter for LUN masking and mapping, which is just one of several different heterogeneous tools available in the marketplace. While good heterogeneous tools are available, he advises caution when selecting a multiplatform tool. "Sometimes, if the tool is written by a particular vendor, it will manage 'their' LUNs the best," he says. "LUNs from the other vendors can take the back seat -- the management may not be as well integrated."
In addition to vendor support, a LUN management tool should support the entire storage provisioning process. Features should include mapping to specific array ports and masking specific host bus adapters (HBA), along with comprehensive reporting. The LUN management tool should also be able to reclaim storage that is no longer needed. Although a few LUN management products support autonomous provisioning, experts see some reluctance toward automation. "It's hard to do capacity planning when you don't have any checks and balances over provisioning," Mack says, also noting that automation can circumvent strict change control processes in an IT organization.

LUNs at work

Significant storage growth means more LUNs, which must be created and managed efficiently while minimizing errors, reining in costs and maintaining security. For Thomas Weisel Partners LLC, an investment firm based in San Francisco, storage demands have simply exploded to 80 terabytes (TB) today -- up from about 8 TB just two years ago. Storage continues to flood the organization's data center at about 2 TB to 3 TB each month depending on projects and priorities.
This aggressive growth pushed the company out of a Hitachi Data Systems (HDS) storage array and into a 3PARdata Inc. S400 system. LUN deployment starts by analyzing realistic space and performance requirements for an application. "Is it something that needs a lot of fast access, like a database or something that just needs a file share?" asks Kevin Fiore, director of engineering services at Thomas Weisel. Once requirements are evaluated, a change ticket is generated and a storage administrator provisions the resources from a RAID-5 or RAID-1 group depending on the application. Fiore emphasizes the importance of provisioning efficiency, noting that the S400's internal management tools can provision storage in just a few clicks.
Fiore also notes the importance of versatility in LUN management tools and the ability to move data. "Dynamic optimization allows me to move LUNs between disk sets," he says. Virtualization has also played an important role in LUN management. VMware has allowed Fiore to consolidate about 50 servers enterprise-wide, along with the corresponding reduction in space, power and cooling. This lets the organization manage more storage with less hardware.
LUNs getting large
As organizations deal with spiraling storage volumes, experts suggest that efficiency-enhancing features, such as automation, will become more important in future LUN management. Experts also note that virtualization and virtual environments will play a greater role in tomorrow's LUN management. For example, it's becoming more common to provision very large chunks of storage (500 GB to 1 TB or more) to virtual machines. "You might provision a few terabytes to a cluster of VMware servers, and then that storage will be provisioned out over time."

You have two fabrics running off of two switches and you'd like to make them one fabric. How do you do that? For the most part, it's simply a matter of connecting the two switches via E_Ports.

Before doing that, however, realize there are several factors that can prevent them from merging:

  1. Incompatible operating parameters such as RA_TOV and ED_TOV
  2. Duplicate domain IDs.
  3. Incompatible zoning configurations
  4. No principal switch (priority set to 255 on all switches)
  5. No response from the switch (hello sent every 30 seconds)

To avoid the issues above:

  1. Check IPs on all Service Processors and switches; deconflict as necessary.
  2. Ensure that all switches have unique domain ids.
  3. Ensure that operating parameters are the same.
  4. Ensure there aren't any zoning conflicts in the fabric (port zones, etc).

Once that's done:

  1. Physically link the switches
  2. View the active zone set to ensure the merge happens.
  3. Save the active zone set
  4. Activate the new zone set.
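
As a rough command-level sketch of those checks on Brocade switches (the commands exist on Fabric OS, but exact output varies by firmware version):

switchshow      (confirm each switch's domain ID and state before cabling)
configshow      (compare fabric parameters such as E_D_TOV and R_A_TOV on both switches)
cfgshow         (review the defined and effective zoning on each side)
(now connect the ISL between the switches)
fabricshow      (confirm both switches appear in a single fabric)
cfgshow         (verify the merged zone set is the effective configuration)
cfgsave         (save the active configuration to flash)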

About Me

Sr. Solutions Architect. Expertise: Cloud Design & Architecture, Data Center Consolidation, DC/Storage Virtualization, Technology Refresh, Data Migration, SAN Refresh, Data Center Architecture. More info: diwakar@emcstorageinfo.com
Blog Disclaimer: “The opinions expressed here are my personal opinions. Content published here is not read or approved in advance by EMC and does not necessarily reflect the views and opinions of EMC.”