
Multipathing requirements for different storage arrays:
All storage arrays: write cache must be disabled if it is not battery backed.
Topology: no single failure should cause both HBA and SP failover, especially with active-passive storage arrays.

IBM TotalStorage DS4000 family (formerly FAStT) –

The default host type (Host Type) must be LNXCL. AVT (Auto Volume Transfer) is disabled in this host mode.

HDS 99xx and 95xx families – the HDS 9500V family (Thunder) requires two host modes:
Host mode 1 – Standard
Host mode 2 – Sun Cluster
The HDS 99xx family (Lightning) and HDS TagmaStore USP require the host mode set to Netware.

EMC Symmetrix: enable the SPC-2 and SC3 settings.

EMC CLARiiON – All initiator records must have

- Fail-over Mode = 1
- Initiator Type = “CLARiiON Open”
- Array CommPath = “Enabled” or 1

HP EVA: for EVA3000/5000 firmware 4.001 and above, and EVA4000/6000/8000 firmware 5.031 and above, set the host type to VMware. Otherwise, set the host mode type to Custom with the following values:
EVA3000/5000 firmware 3.x: 000000002200282E
EVA4000/6000/8000: 000000202200083E

HP XP: for XP128/1024/10000/12000, the host mode should be set to 0C (zero-C), the Windows host mode.

NetApp :- No specific requirements

ESX Server configuration: set the following Advanced Settings on the ESX Server host:

Set Disk.UseLunReset to 1
Set Disk.UseDeviceReset to 0
Set the multipathing policy to Most Recently Used for all LUNs hosting clustered disks on active-passive arrays. For LUNs on active-active arrays, the multipathing policy may be either Most Recently Used or Fixed. All FC HBAs must be of the same model.
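
As a minimal sketch, assuming the classic ESX service console and the esxcfg-advcfg utility (option paths can differ between ESX releases, so verify against your build), the two advanced settings above can be applied and checked from the command line:

#esxcfg-advcfg -s 1 /Disk/UseLunReset (set Disk.UseLunReset to 1)
#esxcfg-advcfg -s 0 /Disk/UseDeviceReset (set Disk.UseDeviceReset to 0)
#esxcfg-advcfg -g /Disk/UseLunReset (verify the current value)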

The following are the major configuration steps for the storage, servers and switches when implementing a CLARiiON (a Navisphere CLI sketch of the storage-side steps follows the list).
  1. Install Fibre Channel HBAs in all systems.
  2. Install the EMC CLARiiON LP8000 port driver (for Emulex) on all systems.
  3. Connect each host to both switches (Brocade/Cisco/McData).
  4. Connect SP1-A and SP2-A to the first switch.
  5. Connect SP1-B and SP2-B to the second switch.
  6. Note: for HA you can cross-connect the SPs, connecting SPA1 and SPB1 to the first switch and SPA2 and SPB2 to the second switch.
  7. Install the operating system on the Windows/Solaris/Linux/VMware hosts.
  8. Connect all hosts to the Ethernet LAN.
  9. Install the EMC CLARiiON Agent Configurator/Navisphere Agent on all hosts.
  10. Install the EMC CLARiiON ATF software on all hosts if you are not using EMC PowerPath failover software; otherwise install a supported version of EMC PowerPath on all hosts.
  11. Install Navisphere Manager on one of the NT hosts.
  12. Configure Storage Groups using Navisphere Manager.
  13. Assign Storage Groups to hosts as dedicated storage, cluster storage or shared storage.
  14. Install the cluster software on the hosts.
  15. Test the cluster for node failover.
  16. Create RAID Groups with the protection the application requires (RAID 5, RAID 1/0, etc.).
  17. Bind LUNs according to the application device layout requirements.
  18. Add the LUNs to a Storage Group.
  19. Zone the SP ports and host HBAs on both switches.
  20. Register the hosts on the CLARiiON using Navisphere Manager.
  21. Add all hosts to the Storage Group.
  22. Scan for the devices on the hosts.
  23. Label and format the devices on the hosts.
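
As an illustration of the storage-side steps (RAID Group creation, LUN binding and Storage Group assignment), here is a hedged Navisphere CLI sketch. The SP address, disk positions (bus_enclosure_disk), LUN numbers, group and host names are placeholders, and exact syntax can vary with the FLARE/Navisphere CLI release:

#naviseccli -h <SP-A-IP> createrg 0 0_0_0 0_0_1 0_0_2 0_0_3 0_0_4 (create RAID Group 0 from five disks)
#naviseccli -h <SP-A-IP> bind r5 10 -rg 0 -cap 100 -sq gb (bind a 100 GB RAID 5 LUN as ALU 10 in RAID Group 0)
#naviseccli -h <SP-A-IP> storagegroup -create -gname SG_HostA (create a Storage Group)
#naviseccli -h <SP-A-IP> storagegroup -addhlu -gname SG_HostA -hlu 0 -alu 10 (present ALU 10 to the group as HLU 0)
#naviseccli -h <SP-A-IP> storagegroup -connecthost -host HostA -gname SG_HostA (connect the registered host to the group)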

Single-initiator zoning rule: each HBA port must be in a separate zone that contains only it and the SP ports with which it communicates. EMC recommends single-initiator zoning as a best practice.
Fibre Channel fan-in rule: a server can be zoned to a maximum of 4 storage systems.
Fibre Channel fan-out rule: the Navisphere software license determines the number of servers you can connect to a CX3-10c, CX3-20c, CX3-20f, CX3-40c, CX3-40f, or CX3-80 storage system. The maximum number of connections between servers and a storage system is defined by the number of initiators supported per storage-system SP. An initiator is an HBA
port in a server that can access a storage system. Note that some HBAs have multiple ports. Each HBA port that is zoned to an SP port is one path to that SP and the storage system containing that SP. Depending on the type of storage system and the connections between its SPs and the switches, an HBA port can be zoned through different switch ports to the same SP port or to different SP ports, resulting in multiple paths between the HBA port and an SP and/or the storage system. Note that the failover software running on the server may limit the number of paths supported from the server to a single storage-system SP and from a server to the storage system.
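
For example, a single-initiator zone on a Brocade switch might be built roughly as follows. This is only a sketch: the alias names and WWPNs are hypothetical placeholders, and the syntax differs on Cisco and McData fabrics:

alicreate "HostA_HBA0", "10:00:00:00:c9:aa:bb:01" (alias for the host HBA port WWPN)
alicreate "CX3_SPA_Port0", "50:06:01:60:41:e0:12:34" (alias for the SP front-end port WWPN)
zonecreate "z_HostA_HBA0_SPA0", "HostA_HBA0; CX3_SPA_Port0" (one initiator and its target ports per zone)
cfgadd "Fabric_A_cfg", "z_HostA_HBA0_SPA0" (or cfgcreate if the configuration does not yet exist)
cfgsave
cfgenable "Fabric_A_cfg"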

Storage systems with Fibre Channel data ports: CX3-10c (SP ports 2 and 3), CX3-20c (SP ports 4 and 5), CX3-20f (all ports), CX3-40c (SP ports 4 and 5), CX3-40f (all ports), CX3-80 (all ports).
Number of servers and storage systems: as many as the available switch ports allow, provided each server follows the fan-in rule above and each storage system follows the fan-out rule above, using WWPN switch zoning.
Fibre connectivity: Fibre Channel Switched Fabric (FC-SW) connection to all server types.
Fibre Channel switch terminology (for the supported Fibre Channel switches):
Fabric - One or more switches connected by E_Ports. E_Ports are switch ports that are used only for connecting switches together.
ISL - (Inter Switch Link). A link that connects two E_Ports on two different switches.
Path - A path is a connection between an initiator (such as an HBA port) and a target (such as an SP port in a storage system). Each HBA port that is zoned to a port on an SP is one path to that SP and the storage system containing that SP. Depending on the type of storage system and the connections between its SPs and the switch fabric, an HBA port can be zoned through different switch ports to the same SP port or to different SP ports, resulting in multiple paths between the HBA port and an SP and/or the storage system. Note that the failover software running on the server may limit the number of paths supported from the server to a single storage-system
SP and from a server to the storage system.


I have been receiving mail asking me to write on basic storage topics rather than only EMC. Here is the first basic thing to know about FC technology.

Fibre Channel is essentially a medium to connect hosts and shared storage. When we talk about SAN, the first thing that comes to mind is Fibre Channel.

Fibre Channel is a serial data transfer interface intended for connecting shared storage to computers where the storage is not physically attached to the host.

Why is FC so important in a SAN? Because FC gives you high speed through the following process:

1) Networking and I/O protocols, such as SCSI commands, are mapped to FC constructs.
2) These are encapsulated and transported within FC frames.
3) With this, high-speed transfer of multiple protocols is possible over the same physical interface.

FC operates over copper wire or optical fibre at rates up to 4 Gb/s, and up to 10 Gb/s when used as an ISL (E_Port) on supported switches.
At the same time, latency is kept very low, minimizing the delay between data requests and deliveries. For example, the latency across a typical FC switch is only a few microseconds. It is this combination of high speed and low latency that makes FC an ideal choice for time-sensitive or transactional processing environments.

These attributes also support high scalability, allowing more storage systems and servers to be interconnected. Fibre Channel also supports a variety of topologies and is able to operate between two devices in a simple point-to-point mode, in an economical arbitrated loop connecting up to 126 devices, or (most commonly) in a powerful switched fabric providing simultaneous full-speed connections for many thousands of devices. Topologies and cable types can easily be mixed in the same SAN.

FC is the most important element in building a SAN; it gives us the flexibility to use protocols like FCP, FICON and IP-based options (iSCSI, FCIP, iFCP), and it uses block-type data transfer.

If we want to define FC: Fibre Channel is a storage area networking technology designed to interconnect hosts and shared storage systems within the enterprise. It is a high-performance, high-cost technology. iSCSI, by contrast, is an IP-based storage networking standard that has been touted for the wide range of choices it offers in both performance and price.

Fibre Channel technology is a block-based networking approach based on ANSI standard X3.230-1994 (ISO 14165-1). It specifies the interconnections and signaling needed to establish a network "fabric" between servers, switches and storage subsystems such as disk arrays or tape libraries. FC can carry virtually any kind of traffic.

However, there are some recognized disadvantages to FC. Fibre Channel has been widely criticized for its expense and complexity. A specialized HBA card is needed for each server. Each HBA must then connect to a corresponding port on a Fibre Channel switch, creating the SAN "fabric." Every combination of HBA and switch port can cost thousands of dollars for the storage organization. This is the primary reason why many organizations connect only large, high-end storage systems to their SAN. Once LUNs are created in storage, they must be zoned and masked to ensure that they are only accessible to the proper servers or applications, which is often an onerous and error-prone procedure. These processes add complexity and costly management overhead to Fibre Channel SANs.

There are different types of SAN, such as IP SAN, NAS over SAN and so on; here we will discuss the Fibre Channel SAN. It gives you more options for managing storage and minimizing downtime, which in turn reduces company cost.

In traditional storage environments, the physical interfaces to storage consisted of parallel SCSI channels supporting a small number of SCSI devices. Fibre Channel provides a means to implement robust storage area networks that may consist of hundreds of devices. Fibre Channel storage area networks support high-bandwidth storage traffic on the order of 100 MB/s, and enhancements to the Fibre Channel standard support even higher bandwidth.

Depending on the implementation, several different components can be used to build a Fibre Channel storage area network. The Fibre Channel SAN consists of components such as storage subsystems, storage devices, and server systems that are attached to a Fibre Channel network using Fibre Channel adapters. Fibre Channel networks in turn may be composed of many different types of interconnect entities. Examples of interconnect entities are switches, hubs, and bridges.

There are various types of SAN implementation, so let's discuss a little about the physical view and the logical view of a SAN.

The physical view allows the physical components of a SAN to be identified and the associated
physical topology between them to be understood. Similarly, the logical view allows the relationships and associations between SAN entities to be identified and understood.

Physical View

From a physical standpoint, a SAN environment typically consists of four major classes of components. These four classes are:
· End-user platforms such as desktops and/or thin clients;
· Server systems;
· Storage devices and storage subsystems;
· Interconnect entities.
Typically, network facilities based on traditional LAN and WAN technology provide connectivity between end-user platforms and server system components. However in some cases, end-user platforms may be attached to the Fibre Channel network and may access storage devices directly. Server system components in a SAN environment can exist independently or as a cluster. As processing requirements continue to increase, computing clusters are becoming more prevalent.

We have introduced a new term here: cluster. This is itself a big topic, but let's get a brief idea of what a cluster is. A cluster is defined as a group of independent computers managed as a single system for higher availability, easier manageability, and greater scalability. Server system components are interconnected using specialized cluster interconnects or open clustering technologies such as the Fibre Channel – Virtual Interface mapping. Storage subsystems are connected to server systems, to end-user platforms, and to each other using the facilities of a Fibre Channel network. The Fibre Channel network is made up of various interconnect entities that may include switches, hubs, and bridges.





Logical View

From a logical perspective, a SAN environment consists of SAN components and resources, as well as their relationships, dependencies and other associations. Relationships, dependencies, and associations between SAN components are not necessarily constrained by physical connectivity. For example, a SAN relationship may be established between a client and a
group of storage devices that are not physically co-located. Logical relationships play a key role in the management of SAN environments. Some key relationships in the SAN environment are identified below:


· Storage subsystems and interconnect entities;
· Between storage subsystems;
· Server systems and storage subsystems (including adapters);
· Server systems and end-user components;
· Storage and end-user components;
· Between server systems.


As a specific example, one type of relationship is the concept of a logical entity group. In this case, server system components and storage components are logically classified as connected components because they are both attached to the Fibre Channel network. A logical entity group forms a private virtual network or zone within the SAN environment with a specific set of
connected entities as members. Communication within each zone is restricted to its members.
In another example, where a Fibre Channel network is implemented using a switched fabric, the Fibre Channel network may further still be broken down into logically independent sections called sub-fabrics for each possible combination of data rate and class of service. Sub-fabrics are again divided into regions and extended-regions based on compatible service parameters.
Regions and extended regions can also be divided into partitions called zones for administrative purposes.

SUN/SOLARIS
___________
To display which HBAs are installed:

#prtdiag -v
#dmesg
#cat /var/adm/messages | grep -i wwn | more


To set the configuration you must carry out the following:

-changes to the /etc/system file
-HBA driver modifications
-Persistent binding (HBA and SD driver config file)
-EMC recommended changes
-Install the Sun StorEdge SAN Foundation package


Changes to /etc/system (the text in parentheses is only a description; add just the "set" lines to the file):

set sd:sd_max_throttle=20 (SCSI throttle)
set scsi_options=0x7F8 (enable wide SCSI)
set sd:sd_io_time=0x3c (SCSI I/O timeout value with PowerPath)
set sd:sd_io_time=0x78 (SCSI I/O timeout value without PowerPath)
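
On Solaris 8 and later you can verify that the tunables took effect after the reboot, for example with mdb (a hedged sketch; older releases used adb instead):

#echo "sd_max_throttle/D" | mdb -k (print the running value of sd_max_throttle in decimal)
#echo "sd_io_time/D" | mdb -k (print the running SCSI I/O timeout value)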

Changes to HBA driver (/kernel/drv/lpfc.conf):

fcp-bind-WWNN=16
automap=2
fcp-on=1
lun-queue-depth=20
tgt-queue-depth=512
no-device-delay=1 (without PP/DMP) 0 (with PP/DMP)
xmt-que-size=256
scan-down=0
linkdown-tmo=0 (without PP/DMP) 60 (with PP/DMP)

Persistent Binding
Both the lpfc.conf and sd.conf files need to be updated. General format is
name="sd" parent="lpfc" target="X" lun="Y" hba="lpfcZ"
where X is the target number that corresponds to the fcp-bind WWN entry lpfcZtX, Y is the LUN number that corresponds to the Symmetrix volume mapping on the Symmetrix port WWN (or to the HLU on the CLARiiON), and Z is the lpfc driver instance number from the same fcp-bind entry.
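
As a hedged illustration (the WWPN, instance, target and LUN numbers below are hypothetical placeholders; check the EMC host connectivity guide for your exact driver release), the matching entries might look like this:

In /kernel/drv/lpfc.conf:
fcp-bind-WWPN="50060160aabbccdd:lpfc0t1"; (bind the SP port WWPN to lpfc instance 0, target 1)

In /kernel/drv/sd.conf:
name="sd" parent="lpfc" target="1" lun="0" hba="lpfc0"; (LUN 0 on that target)
name="sd" parent="lpfc" target="1" lun="1" hba="lpfc0"; (LUN 1 on that target)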

To discover the SAN devices
#drvconfig; disks; devlinks (Solaris 2.6)
#devfsadm (Solaris 8)
#/usr/sbin/update_drv -f sd (Solaris 9 and later)

Windows
To display which HBAs are installed, use the Device Manager administrative tool.
To set the configuration you must carry out the following:
#Registry edits
#EMC recommended changes
#Install the Emulex elxcfg utility

Arbitrated loop without powerpath/ATF:-

InitLinkFlags=0x00000000 (arbitrated loop, auto-link speed)
WaitReady=45
LinkDown=45
TranslateQueueFull=1

Arbitrated loop with powerpath/ATF:

InitLinkFlags=0x00000000 (arbitrated loop, auto-link speed)
WaitReady=10
LinkDown=10

Fabric without powerpath/ATF:

InitLinkFlags=0x00000002 (fabric, auto-link speed)
WaitReady=45
LinkDown=45
TranslateQueueFull=1

Fabric with powerpath/ATF:

InitLinkFlags=0x00000002 (fabric, auto-link speed)
WaitReady=10
LinkDown=10

Modifying the EMC environment:
In the shortcut for elxcfg, add the "--emc" option to the shortcut's Target line.

To discover the SAN devices:
Control Panel -> Administrative Tools -> Computer Management -> select Disk Management -> Action (top menu) -> Rescan Disks

HP

To display which HBAs are installed:

#/opt/fcms/bin/fcmsutil /dev/td# (A5158A HBA)
#/opt/fc/bin/fcutil /dev/fcs# (A6685A HBA)

On an HP system there is no additional software to install. The Volume Set Addressing setting must be enabled on the SAN; you can check this with the following command.

#symcfg list -sid <SymmID> -FA all (confirm that Volume Set Addressing is set to yes)

To discover the SAN devices:
#ioscan -fnC disk (scans hardware busses for devices according to class)
#insf -e (install special device files)

AIX

To display which HBAs are installed:
#lscfg
#lscfg -v -l fcs*


To set the configuration you must carry out the following:
#List HBA WWN and entry on system
#Determine code level of OS and HBA
#Download and install EMC ODM support fileset

#/usr/lpp/EMC/Symmetrix/bin/emc_cfgmgr (Symmetrix)
#/usr/lpp/EMC/CLARiiON/bin/emc_cfgmgr (CLARiiON)

To discover the SAN devices
#/usr/lpp/EMC/Symmetrix/bin/emc_cfgmgr -v
If the above does not work, reboot the server.
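
Once discovery completes, a quick hedged check (standard AIX commands; the hdisk number is a hypothetical placeholder, and the device descriptions depend on the EMC ODM fileset installed):

#lsdev -Cc disk (the new EMC Symmetrix/CLARiiON disks should appear in the disk list)
#lsattr -El hdisk2 (review the attributes of one of the newly discovered hdisks)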

I hope this serves as useful reference documentation for novice users.

LUN Management


LUN Basics

Simply stated, a LUN is a logical entity that converts raw physical disk space into logical storage space that a host server's operating system can access and use. Any computer user recognizes the logical drive letter that has been carved out of their disk drive. For example, a computer may boot from the C: drive and access file data from a different D: drive. LUNs do the same basic job: they differentiate between different chunks of disk space. A LUN is part of the address of the storage that you are presenting to a host server.

LUNs are created as a fundamental part of the storage provisioning process using software tools that typically accompany the particular storage platform. However, there is not a 1-to-1 ratio between drives and LUNs. Numerous LUNs can easily be carved out of a single disk drive. For example, a 500 GB drive can be partitioned into one 200 GB LUN and one 300 GB LUN, which would appear as two unique drives to the host server. Conversely, storage administrators can employ Logical Volume Manager software to combine multiple LUNs into a larger volume. Veritas Volume Manager from Symantec Corp. is just one example of this software. In actual practice, disks are first gathered into a RAID group for larger capacity and redundancy (e.g., RAID-50), and then LUNs are carved from that RAID group.

LUNs are often referred to as logical "volumes," reflecting the traditional use of "drive volume letters," such as volume C: or volume F: on your computer. But some experts warn against mixing the two terms, noting that the term "volume" is often used to denote the large volume created when multiple LUNs are combined with volume manager software. In this context, a volume may actually involve numerous LUNs and can potentially confuse storage allocation: the "volume" is a piece of a volume group, and the volume group is composed of multiple LUNs.
Once created, LUNs can also be shared between multiple servers. For example, a LUN might be shared between an active and standby server. If the active server fails, the standby server can immediately take over. However, it can be catastrophic for multiple servers to access the same LUN simultaneously without a means of coordinating changed blocks to ensure data integrity. Clustering software, such as a clustered volume manager, a clustered file system, a clustered application or a network file system using NFS or CIFS, is needed to coordinate data changes.

SAN zoning and masking

LUNs are the basic vehicle for delivering storage, but provisioning SAN storage isn't just a matter of creating LUNs or volumes; the SAN fabric itself must be configured so that disks and their LUNs are matched to the appropriate servers. Proper configuration helps to manage storage traffic and maintain SAN security by preventing any server from accessing any LUN.
Zoning makes it possible for devices within a Fibre Channel network to see each other. By limiting the visibility of end devices, servers (hosts) can only see and access storage devices that are placed into the same zone. In more practical terms, zoning allows certain servers to see one or more ports on a disk array. Bandwidth, and thus minimum service levels, can be reserved by dedicating certain ports to a zone or by isolating incompatible ports from one another.
Consequently, zoning is an important element of SAN security and high-availability SAN design. Zoning can typically be broken down into hard and soft zoning. With hard zoning, each device is assigned to a zone, and that assignment can never change. In soft zoning, the device assignments can be changed by the network administrator.
LUN masking adds granularity to this concept. Just because you zone a server and disk together doesn't mean that the server should be able to see all of the LUNs on that disk. Once the SAN is zoned, LUNs are masked so that each host server can only see specific LUNs. For example, suppose that a disk has two LUNs, LUN_A and LUN_B. If we zoned two servers to that disk, both servers would see both LUNs. However, we can use LUN masking to allow one server to see only LUN_A and mask the other server to see only LUN_B. Port-based LUN masking is granular to the storage array port, so any disks on a given port will be accessible to any servers on that port. Server-based LUN masking is a bit more granular where a server will see only the LUNs assigned to it, regardless of the other disks or servers connected.
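
On a CLARiiON, for instance, this kind of masking is expressed through Storage Groups. A hedged Navisphere CLI sketch of the LUN_A/LUN_B scenario above (the SP address, group names, host names and LUN numbers are placeholders):

#naviseccli -h <SP-A-IP> storagegroup -create -gname SG_Server1
#naviseccli -h <SP-A-IP> storagegroup -addhlu -gname SG_Server1 -hlu 0 -alu 20 (LUN_A, ALU 20, visible to Server1 only)
#naviseccli -h <SP-A-IP> storagegroup -connecthost -host Server1 -gname SG_Server1
#naviseccli -h <SP-A-IP> storagegroup -create -gname SG_Server2
#naviseccli -h <SP-A-IP> storagegroup -addhlu -gname SG_Server2 -hlu 0 -alu 21 (LUN_B, ALU 21, visible to Server2 only)
#naviseccli -h <SP-A-IP> storagegroup -connecthost -host Server2 -gname SG_Server2

Both servers are zoned to the same array, but each sees only the LUN in its own Storage Group.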

LUN scaling and performance
LUNs are based on disks, so LUN performance and reliability will vary for the same reasons. For example, a LUN carved from a Fibre Channel 15K rpm disk will perform far better than a LUN of the same size taken from a 7,200 rpm SATA disk. This is also true of LUNs based on RAID arrays, where the mirroring of a RAID-1/0 group may offer significantly different performance than the parity protection of a RAID-5 or RAID-6/dual parity (DP) group. Proper RAID group configuration will have a profound impact on LUN performance.
An organization may utilize hundreds or even thousands of LUNs, so the choice of storage resources has important implications for the storage administrator. Not only is it necessary to supply an application with adequate capacity (in gigabytes), but the LUN must also be drawn from disk storage with suitable characteristics. "We go through a qualification process to understand the requirements of the application that will be using the LUNs for performance, availability and cost." For example, a LUN for a mission-critical database application might be taken from a RAID-1/0 group using Tier-1 storage, while a LUN slated for a virtual tape library (VTL) or archive application would probably work with a RAID-6 group using Tier-2 or Tier-3 storage.

LUN management tools
A large enterprise array may host more than 10,000 LUNs, so software tools are absolutely vital for efficient LUN creation, manipulation and reporting. Fortunately, management tools are readily available, and almost every storage vendor provides some type of management software to accompany products ranging from direct-attached storage (DAS) devices to large enterprise arrays.
Administrators can typically opt for vendor-specific or heterogeneous tools. A data center with one storage array or a single-vendor shop would probably do well with the indigenous LUN management tool that accompanied their storage system. Multivendor shops should at least consider heterogeneous tools that allow LUN management across all of the storage platforms. Mack uses EMC ControlCenter for LUN masking and mapping, which is just one of several different heterogeneous tools available in the marketplace. While good heterogeneous tools are available, he advises caution when selecting a multiplatform tool. "Sometimes, if the tool is written by a particular vendor, it will manage 'their' LUNs the best," he says. "LUNs from the other vendors can take the back seat -- the management may not be as well integrated."
In addition to vendor support, a LUN management tool should support the entire storage provisioning process. Features should include mapping to specific array ports and masking specific host bus adapters (HBA), along with comprehensive reporting. The LUN management tool should also be able to reclaim storage that is no longer needed. Although a few LUN management products support autonomous provisioning, experts see some reluctance toward automation. "It's hard to do capacity planning when you don't have any checks and balances over provisioning," Mack says, also noting that automation can circumvent strict change control processes in an IT organization.

LUNs at work

Significant storage growth means more LUNs, which must be created and managed efficiently while minimizing errors, reining in costs and maintaining security. For Thomas Weisel Partners LLC, an investment firm based in San Francisco, storage demands have simply exploded to 80 terabytes (TB) today -- up from about 8 TB just two years ago. Storage continues to flood the organization's data center at about 2 TB to 3 TB each month depending on projects and priorities.
This aggressive growth pushed the company out of a Hitachi Data Systems (HDS) storage array and into a 3PARdata Inc. S400 system. LUN deployment starts by analyzing realistic space and performance requirements for an application. "Is it something that needs a lot of fast access, like a database or something that just needs a file share?" asks Kevin Fiore, director of engineering services at Thomas Weisel. Once requirements are evaluated, a change ticket is generated and a storage administrator provisions the resources from a RAID-5 or RAID-1 group depending on the application. Fiore emphasizes the importance of provisioning efficiency, noting that the S400's internal management tools can provision storage in just a few clicks.
Fiore also notes the importance of versatility in LUN management tools and the ability to move data. "Dynamic optimization allows me to move LUNs between disk sets," he says. Virtualization has also played an important role in LUN management. VMware has allowed Fiore to consolidate about 50 servers enterprise-wide, along with the corresponding reduction in space, power and cooling. This lets the organization manage more storage with less hardware.
LUNs getting large
As organizations deal with spiraling storage volumes, experts suggest that efficiency-enhancing features, such as automation, will become more important in future LUN management. Experts also note that virtualization and virtual environments will play a greater role in tomorrow's LUN management. For example, it's becoming more common to provision very large chunks of storage (500 GB to 1 TB or more) to virtual machines. "You might provision a few terabytes to a cluster of VMware servers, and then that storage will be provisioned out over time."

EMC recommends no more than four Connectrix switches per fabric, based on the following calculations:

One switch

32 total ports
- 4 ports reserved for card failure
= 28 usable ports
- 5 FA ports (int(28/5); no more than a 4:1 host:FA ratio)
= 23 possible host connections
/ 2 to support multipathing
= 11 host connections


Two switches

64 total ports
- 4 ports reserved for card failure
- 4 ports reserved for E_Ports
= 56 usable ports
- 11 FA ports (int(56/5); no more than a 4:1 host:FA ratio)
= 45 possible host connections
/ 2 to support multipathing
= 22 host connections (gain of 11)

Three switches

96 total ports
- 4 ports reserved for card failure
- 12 ports reserved for E_Ports
= 80 usable ports
- 16 FA ports (int(80/5); no more than a 4:1 host:FA ratio)
= 64 possible host connections
/ 2 to support multipathing
= 32 host connections (gain of 10)

Four switches

128 total ports
- 4 ports reserved for card failure
- 24 ports reserved for E_Ports
= 100 usable ports
- 20 FA ports (int(100/5); no more than a 4:1 host:FA ratio)
= 80 possible host connections
/ 2 to support multipathing
= 40 host connections (gain of 8)


Putting in that fourth Connectrix means you gain only 8 additional host connections from a 32-port Connectrix switch.
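
The pattern behind these numbers can be captured in a small shell sketch. It assumes 32-port switches, 4 ports reserved in total for card failure, and 2*N*(N-1) ports consumed by E_Ports for a fully meshed fabric, which matches the reservations above; running it reproduces the 11/22/32/40 figures:

for N in 1 2 3 4
do
    USABLE=$(( 32*N - 4 - 2*N*(N-1) ))   # total ports minus card-failure spares and E_Ports
    FA=$(( USABLE / 5 ))                 # 4:1 host:FA ratio, so one usable port in five is an FA
    HOSTS=$(( (USABLE - FA) / 2 ))       # halve for multipathing (two HBA ports per host)
    echo "switches=$N usable=$USABLE fa_ports=$FA host_connections=$HOSTS"
done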

Kashya (acquired by EMC last year) develops unique algorithmic technologies to enable an order-of-magnitude improvement in the reliability, cost, and performance of an enterprise's data protection capabilities. Based on the Kashya Data Protection Appliance platform, Kashya's powerful solutions deliver superior data protection at a fraction of the cost of existing solutions. Kashya's Data Protection Appliance connects to the SAN and IP infrastructure and provides bi-directional replication across any distance for heterogeneous storage, SAN, and server environments.

The recent storage industry challenge is minimizing downtime and keeping the business running 24x7x365. The data that drives today's globally oriented businesses is stored on large networks of interconnected computers and data storage devices. This data must be 100% available, always accessible and up-to-date, even in the face of local or regional disasters. Moreover, these conditions must be met at a cost that is affordable, and without in any way hampering normal company operations.

To reduce the business risk of an unplanned event of this type, an enterprise must ensure that a copy of its business-critical data is stored at a secondary location. Synchronous replication, used so effectively to create perfect copies in local networks, performs poorly over longer distances.

Replication Method:

1) Synchronous – Every write transaction committed must be acknowledged from the
secondary site. This method enables efficient replication of data within the local
SAN environment.

2) Asynchronous – Every write transaction is acknowledged locally and then added to a
queue of writes waiting to be sent to the secondary site. With this method, some
data will normally be lost in the event of a disaster. This requires the same
bandwidth as a synchronous solution.

3) Snapshot –A consistent image of the storage subsystem is periodically transferred to the secondary site. Only the changes made since the previous snapshot must be transferred, resulting in significant savings in bandwidth. By definition, this solution produces a copy that is not up-to-date; however, increasing the frequency of the snapshots can reduce the extent of this lag.

4) Small-Aperture Snapshot – Kashya’s system offers the unique ability to take frequent snapshots, just seconds apart. This innovative feature is utilized to minimize the risk of data loss due to data corruption that typically follows rolling disasters.


Kashya's advanced architecture can be summarized as follows:
· Positioning at the junction between the SAN and the IP infrastructure enables Kashya solutions to:
  - Deploy enterprise-class data protection non-disruptively and non-invasively
  - Support heterogeneous server, SAN, and storage platforms
  - Monitor SAN and WAN behavior on an ongoing basis to maximize the data protection process
· Advanced algorithms that automatically manage the replication process, with strict adherence to user-defined policies tied to user-specified business objectives
