
CLARiiON FLARE release 29 (04.29.000.5.001) introduces support for several new features, as follows:

1) Virtualization-aware Navisphere Manager - Discovery of VMware clients was always difficult in earlier releases, but FLARE 29 enables CLARiiON CX4 users and VMware administrators to reduce infrastructure reporting time from hours to minutes. Earlier releases allowed only a single IP address to be assigned to each iSCSI physical port. With FLARE 29, multiple virtual iSCSI ports can be defined on each physical port, and each virtual port can be tagged with a unique VLAN tag. VLAN tagging has also been added to the single Management Port interface. Note that IP address and VLAN tag assignments should be coordinated carefully with the team supporting the network infrastructure where the storage system will operate (a hedged configuration sketch follows this feature list).

2) Built-in policy-based spin down of idle SATA II drives for CLARiiON CX4 - Lowers power requirements in environments such as test and development, whether physical or virtual. Features include simple management via a “set it and forget it” policy, complete spin down of inactive drives during periods of zero I/O activity, and automatic spin up after a "first I/O" request is received.

3) Virtual Provisioning Phase 2 - Support for MirrorView and SAN Copy replication on thin LUNs has been added.

4) Search feature – Provides users with the ability to search for a wide variety of objects across their storage systems. Objects can be either logical (e.g., LUN) or physical (e.g., disks).

5) Replication roles - Three additional roles have been added to Navisphere: “Local Replication Only”, “Replication”, and “Replication/Recovery”.

6) Dedicated VMware software files - VMware software files (i.e. NaviSecCLI, Navisphere Initialization Wizard) are now separate from those of the Linux Operating System.

7) Software filename standardization - All CLARiiON software filenames are standardized beginning with FLARE Release 29.

8) Changing SP IP addresses - SP IP addresses can now be changed without rebooting the SP. Only the Management Server needs to be restarted from the Setup page, which results in no storage system downtime.

9) Linux 64-bit server software – Native 64-bit Linux server software files simplify installation by eliminating the need to gather and load 32-bit DLLs.

10) Solaris x64 Navisphere Host Agent – Release 29 marks the introduction of Solaris 64-bit Navisphere Host Agent software. This Host Agent is backward compatible with older FLARE releases.
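As an illustration of feature 1, a VLAN-tagged virtual iSCSI port is typically defined through Navisphere Secure CLI. The following is only a minimal sketch wrapped in Python; the connection -setport verb, the -vportid and -vlanid flags, and all addresses and credentials are assumptions to verify against the release 29 CLI reference for your array.

import subprocess

# Placeholder SP address and credentials.
SP = "10.0.0.1"
AUTH = ["-user", "admin", "-password", "password", "-scope", "0"]

# Assumed syntax: define virtual port 1 on physical iSCSI port 0 of SP A
# and tag it with VLAN 100 (verify flags against your naviseccli version).
cmd = ["naviseccli", "-h", SP, *AUTH,
       "connection", "-setport", "-sp", "a", "-portid", "0", "-vportid", "1",
       "-address", "192.168.10.50", "-subnetmask", "255.255.255.0",
       "-gateway", "192.168.10.1", "-vlanid", "100"]
subprocess.run(cmd, check=True)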

To set up Replication Manager, you must perform the following tasks:

1) Verify that your environment has the minimum required storage hardware and that the hardware has a standard CLARiiON configuration.
2) Confirm that your Replication Manager hosts (server and clients) are connected to the CLARiiON environment through a LAN connection.
3) Zone the fibre switch appropriately (if applicable). The clients must be able to access all storage arrays they are using and the mount hosts must be able to access all storage in the EMC Replication Storage group.
4) Install all necessary software on each Replication Manager client, server, and mount host. Also install the appropriate firmware and software on the CLARiiON array itself.
5) Modify the clarcnfg file to represent all CLARiiON Arrays.
6) On Solaris hosts, verify that there are enough entries in the sd.conf file to support all dynamic mounts of replica LUNs.
7) Install Replication Manager Client software on each client that has an application with data from which you plan to create replicas.
8) Create a new user account on the CLARiiON and give this new account privileges as an administrator. Replication Manager can use this account to access and manipulate the CLARiiON as necessary.
9) Grant storage processor privileges through the Agent tab of the storage processor properties to allow aviCLI.jar commands from the Replication Manager Client Control Daemon (irccd) process to reach the CLARiiON storage array.
10) Update the agent.config file on each client where Replication Manager is installed to include an entry of the form user system@<SP IP address>, where <SP IP address> is the IP address of a storage processor. Add an entry for both storage processors in each CLARiiON array that you are using.
11) Verify that you have Clone Private LUNs set up on your CLARiiON storage array. Create a mount storage group for each mount host and make sure that the storage group contains at least one LUN and that the LUN is visible to the mount host. This LUN does not have to be dedicated or remain empty; you can use it for any purpose. However, if no LUNs are visible to the Replication Manager mount host, Replication Manager will not operate.
12) Create a storage group named EMC Replication Storage and populate it with free LUNs that you created in advance for Replication Manager to use for storing replicas (a scripted sketch follows this list).
13) Start the Replication Manager Console and connect to your Replication Manager server. Then perform the following steps:
a) Register all Replication Manager clients
b) Run Discover Arrays
c) Run Configure Array for each array discovered
d) Run Discover Storage for each array discovered
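Step 12 can also be scripted with Navisphere Secure CLI from a management host. This is a minimal sketch, assuming naviseccli is installed and can reach the array; the SP address, credentials, and LUN numbers are placeholders, and the storagegroup flags should be verified against your CLI version.

import subprocess

SP = "10.0.0.1"                                   # placeholder SP address
AUTH = ["-user", "admin", "-password", "password", "-scope", "0"]

def navi(*args):
    # Run one naviseccli command against the SP and fail loudly on errors.
    subprocess.run(["naviseccli", "-h", SP, *AUTH, *args], check=True)

# Create the storage group Replication Manager expects (step 12).
navi("storagegroup", "-create", "-gname", "EMC Replication Storage")

# Add pre-created free LUNs to the group; ALUs 20-23 are placeholders.
for hlu, alu in enumerate([20, 21, 22, 23]):
    navi("storagegroup", "-addhlu",
         "-gname", "EMC Replication Storage",
         "-hlu", str(hlu), "-alu", str(alu))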

The global reserved LUN pool works with replication software, such as SnapView, SAN Copy, and MirrorView/A, to store the data or information required to complete a replication task. The reserved LUN pool consists of one or more private LUNs. A LUN becomes private when you add it to the reserved LUN pool. Because the LUNs in the reserved LUN pool are private, they cannot belong to storage groups and a server cannot perform I/O to them.

Before starting a replication task, the reserved LUN pool must contain at least one LUN for each source LUN that will participate in the task. You can add any available LUNs to the reserved LUN pool. Each storage system manages its own LUN pool space and assigns a separate reserved LUN (or multiple LUNs) to each source LUN.
All replication software that uses the reserved LUN pool shares the resources of the reserved LUN pool. For example, if you are running an incremental SAN Copy session on a LUN and a snapshot session on another LUN, the reserved LUN pool must contain at least two LUNs - one for each source LUN. If both sessions are running on the same source LUN, the sessions will share a reserved LUN.
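As a rough illustration of this counting rule, the sketch below tallies how many reserved LUNs a set of sessions needs, assuming one reserved LUN per distinct source LUN and sharing between sessions on the same source; the session names and LUN numbers are hypothetical.

# Hypothetical (session name, source LUN) pairs.
sessions = [
    ("incremental_san_copy", 10),   # source LUN 10
    ("snapview_session",     11),   # source LUN 11
    ("snapview_session_2",   10),   # shares source LUN 10, so shares its reserved LUN
]

# At least one reserved LUN is required per distinct source LUN that participates.
distinct_sources = {source for _, source in sessions}
print(f"Minimum reserved LUNs required: {len(distinct_sources)}")   # -> 2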

Estimating a suitable reserved LUN pool size

Each reserved LUN can vary in size. However, using the same size for each LUN in the pool is easier to manage, because the LUNs are assigned without regard to size; that is, the first available free LUN in the global reserved LUN pool is assigned. Since you cannot control which reserved LUNs are used for a particular replication session, EMC recommends that you use a standard size for all reserved LUNs. The size of these LUNs depends on your goal. If you want to optimize space utilization, the recommendation is to create many small reserved LUNs, which allows sessions requiring minimal reserved LUN space to use one or a few reserved LUNs, and sessions requiring more reserved LUN space to use multiple reserved LUNs. On the other hand, if you want to optimize the total number of source LUNs, the recommendation is to create many large reserved LUNs, so that even sessions that require more reserved LUN space consume only a single reserved LUN.

The following considerations should assist in estimating a suitable reserved LUN pool size for the storage system.
If you wish to optimize space utilization, use the size of the smallest source LUN as the basis of your calculations. If you wish to optimize the total number of source LUNs, use the size of the largest source LUN as the basis of your calculations. If you have a standard online transaction processing (OLTP) configuration, use reserved LUNs sized at 10-20% of the source LUN size; this tends to be an appropriate size to accommodate copy-on-first-write activity.
If you plan on creating multiple sessions per source LUN, anticipate a large number of writes to the source LUN, or anticipate a long duration for the session, you may also need to allocate additional reserved LUNs. In any of these cases, increase the calculation accordingly. For instance, if you plan to have 4 concurrent sessions running for a given source LUN, you might want to increase the estimated size by a factor of 4, raising the typical size to 40-80%.
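A minimal sketch of this sizing arithmetic, using hypothetical source LUN sizes, the 20% upper end of the OLTP guideline, and the four-session multiplier mentioned above:

# Hypothetical source LUN sizes in GB.
source_luns_gb = [100, 250, 500]

COFW_FRACTION = 0.20        # upper end of the 10-20% OLTP guideline
SESSIONS_PER_SOURCE = 4     # planned concurrent sessions per source LUN

# Space-optimized: size reserved LUNs from the smallest source LUN.
space_optimized_gb = min(source_luns_gb) * COFW_FRACTION

# Source-count-optimized: size from the largest source LUN and scale by the
# number of concurrent sessions (20% x 4 = 80% of the source LUN size).
source_optimized_gb = max(source_luns_gb) * COFW_FRACTION * SESSIONS_PER_SOURCE

print(f"Space-optimized reserved LUN size:        {space_optimized_gb:.0f} GB")
print(f"Source-count-optimized reserved LUN size: {source_optimized_gb:.0f} GB")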

Kashya (acquired by EMC last year) develops unique algorithmic technologies that enable an order-of-magnitude improvement in the reliability, cost, and performance of an enterprise’s data protection capabilities. Based on the Kashya Data Protection Appliance platform, Kashya’s solutions deliver superior data protection at a fraction of the cost of existing solutions. The Kashya Data Protection Appliance connects to the SAN and IP infrastructure and provides bi-directional replication across any distance for heterogeneous storage, SAN, and server environments.

The recent storage industry challenge is to minimize downtime and keep the business running 24 x 7 x 365. The data that drives today’s globally oriented businesses is stored on large networks of interconnected computers and data storage devices. This data must be 100% available and always accessible and up-to-date, even in the face of local or regional disasters. Moreover, these conditions must be met at an affordable cost, and without in any way hampering normal company operations.

To reduce the business risk of an unplanned event of this type, an enterprise must ensure that a copy of its business-critical data is stored at a secondary location. Synchronous replication, used so effectively to create perfect copies in local networks, performs poorly over longer distances.

Replication Methods:

1) Synchronous – Every write transaction committed must be acknowledged from the
secondary site. This method enables efficient replication of data within the local
SAN environment.

2) Asynchronous – Every write transaction is acknowledged locally and then added to a
queue of writes waiting to be sent to the secondary site. With this method, some
data will normally be lost in the event of a disaster. This requires the same
bandwidth as a synchronous solution.

3) Snapshot –A consistent image of the storage subsystem is periodically transferred to the secondary site. Only the changes made since the previous snapshot must be transferred, resulting in significant savings in bandwidth. By definition, this solution produces a copy that is not up-to-date; however, increasing the frequency of the snapshots can reduce the extent of this lag.

4) Small-Aperture Snapshot – Kashya’s system offers the unique ability to take frequent snapshots, just seconds apart. This innovative feature is utilized to minimize the risk of data loss due to data corruption that typically follows rolling disasters.
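To make the acknowledgment difference between the first two methods concrete, here is a toy sketch (not any vendor's implementation) contrasting synchronous and asynchronous handling of a write:

from collections import deque

class Remote:
    # Stand-in for the secondary site.
    def __init__(self):
        self.copy = []
    def apply(self, data):
        self.copy.append(data)

class SyncReplicator:
    # Synchronous: the write is acknowledged only after the remote copy confirms.
    def __init__(self, remote):
        self.remote = remote
    def write(self, data):
        self.remote.apply(data)    # blocks until the secondary site has the data
        return "ack"

class AsyncReplicator:
    # Asynchronous: acknowledge locally, queue the write for later transfer.
    def __init__(self, remote):
        self.remote = remote
        self.pending = deque()     # anything still queued is lost in a disaster
    def write(self, data):
        self.pending.append(data)
        return "ack"               # acknowledgment is local only
    def drain(self):
        while self.pending:
            self.remote.apply(self.pending.popleft())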


Kashya’s advanced architecture can be summarized as follows:

1) Positioning at the junction between the SAN and the IP infrastructure enables Kashya solutions to:
- Deploy enterprise-class data protection non-disruptively and non-invasively
- Support heterogeneous server, SAN, and storage platforms
- Monitor SAN and WAN behavior on an ongoing basis, to maximize the data protection process

2) Advanced algorithms that automatically manage the replication process, with strict adherence to user-defined policies tied to user-specified business objectives
