TimeFinder/Clone creates full-volume copies of production data, allowing you to run multiple tasks in parallel on Symmetrix systems. In addition to real-time, nondisruptive backup and restore, TimeFinder/Clone is used to compress the cycle time for processes such as application testing, software development, and loading or updating a data warehouse. This significantly increases efficiency and productivity while maintaining continuous support for the production needs of the enterprise. Key capabilities include:
- Ability to protect clone BCVs with RAID 5
- Create instant mainframe SNAPs of datasets or logical volumes for OS/390 data, compatible with STK SNAPSHOT for RVA
- Facilitate more rapid testing of new versions of operating systems, database managers, file systems, and so on, as well as new applications
- Load or update data warehouses as needed
- Allow proactive database validation, thus minimizing exposure to faulty applications
- Allow multiple copies to be retained at different checkpoints for lower RPO and RTO, thus improving service levels
- Can be applied to data volumes across multiple Symmetrix devices using EMC’s unique consistency technology (TimeFinder/Consistency Group option required)
What is “Tier 0” in Storage Environments?
Tier 0 is not new to the storage market, but it has been difficult to accommodate in practice because it demands the best performance and the lowest latency. Enterprise Flash drives (solid state disks) are capable of meeting this requirement, making it possible to get more performance for a company's most critical applications. That performance is gained by using the Flash drives supported in VMAX and DMX-4 systems.
AX150
Dual storage processor enclosure with Fibre Channel interface to host and SATA-2 disks.
AX150i
Dual storage processor enclosure with iSCSI interface to host and SATA-2 disks.
AX100
Dual storage processor enclosure with Fibre Channel interface to host and SATA-1 disks.
AX100SC
Single storage processor enclosure with Fibre-Channel interface to host and SATA-1 disks.
AX100i
Dual storage processor enclosure with iSCSI interface to host and SATA-1 disks.
AX100SCi
Single storage processor enclosure with iSCSI interface to host and SATA-1 disks.
CX3-80
SPE2 - Dual storage processor (SP) enclosure with four Fibre Channel front-end ports and four back-end ports per SP.
CX3-40
SPE3 - Dual storage processor (SP) enclosure with two Fibre Channel front-end ports and two back-end ports per SP.
CX3-40f
SPE3 - Dual storage processor (SP) enclosure with four Fibre Channel front-end ports and four back-end ports per SP.
CX3-40c
SPE3 - Dual storage processor (SP) enclosure with four iSCSI front-end ports, two Fibre Channel front-end ports, and two back-end ports per SP.
CX3-20
SPE3 - Dual storage processor (SP) enclosure with two Fibre Channel front-end ports and a single back-end port per SP.
CX3-20f
SPE3 - Dual storage processor (SP) enclosure with six Fibre Channel front-end ports and a single back-end port per SP.
CX3-20c
SPE3 - Dual storage processor (SP) enclosure with four iSCSI front-end ports, two Fibre Channel front-end ports, and a single back-end port per SP.
CX600, CX700
SPE-based storage system with model CX600/CX700 SP, Fibre Channel interface to host, and Fibre Channel disks.
CX500, CX400, CX300, CX200
DPE2-based storage system with model CX500/CX400/CX300/CX200 SP, Fibre Channel interface to host, and Fibre Channel disks.
CX2000LC
DPE2-based storage system with one model CX200 SP, one power supply (no SPS), Fibre Channel interface to host, and Fibre Channel disks.
C1000 Series
10-slot storage system with SCSI interface to host and SCSI disks
C1900 Series
Rugged 10-slot storage system with SCSI interface to host and SCSI disks
C2x00 Series
20-slot storage system with SCSI interface to host and SCSI disks
C3x00 Series
30-slot storage system with SCSI or Fibre Channel interface to host and SCSI disks
FC50xx Series
DAE with Fibre Channel interface to host and Fibre Channel disks
FC5000 Series
JBOD with Fibre Channel interface to host and Fibre Channel disks
FC5200/5300 Series
iDAE-based storage system with model 5200 SP, Fibre Channel interface to host, and Fibre Channel disks.
FC5400/5500 Series
DPE-based storage system with model 5400 SP, Fibre Channel interface to host, and Fibre Channel disks.
FC5600/5700 Series
DPE-based storage system with model 5600 SP, Fibre Channel interface to host, and Fibre Channel disks.
FC4300/4500 Series
DPE-based storage system with either model 4300 SP or model 4500 SP, Fibre Channel interface to host, and Fibre Channel disks.
FC4700 Series
DPE-based storage system with model 4700 SP, Fibre Channel interface to host, and Fibre Channel disks.
IP4700 Series
Rackmount Network-Attached storage system with 4 Fibre Channel host ports and Fibre Channel disks.
RAID 6 Protection
RAID 6 provides an extra level of protection, tolerating two simultaneous drive failures, while keeping the same dollar cost per megabyte of usable storage as RAID 5 configurations. Although two parity drives are required for RAID 6, the ratio of data drives to parity drives stays the same. For example, a RAID 6 6+2 configuration consists of six data segments and two parity segments. This is equivalent to two sets of a RAID 5 3+1 configuration (three data segments and one parity segment each), so 6+2 = 2 × (3+1).
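To make this arithmetic concrete, here is a small, purely illustrative Python snippet (not tied to any EMC tool) that computes the usable-capacity fraction of each layout and confirms that a RAID 6 6+2 group has the same data-to-parity ratio as two RAID 5 3+1 groups:

```python
def usable_fraction(data_drives, parity_drives):
    """Fraction of raw capacity available for data in a parity RAID group."""
    return data_drives / (data_drives + parity_drives)

# RAID 5 3+1: three data segments, one parity segment
raid5_3p1 = usable_fraction(3, 1)     # 0.75

# RAID 6 6+2: six data segments, two parity segments
raid6_6p2 = usable_fraction(6, 2)     # 0.75

print(raid5_3p1 == raid6_6p2)         # True: same cost per usable megabyte
```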
Active-active (for example, Symmetrix arrays)
In an active-active storage system, if there are multiple interfaces to a logical device, they all provide equal access to the logical device. Active-active means that all interfaces to a device are active simultaneously.
Active-passive (for example, CLARiiON arrays)
Active-passive means that only one interface to a device is active at a time, and any others are passive with respect to that device and waiting to take over if needed.
In an active-passive storage system, if there are multiple interfaces to a logical device, one of them is designated as the primary route to the device (that is, the device is assigned to that interface card). Typically, assigned devices are distributed equally among interface cards. I/O is not directed to paths connected to a non-assigned interface. Normal access to a device through any interface card other than its assigned one is either impossible (for example, on CLARiiON arrays) or possible but much slower than access through the assigned interface card.
In the event of a failure, logical devices must be moved to another interface. If an interface card fails, logical devices are reassigned from the broken interface to another interface; this reassignment is initiated by the other, functioning interface. If all paths from a host to an interface fail, logical devices accessed on those paths are reassigned to another interface with which the host can still communicate. This reassignment is initiated by either Application-Transparent Failover (ATF) or PowerPath, which instructs the storage system to make the reassignment. These reassignments are known as trespassing. Trespassing can take several seconds to complete; however, I/Os do not fail during it. After devices are trespassed, ATF or PowerPath detects the change and seamlessly routes data via the new route. After a trespass, logical devices can be trespassed back to their assigned interface. This occurs automatically if PowerPath's periodic autorestore feature is enabled, or manually if powermt restore is run, which is the faster approach. If ATF is in use, a manual restore of the path can be executed to restore the original path.
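To illustrate the trespass behavior described above, here is a minimal Python sketch. It is purely illustrative: it is not PowerPath, ATF, or any EMC multipathing API, and the class and method names are invented for this example.

```python
# Illustrative sketch only -- models the active-passive idea: each LUN is owned
# by one storage processor (SP), and loss of the owning SP triggers a "trespass"
# of the LUN to the peer SP.

class ActivePassiveLun:
    def __init__(self, name, assigned_sp, peer_sp):
        self.name = name
        self.assigned_sp = assigned_sp   # SP that currently owns the LUN
        self.peer_sp = peer_sp           # SP that can take over on failure

    def send_io(self, alive_sps):
        # Normal case: I/O goes through the assigned (owning) SP only.
        if self.assigned_sp in alive_sps:
            return f"I/O to {self.name} via {self.assigned_sp}"
        # Failure case: trespass the LUN to the peer SP, then retry the I/O.
        if self.peer_sp in alive_sps:
            self.assigned_sp, self.peer_sp = self.peer_sp, self.assigned_sp
            return f"trespassed {self.name}; I/O now via {self.assigned_sp}"
        raise IOError(f"no path available to {self.name}")

lun = ActivePassiveLun("LUN_5", assigned_sp="SP-A", peer_sp="SP-B")
print(lun.send_io({"SP-A", "SP-B"}))   # normal I/O through SP-A
print(lun.send_io({"SP-B"}))           # SP-A lost: LUN trespasses to SP-B
```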
We have already discussed the Symmetrix DMX-3; now let's talk about the Symmetrix device. The DMX-3 system applies a high degree of virtualization between what the host sees and the actual disk drives. A Symmetrix device has a logical volume address that the host can address. To be clear: a Symmetrix device is not a physical disk. Before a host can actually see a Symmetrix device, you need to define a path, which means mapping the device to a front-end director and then setting the FA port attributes for the specific host. Let's not discuss configuration details now; I am trying to explain what a Symmetrix device is, given that it is not a physical disk, and how it is created.
You can create up to four mirrors for each Symmetrix device. The mirror positions are designated M1, M2, M3, and M4. When we create a device and specify its configuration type, the Symmetrix system maps the device to one or more complete disks, or parts of disks, known as hypervolumes (hypers). As a rule, a device maps to at least two mirrors, meaning hypers on two different disks, to maintain multiple copies of the data.
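As a way to picture that mapping, here is a toy Python sketch (invented for illustration, not a Solutions Enabler or SYMCLI data structure) of a device with mirror positions M1-M4, each backed by a hyper on a different physical disk:

```python
from dataclasses import dataclass, field

@dataclass
class Hyper:
    physical_disk: str   # physical spindle the hyper lives on
    size_gb: int         # slice of that disk used by this hyper

@dataclass
class SymmetrixDevice:
    name: str
    mirrors: dict = field(default_factory=dict)   # mirror position (M1..M4) -> Hyper

    def add_mirror(self, position, hyper):
        assert position in ("M1", "M2", "M3", "M4"), "only four mirror positions exist"
        self.mirrors[position] = hyper

# A two-way mirrored device: hypers on two different physical disks.
dev = SymmetrixDevice(name="0A3C")
dev.add_mirror("M1", Hyper(physical_disk="disk_16C", size_gb=17))
dev.add_mirror("M2", Hyper(physical_disk="disk_02D", size_gb=17))
print(sorted(dev.mirrors))   # ['M1', 'M2']
```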
Many users have asked me about the basic differences between EMC clones, mirrors, and snapshots. The terminology is genuinely confusing because, logically, most things are the same; the only real differences are the implementation and the purpose of use. Here are the basic and most common differences:
1) A clone is a full copy of the data in a source LUN. A snapshot is a point-in-time "virtual" copy that does not occupy a full copy's worth of disk space.
2) A snapshot can be created or destroyed in seconds, unlike a clone or mirror. A clone, for example, can take minutes to hours to create depending on the size of the source LUN.
3) A clone or mirror requires exactly the same amount of disk space as the source LUN. A snapshot cache LUN generally requires approximately 10% to 20% of the source LUN size (a quick sizing sketch follows this comparison).
4) A clone is an excellent on-array solution that enables you to recover from a data corruption issue. Mirrors are designed for off-site data recovery.
5) A clone is typically fractured after it is synchronized while a mirror is not fractured but instead is actively and continuously being synchronized to any changes on the source LUN.
Clones and mirrors are inaccessible to the host until they are fractured. Clones can be easily resynchronized in either direction; this capability is not easily implemented with mirrors.
Restoring data after a source LUN failure is instantaneous using clones once a reverse synchronization is initiated. Restore time from a snapshot depends on the time it takes to restore from the network or from a backup tape.
Once a clone is fractured, there is no performance impact (that is, performance is comparable to the performance experienced with a conventional LUN). For snapshots, the performance impact is above average and constant due to copy on first write (COFW).
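As a rough worked example of point 3 above (a rule-of-thumb calculation only, not an EMC sizing tool), the snapshot reserve for a source LUN can be estimated like this:

```python
def estimate_snapshot_reserve_gb(source_lun_gb, change_rate=0.20):
    """Rule-of-thumb snapshot cache/reserve estimate: 10-20% of the source LUN.

    change_rate is the assumed fraction of the source that changes while the
    snapshot is active (0.10 to 0.20 per the guideline above).
    """
    return source_lun_gb * change_rate

# A 500 GB source LUN with the conservative 20% rule of thumb:
print(estimate_snapshot_reserve_gb(500))   # 100.0 GB of reserve space
# A full clone or mirror of the same LUN would need the entire 500 GB.
```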
I have left out one more term: the EMC BCV (Business Continuity Volume). It is a somewhat different concept, though, and I will try to cover it in an upcoming post; I have already discussed the EMC BCV in an older post. In essence, it is more or less cloning as well; only the implementation changes.
As we know, there are different RAID types, but not every RAID type suits every application. We select a RAID type based on the application and its I/O load and usage. There are actually many factors involved in choosing a suitable RAID type for an application. Here is a brief guide to help you select the best RAID type for your environment.
When to Use RAID 5
RAID 5 is favored for messaging, data mining, medium-performance media serving, and RDBMS implementations in which the DBA is effectively using read-ahead and write-behind. If the host OS and HBA are capable of greater than 64 KB transfers, RAID 5 is a compelling choice.
These application types are ideal for RAID 5:
1) Random workloads with modest IOPS-per-gigabyte requirements
2) High performance random I/O where writes represent 30 percent or less of the workload
3) A DSS database in which access is sequential (performing statistical analysis on sales records)
4) Any RDBMS table space where record size is larger than 64 KB and access is random (personnel records with binary content, such as photographs)
5) RDBMS log activity
6) Messaging applications
7) Video/Media
When to Use RAID 1/0
RAID 1/0 can outperform RAID 5 in workloads that use very small, random, and write-intensive I/O—where more than 30 percent of the workload is random writes. Some examples of random, small I/O workloads are:
1) High-transaction-rate OLTP
2) Large messaging installations
3) Real-time data/brokerage records
4) RDBMS data tables containing small records that are updated frequently (account balances)
If random write performance is the paramount concern, RAID 1/0 should be used for these applications.
When to Use RAID 3
RAID 3 is a specialty solution. Only five-disk and nine-disk RAID group sizes are valid for CLARiiON RAID 3. The target profile for RAID 3 is large and/or sequential access.
Since Release 13, RAID 3 LUNs can use write cache. The restrictions previously made for RAID 3—single writer, perfect alignment with the RAID stripe—are no longer necessary, as the write cache will align the data. RAID 3 is now more effective with multiple writing streams, smaller I/O sizes (such as 64 KB) and misaligned data.
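For readers wondering what "perfect alignment with the RAID stripe" means in practice, here is a small illustrative check; the 64 KB element size and 4+1 group below are example values chosen for this sketch, not a CLARiiON utility or fixed defaults:

```python
def is_stripe_aligned(io_offset_kb, io_size_kb, stripe_size_kb):
    """True if an I/O starts on a stripe boundary and covers whole stripes."""
    return io_offset_kb % stripe_size_kb == 0 and io_size_kb % stripe_size_kb == 0

# Example: a 4+1 group with a 64 KB element size has a 256 KB data stripe
# (4 data disks x 64 KB).
print(is_stripe_aligned(0, 256, 256))    # True: full-stripe write, no alignment penalty
print(is_stripe_aligned(64, 64, 256))    # False: the write cache must coalesce/align it
```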
RAID 3 is particularly effective with ATA drives, bringing their bandwidth performance up to Fibre Channel levels.
When to Use RAID 1
With the advent of 1+1 RAID 1/0 sets in Release 16, there is no good reason to use RAID 1. RAID 1/0 1+1 sets are expandable, whereas RAID 1 sets are not.
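Putting the guidelines above together, here is a rough, illustrative helper. The thresholds are only the rules of thumb from this post, and the function is invented for illustration rather than being an official sizing tool:

```python
def suggest_raid_type(random_write_pct, io_size_kb, sequential):
    """Very rough RAID selection based only on the rules of thumb in this post."""
    # Large and/or sequential access is the target profile for RAID 3,
    # especially on ATA drives.
    if sequential and io_size_kb >= 64:
        return "RAID 3 (or RAID 5)"
    # More than roughly 30 percent small random writes favors RAID 1/0.
    if random_write_pct > 30:
        return "RAID 1/0"
    # Otherwise RAID 5 is usually the compelling choice.
    return "RAID 5"

print(suggest_raid_type(random_write_pct=50, io_size_kb=8,   sequential=False))  # RAID 1/0
print(suggest_raid_type(random_write_pct=20, io_size_kb=64,  sequential=False))  # RAID 5
print(suggest_raid_type(random_write_pct=5,  io_size_kb=256, sequential=True))   # RAID 3 (or RAID 5)
```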