EMC recently announced Symmetrix V-Max, which is built on the Virtual Matrix architecture. Symmetrix V-Max runs the latest Enginuity release, 5874. The 5874 platform supports Symmetrix V-Max emulation level 121 and service processor level 102. The modular design of the V-Max series and Enginuity 5874 ensures data flow and integrity between hardware components. Symmetrix Management Console 7.0 (SMC) is integrated into the service processor, and SMC lets you provision storage in five steps. Enginuity 5874 provides the following enhanced features:

RAID Virtual Architecture :- Enginuity 5874 introduces a new RAID implementation infrastructure. This enhancement increases configuration options in SRDF environments by reducing the number of mirror positions required for RAID 1 and RAID 5 devices. It also provides additional configuration options, for example allowing LUN migrations in a Concurrent or Cascaded SRDF environment. You can migrate devices between RAID levels and storage tiers.

Large Volume Support :- Enginuity 5874 increases the maximum volume size to approximately 240 GB for open systems environments and 223 GB for mainframe environments. The DMX-4 allowed a maximum hyper size of only about 65 GB.

512 Hyper Volumes per Physical Drive :- Enginuity 5874 supports up to 512 hyper volumes on a single drive, twice as many as Enginuity 5773 (DMX-3/DMX-4). You can improve flexibility and capacity utilization by configuring more granular volumes that more closely match space requirements and leave less space unused.

Autoprovisioning Groups :- Autoprovisioning Groups reduce the complexity of Symmetrix device masking by allowing the creation of groups of host initiators, front-end ports, and storage volumes. This provides the ability to mask storage to multiple paths at once instead of one path at a time, reducing the time required and the potential for error in consolidated and virtualized server environments. You can script and schedule batch operations using SMC 7.0.
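For illustration, a minimal symaccess sequence for building such an Autoprovisioning masking view might look like the following sketch (the SID, group names, WWNs, director ports, and device range are placeholders, not values from any particular array):

    # Create an initiator group from the host HBA WWNs (hypothetical WWNs)
    symaccess -sid 1234 create -name app1_ig -type initiator -wwn 10000000c9123456
    symaccess -sid 1234 -name app1_ig -type initiator add -wwn 10000000c9654321

    # Create a port group from the front-end director ports
    symaccess -sid 1234 create -name app1_pg -type port -dirport 7E:0,10E:1

    # Create a storage group from a range of Symmetrix devices
    symaccess -sid 1234 create -name app1_sg -type storage devs 0A10:0A1F

    # Combine the three groups into a masking view; every device is masked to every path in one operation
    symaccess -sid 1234 create view -name app1_mv -ig app1_ig -pg app1_pg -sg app1_sg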

Concurrent Provisioning and Scripts :- Concurrent configuration changes provide the ability to run scripts concurrently instead of serially, improving system management efficiency. Uses for concurrent configuration changes include parallel device mapping, unmapping, and metavolume forming and dissolving from different hosts.
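As a rough sketch of concurrent sessions (the device numbers and command-file names are hypothetical), two symconfigure scripts could be committed at the same time from different management hosts:

    # Host A: form a striped metavolume -- meta1.cmd contains:
    #   form meta from dev 0A10, config=striped;
    #   add dev 0A11:0A13 to meta 0A10;
    symconfigure -sid 1234 -file meta1.cmd commit -noprompt

    # Host B, running concurrently: dissolve a different metavolume -- meta2.cmd contains:
    #   dissolve meta dev 0B20;
    symconfigure -sid 1234 -file meta2.cmd commit -noprompt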

Dynamic Provisioning Enhancements :- Dynamic configuration changes allow the dynamic setting of the BCV and dynamic SRDF device attributes and decrease the impact to host I/O during the corresponding configuration manager operations.
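For example (device numbers are placeholders), the dynamic SRDF and BCV attributes can be set through the configuration manager along these lines:

    # Mark a device as dynamic SRDF capable
    symconfigure -sid 1234 -cmd "set dev 0A10 attribute=dyn_rdf;" commit -noprompt

    # Convert a standard device to a BCV
    symconfigure -sid 1234 -cmd "convert dev 0A11 to BCV;" commit -noprompt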

New Management Integration :- With Enginuity 5874, the Symmetrix Management Console (SMC) and the SMI-S provider are available on the Symmetrix system's Service Processor. This frees host resources and simplifies Symmetrix system management; by attaching the Service Processor to your network, you can open SMC and manage the Symmetrix system from anywhere in the enterprise.

Enhanced Virtual LUN :- With Enginuity 5874, Virtual LUN technology provides the ability to nondisruptively change the physical location on disk and/or the protection type of Symmetrix logical volumes, and allows the migration of open systems, mainframe, and System i volumes to unallocated storage or to existing volumes. Organizations can respond more easily to changing business requirements when using tiered storage in the array.
Enhanced Virtual Provisioning Draining :- With Enginuity 5874, Virtual Provisioning support for draining of data devices allows the nondisruptive removal of one or more data devices from a thin device pool without losing the data that belongs to the thin devices. This feature allows for improved capacity utilization.
Enhanced Virtual Provisioning Support for All RAID Types :- With Enginuity 5874, Virtual Provisioning no longer restricts RAID 5 data devices. Virtual Provisioning now supports all data device RAID types.
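A minimal sketch of a drain operation follows; the pool name and device number are hypothetical, and the exact command-file syntax may vary by Solutions Enabler release:

    # Disabling a data device starts draining its allocated extents to the other devices in the pool
    symconfigure -sid 1234 -cmd "disable dev 0A20 in pool FC_Pool, type=thin;" commit -noprompt

    # Once the drain completes, the device can be removed from the pool
    symconfigure -sid 1234 -cmd "remove dev 0A20 from pool FC_Pool, type=thin;" commit -noprompt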

To set up Replication Manager, you must perform the following tasks:

1) Verify that your environment has the minimum required storage hardware and that the hardware has a standard CLARiiON configuration.
2) Confirm that your Replication Manager hosts (server and clients) are connected to the CLARiiON environment through a LAN connection.
3) Zone the fibre switch appropriately (if applicable). The clients must be able to access all storage arrays they are using and the mount hosts must be able to access all storage in the EMC Replication Storage group.
4) Install all necessary software on each Replication Manager client, server, and mount host. Also install the appropriate firmware and software on the CLARiiON array itself.
5) Modify the clarcnfg file to represent all CLARiiON Arrays.
6) On Solaris hosts, verify that there are enough entries in the sd.conf file to support all dynamic mounts of replica LUNs.
7) Install Replication Manager Client software on each client that has an application with data from which you plan to create replicas.
8) Create a new user account on the CLARiiON and give this new account privileges as an administrator. Replication Manager can use this account to access and manipulate the CLARiiON as necessary.
9) Grant storage processor privileges through the Agent tab of the storage processor properties to allow aviCLI.jar commands from the Replication Manager Client Control Daemon (irccd) process to reach the CLARiiON storage array.
10) Update the agent.config file on each client where Replication Manager is installed to include a line of the form: user system@<SP IP address>, where <SP IP address> is the IP address of a storage processor. You should add an entry for both storage processors in each CLARiiON array that you are using.
11) Verify that you have Clone Private LUNs set up on your CLARiiON storage array.
12) Create a mount storage group for each mount host and make sure that storage group contains at least one LUN and that the LUN is visible to the mount host. This LUN does not have to be dedicated or remain empty; you can use it for any purpose. However, if no LUNs are visible to the Replication Manager mount host, Replication Manager will not operate.
13) Create a storage group named EMC Replication Storage and populate it with free LUNs that you created in advance for Replication Manager to use for storing replicas (a naviseccli sketch follows after this list).
14) Start the Replication Manager Console and connect to your Replication Manager server. You must perform the following steps:
a) Register all Replication Manager clients
b) Run Discover Arrays
c) Run Configure Array for each array discovered
d) Run Discover Storage for each array discovered
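As a sketch of the storage group preparation from steps 12 and 13 (the SP address, group names, host name, and LUN numbers are placeholders), Navisphere Secure CLI can be used along these lines:

    # Create the replica storage group and add the pre-created free LUNs to it
    naviseccli -h 10.0.0.1 storagegroup -create -gname "EMC Replication Storage"
    naviseccli -h 10.0.0.1 storagegroup -addhlu -gname "EMC Replication Storage" -hlu 0 -alu 20
    naviseccli -h 10.0.0.1 storagegroup -addhlu -gname "EMC Replication Storage" -hlu 1 -alu 21

    # Create a mount storage group with at least one visible LUN and connect the mount host to it
    naviseccli -h 10.0.0.1 storagegroup -create -gname mount_host1_sg
    naviseccli -h 10.0.0.1 storagegroup -addhlu -gname mount_host1_sg -hlu 0 -alu 30
    naviseccli -h 10.0.0.1 storagegroup -connecthost -host mount_host1 -gname mount_host1_sg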

The following rules and recommendations apply to CX systems:
1) You cannot use any of the disks 000 through 004 (enclosure 0, loop 0, disks 0-4) as a hot spare in a CX-Series system.
2) The hardware reserves several gigabytes on each of disks 000 through 004 for the cache vault and internal tables. To conserve disk space, you should avoid binding any other disk into a RAID Group that includes any of these disks. Any disk you include in a RAID Group with one of the vault disks 000-004 is bound to match their lower unreserved capacity, resulting in lost storage of several gigabytes per disk (see the sketch after this list).
3) Each disk in the RAID Group should have the same capacity. All disks in a Group are bound to match the smallest capacity disk, and you could waste disk space. The first five drives (000-004) should always be the same size.
4) You cannot mix ATA (Advanced Technology Attachment) and Fibre Channel disk drives within a RAID Group.
5) Hot spares for Fibre Channel drives must be Fibre Channel drives; ATA drives require ATA hot spares.
6) If a storage system will use disks of different speeds (for example, 10K and 15K rpm), EMC recommends that you use disks of the same speed throughout each 15-disk enclosure. If need be, the hardware allows one speed change within an enclosure; place the higher-speed drives in the first (leftmost) drive slots.
7) You should always use disks of the same speed and capacity in any RAID Group.
8) Do not use ATA drives to store boot images of an operating system. You must boot host operating systems from a Fibre Channel drive.
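For illustration of rules 1 and 2 (the SP address, RAID Group ID, disk locations, and LUN number are placeholders), a RAID Group that avoids the vault disks could be created and a LUN bound on it with Navisphere CLI roughly as follows:

    # Create RAID Group 10 from five disks in enclosure 0_0, skipping the vault disks 000-004
    naviseccli -h 10.0.0.1 createrg 10 0_0_5 0_0_6 0_0_7 0_0_8 0_0_9

    # Bind a 100 GB RAID 5 LUN (LUN 20) on that RAID Group, owned by SP A
    naviseccli -h 10.0.0.1 bind r5 20 -rg 10 -cap 100 -sq gb -sp a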

The following are the major configuration steps for the storage, servers, and switches necessary for implementing the CLARiiON.
  1. Install Fibre Channel HBAs in all systems
  2. Install the EMC CLARiiON LP8000 port driver (for Emulex HBAs) on all systems
  3. Connect each host to both switches (Brocade/Cisco/McData)
  4. Connect SP1-A and SP2-A to the first switch
  5. Connect SP1-B and SP2-B to the second switch
  6. Note:- For HA you can use cross-SP connections instead, connecting SPA1 and SPB1 to the first switch and SPA2 and SPB2 to the second switch.
  7. Install the operating system on the Windows/Solaris/Linux/VMware hosts
  8. Connect all hosts to the Ethernet LAN
  9. Install the EMC CLARiiON Agent Configurator/Navisphere Agent on all hosts
  10. Install the EMC CLARiiON ATF software on all hosts if you are not using EMC PowerPath failover software; otherwise, install a supported version of EMC PowerPath on all hosts.
  11. Install Navisphere Manager on one of the NT hosts
  12. Configure Storage Groups using the Navisphere Manager
  13. Assign Storage groups to hosts as dedicated storage/Cluster/Shared Storage
  14. Install cluster software on host.
  15. Test the cluster for node failover
  16. Create RAID Groups with the protection the application requires (RAID 5, RAID 1/0, etc.)
  17. Bind LUNs according to the application device layout requirements.
  18. Add the LUNs to the Storage Group.
  19. Zone the SP ports and host HBAs on both switches.
  20. Register Host on CLARiiON using Navisphere Manager.
  21. Add all hosts to the storage group.
  22. Scan for the new devices on each host.
  23. Label and format the devices on each host (see the sketch after this list).
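As an example of the final host-side steps 22 and 23 (device names are placeholders and the exact procedure depends on the operating system), the rescan and labeling might look like this on Linux and Solaris hosts:

    # Linux: rescan a SCSI host adapter for the newly assigned LUNs, then partition and create a filesystem
    echo "- - -" > /sys/class/scsi_host/host0/scan
    fdisk /dev/sdc
    mkfs -t ext3 /dev/sdc1

    # Solaris: rebuild the device tree, then label the new LUNs with the format utility
    devfsadm
    format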

EMC is introducing a revolutionary new Virtual Matrix architecture within the Symmetrix system family which redefines high-end storage capabilities. The new Symmetrix V-Max system architecture allows for unprecedented levels of scalability, and robust high availability is enabled by clustering, with fully redundant V-Max Engines and interconnects. Symmetrix V-Max, along with Enginuity 5874, delivers unprecedented performance, availability, functionality, and economic advantages.

The Symmetrix V-Max series, with its unique scale-out Virtual Matrix architecture, can be configured with 96 to 2,400 drives and usable capacity up to 2 PB. Systems provide up to 944 GB of mirrored global memory and up to 128 Fibre Channel ports, 64 FICON ports, 64 Gigabit Ethernet ports, or 64 iSCSI connections. The Symmetrix V-Max series is a distributed multi-node storage system that can scale from one to eight highly available V-Max Engines. Systems are configured around a central system bay and adjacent storage bays of up to 240 disks each. A full range of drive options is available, scaling from ultra-fast enterprise Flash drives, to Fibre Channel drives, to the highest-capacity 1 TB SATA II drives.

Enhanced device configuration and replication operations result in simpler, faster, and more efficient management of large virtual and physical environments. This allows organizations to save on administrative costs, reduce the risk of operational errors, and respond rapidly to changing business requirements. Enginuity 5874 also introduces cost- and performance-optimized business continuity solutions, including the zero-RPO two-site long-distance solution.

Veritas Disk Group Configuration Guidelines:-

1) Use multiple Disk Groups—preferably a minimum of four; place the DATA, REDO, TEMP, UNDO, and FRA archive logs in different (separate) Veritas Disk Groups

2) Optimally, use RAID 1 for tier 1 storage

3) Configure Disk Groups so that each contains LUNs of the same size and performance characteristics

4) Distribute Veritas Disk Group members over as many spindles as is practical for the site's configuration and operational needs (see the sketch after this list)
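A minimal sketch of creating separate Disk Groups (the PowerPath pseudo-device names and Disk Group names are placeholders and platform dependent):

    # Initialize the devices for VxVM use and build one Disk Group per data type
    vxdisksetup -i emcpower10
    vxdisksetup -i emcpower11
    vxdg init data_dg data_dg01=emcpower10
    vxdg init redo_dg redo_dg01=emcpower11
    # Repeat for temp_dg, undo_dg, and fra_dg so each area lives in its own Disk Group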

Data Striping and Load Balancing:-

1) Veritas software-level striping: layout=stripe ncols=10 stripeunit=128k (see the example after this list)

2) Storage-level striping further parallelizes the individual I/O requests within storage

3) Using the storage RAID protection, the amount of I/O traffic (host to storage) is reduced

4) EMC PowerPath should be used for load balancing and path failover

5) Use of metavolumes is optional

a) There is an upper limit on the number of LUNs that a host can address—typically ranging from 256 to 1,024 per HBA.

b) When these limits are reached, metavolumes are a convenient way to access more Symmetrix hypervolumes.
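For illustration (the Disk Group, volume name, size, and column count are placeholders), the software-level striping and PowerPath load balancing described above could be applied as follows:

    # Create a 10-column striped volume with a 128 KB stripe unit in the data Disk Group
    vxassist -g data_dg make data_vol 32g layout=stripe ncol=10 stripeunit=128k

    # Verify PowerPath multipathing and set the Symmetrix-optimized load-balancing policy
    powermt display dev=all
    powermt set policy=so dev=all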


Volume Configuration with Veritas (Hypervolumes):


1) Five Veritas Disk Groups were created

2) Five Disk Groups are used because this number provides better granularity for performance planning

3) The use of five Disk Groups also provides increased flexibility when planning for the utilization of EMC replication technology within the context of an enterprise-scale workload

4) Having five Disk Groups permits the placement of data onto different storage tiers if desired

Hypervolume   Purpose   Size
1             DATA      32 GB
2             REDO      400 MB
3             DATA      32 GB
4             FRA       30 GB
5             TEMP      10 GB
6             FRA       30 GB

Average disk utilization for RAID 1 should be below 150 IOPS per disk and should not go above 200 IOPS per disk for the configuration below; at 150 IOPS per disk, the 80 physical disks sustain roughly 12,000 back-end IOPS in total (about 16,000 IOPS at the 200 IOPS ceiling).

-- 80 physical disks (40 mirrored pairs)

-- 240 devices visible to Veritas

-- Average user count ~ 16,000
