Any disk drive from any manufacturer can exhibit sector read errors due to media defects. This is a known and accepted reality in the disk drive industry, particularly with the high recording densities employed by recent products. These media defects only affect the drive’s ability to read data from a specific sector; they do not indicate general unreliability of the disk drive. The disk drives that EMC purchases from its vendors are within specifications for soft media errors according to the vendors as well as EMC’s own Supply Base Management organization.

Prior to shipment from manufacturing, disk drives have a surface scan operation performed that detects and reallocates any sectors that are defective. This operation is run to reduce the possibility that a disk drive will experience soft media errors in operation. Improper handling after leaving EMC manufacturing can lead to the creation of additional media defects, as can improper drive handling during installation or replacement.

When a disk drive has trouble reading data from a sector, it automatically attempts to recover the data through its various internal methods. Whether or not the drive eventually succeeds in reading the sector, it reports the event to FLARE. FLARE in turn logs the event as a “Soft Media Error” (event code 820) and reallocates the sector to a spare physical location on the drive (this does not affect the logical address of the sector). If the drive eventually succeeded in reading the sector (event code 820 with sub-code 22), FLARE writes that data directly to the new physical location. If the correct sector data was not available (event code 820 with sub-code 05), the data must be recovered by other means, such as RAID redundancy. EMC provides tools such as Sniffer, the FBI Tool, and SMART technology to verify disks and examine the details of these soft media errors.

We discussed Symmetrix Virtual Provisioning in a previous post. Now we will cover Virtual Provisioning configuration. Make sure you understand your storage environment before running the commands below.

Configuring and viewing data devices and pools:

Data devices are devices with the datadev attribute, and only data devices can be part of a thin pool. Depending on the specific Enginuity level, devices with different protection schemes are supported in thin pools. Devices with the datadev attribute are used exclusively for populating thin pools.

Create a command file (thin.txt) with the following syntax:

create dev count=10, config=2-Way-Mir, attribute=datadev, emulation=FBA, size=4602;

# symconfigure -sid 44 -file thin.txt commit -v -nop

A Configuration Change operation is in progress. Please wait...
Establishing a configuration change session...............Established.
Processing symmetrix 000190101244
{
create dev count=10, size=4602, emulation=FBA,
config=2-Way Mir, mvs_ssid=0000, attribute=datadev;
}
Performing Access checks..................................Allowed.
Checking Device Reservations..............................Allowed.
Submitting configuration changes..........................Submitted
…..
…..
…..
Step 125 of 173 steps.....................................Executing.
Step 130 of 173 steps.....................................Executing.
Local: COMMIT............................................Done.
Terminating the configuration change session..............Done.

The configuration change session has successfully completed.

# symdev list -sid 44 -datadev

Symmetrix ID: 000190101244
Device Name Directors Device
--------------------------- ------------- -------------------------------------
Sym Physical SA :P DA :IT Config Attribute Sts Cap(MB)
--------------------------- ------------- -------------------------------------
10C4 Not Visible ???:? 01A:C4 2-Way Mir N/A (DT) RW 4314
10C5 Not Visible ???:? 16C:D4 2-Way Mir N/A (DT) RW 4314
10C6 Not Visible ???:? 15B:D4 2-Way Mir N/A (DT) RW 4314
10C7 Not Visible ???:? 02D:C4 2-Way Mir N/A (DT) RW 4314
10C8 Not Visible ???:? 16A:D4 2-Way Mir N/A (DT) RW 4314
10C9 Not Visible ???:? 01C:C4 2-Way Mir N/A (DT) RW 4314
10CA Not Visible ???:? 16B:C4 2-Way Mir N/A (DT) RW 4314


A thin pool can be created using the symconfigure command, without adding data devices:

# symconfigure -sid 44 -cmd "create pool Storage type=thin;" commit -nop

Once the pool is created, data devices can be added to the pool and enabled, and thin devices can be bound to it.
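A sketch of those steps, reusing the SID, device range, and pool name Storage from the earlier examples (substitute the values from your own environment):

```shell
# Add the data devices created above to the pool and enable them
symconfigure -sid 44 -cmd "add dev 10C4:10C8 to pool Storage type=thin, member_state=ENABLE;" commit -nop

# Create thin devices (TDEVs) and bind them to the pool
symconfigure -sid 44 -cmd "create dev count=4, size=4602, emulation=FBA, config=TDEV, binding to pool=Storage;" commit -nop

# Verify the pool, its data devices, and allocation
symcfg -sid 44 show -pool Storage -thin -detail
```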

EMC recently announced Symmetrix V-Max, which is based on the Virtual Matrix architecture. Symmetrix V-Max runs the latest Enginuity release, 5874. The 5874 platform supports Symmetrix V-Max emulation level 121 and service processor level 102. The modular design of the V-Max series and Enginuity 5874 ensures data flow and integrity between hardware components. Symmetrix Management Console 7.0 (SMC) is integrated into the service processor, and SMC allows you to provision storage in five steps. Enginuity 5874 provides the following enhanced features:

RAID Virtual Architecture :- Enginuity 5874 introduces a new RAID implementation infrastructure. This enhancement increases configuration options in SRDF environments by reducing the number of mirror positions for RAID 1 and RAID 5 devices. This enhancement also provides additional configuration options, for example, allowing LUN migrations in a Concurrent or Cascaded SRDF environment. You can migrate devices between RAID levels and tiers.

Large Volume Support :- Enginuity 5874 increases the maximum volume size to approximately 240 GB for open systems environments and 223 GB for mainframe environments. The DMX-4 allowed a maximum hyper size of only 65 GB.

512 Hyper Volumes per Physical Drive :- Enginuity 5874 supports up to 512 hyper volumes on a single drive, twice as many as Enginuity 5773 (DMX-3/DMX-4). You can improve flexibility and capacity utilization by configuring more granular volumes that more closely match your space requirements and leave less space unused.

Autoprovisioning Groups :- Autoprovisioning groups reduce the complexity of Symmetrix device masking by allowing the creation of groups of host initiators, front-end ports, and storage volumes. This provides the ability to mask storage to multiple paths at once instead of one path at a time, reducing the time required and the potential for error in consolidated and virtualized server environments. You can script and schedule batch operations using SMC 7.0.
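An autoprovisioning group setup can be sketched with the symaccess command (the group names, WWN, and director ports below are placeholders for illustration, not values from this array):

```shell
# Create an initiator group from the host HBA WWN (placeholder WWN)
symaccess -sid 44 create -name host1_ig -type initiator -wwn 10000000c9123456

# Create a port group from front-end director ports (placeholder ports)
symaccess -sid 44 create -name fa_pg -type port -dirport 7E:0,10E:0

# Create a storage group from a device range
symaccess -sid 44 create -name host1_sg -type storage devs 10C4:10C7

# Combine the three groups into a masking view; all paths are masked in one step
symaccess -sid 44 create view -name host1_mv -ig host1_ig -pg fa_pg -sg host1_sg
```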

Concurrent Provisioning and Scripts :- Concurrent configuration changes provide the ability to run scripts concurrently instead of serially, improving system management efficiency. Uses for concurrent configuration changes include parallel device mapping, unmapping, metavolume form and dissolve from different hosts.

Dynamic Provisioning Enhancements :- Dynamic configuration changes allow the dynamic setting of the BCV and dynamic SRDF device attributes and decrease the impact to host I/O during the corresponding configuration manager operations.

New Management Integration :- With Enginuity 5874, the Symmetrix Management Console (SMC) and SMI-S provider are available on the Symmetrix system's Service Processor. This frees host resources and simplifies Symmetrix system management; by attaching the Service Processor to your network, you can open SMC and manage the Symmetrix system from anywhere in the enterprise.

Enhanced Virtual LUN :- With Enginuity 5874, Virtual LUN technology provides the ability to nondisruptively change the physical location on disk and/or the protection type of Symmetrix logical volumes, and allows the migration of open systems, mainframe, and System i volumes to unallocated storage or to existing volumes. Organizations can respond more easily to changing business requirements when using tiered storage in the array.
Enhanced Virtual Provisioning: Draining :- With Enginuity 5874, Virtual Provisioning support for draining of data devices allows the nondisruptive removal of one or more data devices from a thin device pool without losing the data that belongs to the thin devices. This feature allows for improved capacity utilization.

Enhanced Virtual Provisioning: Support for All RAID Types :- With Enginuity 5874, Virtual Provisioning no longer excludes RAID 5 data devices; all data device RAID types are now supported.

To setup Replication Manager you must perform the following tasks:

1) Verify that your environment has the minimum required storage hardware and that the hardware has a standard CLARiiON configuration.
2) Confirm that your Replication Manager hosts (server and clients) are connected to the CLARiiON environment through a LAN connection.
3) Zone the fibre switch appropriately (if applicable). The clients must be able to access all storage arrays they are using and the mount hosts must be able to access all storage in the EMC Replication Storage group.
4) Install all necessary software on each Replication Manager client , server, and mount host. Also install the appropriate firmware and software on the CLARiiON array itself.
5) Modify the clarcnfg file to represent all CLARiiON Arrays.
6) On Solaris hosts, verify that there are enough entries in the sd.conf file to support all dynamic mounts of replica LUNs.
7) Install Replication Manager Client software on each client that has an application with data from which you plan to create replicas.
8) Create a new user account on the CLARiiON and give this new account privileges as an administrator. Replication Manager can use this account to access and manipulate the CLARiiON as necessary.
9) Grant storage processor privileges through the agent tab of storage processor properties to allow aviCLI.jar commands from Replication Manager Client Control Daemon (irccd) process to reach the CLARiiON storage array.
10) Update the agent.config file on each client where Replication Manager is installed to include a link to: user system@ where is the IP address of a storage processor. You should add a link to both storage processors in each StorageWorks array that you are using.
11) Verify that you have Clone Private LUNs set up on your CLARiiON storage array. --Create a mount storage group for each mount host and make sure that storage group contains at least one LUN, and that the LUN is visible to the mount host. This LUN does not have to be dedicated or remain empty; you can use it for any purpose. However if no LUNs are visible to the Replication Manager mount host, Replication Manager will not operate.
12) Create a storage group named EMC Replication Storage and populate it with free LUNs that you created in advance for Replication Manager to use for storing replicas.
13) Start the Replication Manager Console and connect to your Replication Manager server. You must perform the following steps:
a) Register all Replication Manager clients
b) Run Discover Arrays
c) Run Configure Array for each array discovered
d) Run Discover Storage for each array discovered

The following rules and recommendations apply to CX systems:
1) You cannot use any of the disks 000 through 004 (enclosure 0, loop 0, disks 0-4) as a hot spare in a CX-Series system.
2) The hardware reserves several gigabytes on each of disks 000 through 004 for the cache vault and internal tables. To conserve disk space, you should avoid binding any other disk into a RAID Group that includes any of these disks. Any disk you include in a RAID Group with a cache disk 000-004 is bound to match the lower unreserved capacity, resulting in lost storage of several gigabytes per disk.
3) Each disk in the RAID Group should have the same capacity. All disks in a Group are bound to match the smallest capacity disk, and you could waste disk space. The first five drives (000-004) should always be the same size.
4) You cannot mix ATA (Advanced Technology Attachment) and Fibre Channel disk drives within a RAID Group.
5) Hot spares for Fibre Channel drives must be Fibre Channel drives; ATA drives require ATA hot spares.
6) If a storage system will use disks of different speeds (for example, 10K and 15K rpm), then EMC recommends that you use disks of the same speed throughout each 15-disk enclosure. For any enclosure, the hardware allows one speed change within an enclosure, so if need be, you may use disks of differing speeds. Place the higher speed drives in the first (leftmost) drive slot(s).
7) You should always use disks of the same speed and capacity in any RAID Group.
8) Do not use ATA drives to store boot images of an operating system. You must boot host operating systems from a Fibre Channel drive.

The following are the major configuration steps for the storage, servers, and switches necessary to implement the CLARiiON.
  1. Install Fibre Channel HBAs in all systems
  2. Install the EMC CLARiiON LP8000 port driver (for Emulex) on all systems
  3. Connect each host to both switches (Brocade/Cisco/McData)
  4. Connect SP1-A and SP2-A to the first switch
  5. Connect SP1-B and SP2-B to the second switch
  6. Note:- For HA, you can use cross-SP connections instead, connecting SPA1 and SPB1 to the first switch and SPB2 and SPA2 to the second switch.
  7. Install the operating system on the Windows/Solaris/Linux/VMware hosts
  8. Connect all hosts to the Ethernet LAN
  9. Install the EMC CLARiiON Agent Configurator/Navisphere Agent on all hosts
  10. Install EMC CLARiiON ATF software on all hosts if you are not using EMC PowerPath failover software; otherwise, install a supported version of EMC PowerPath on all hosts
  11. Install Navisphere Manager on one of the NT hosts
  12. Configure storage groups using Navisphere Manager
  13. Assign storage groups to hosts as dedicated storage/cluster/shared storage
  14. Install cluster software on the hosts
  15. Test the cluster for node failover
  16. Create RAID groups with the protection the application requires (RAID 5, RAID 1/0, etc.)
  17. Bind LUNs according to the application device layout requirements
  18. Add the LUNs to a storage group
  19. Zone the SP ports and host HBAs on both switches
  20. Register the hosts on the CLARiiON using Navisphere Manager
  21. Add all hosts to the storage group
  22. Scan the devices on the hosts
  23. Label and format the devices on the hosts
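Steps 16 through 21 above can be sketched with Navisphere Secure CLI (a hedged example; the SP address, disk IDs, RAID group and LUN numbers, and the group and host names are placeholders):

```shell
SP=10.0.0.1   # SP A management address (placeholder)

# Step 16: create RAID group 10 from five disks (bus_enclosure_disk notation)
naviseccli -h $SP createrg 10 0_0_5 0_0_6 0_0_7 0_0_8 0_0_9

# Step 17: bind a 100 GB RAID 5 LUN (LUN 20) on that RAID group
naviseccli -h $SP bind r5 20 -rg 10 -cap 100 -sq gb

# Steps 18 and 21: create a storage group, add the LUN, and connect the host
naviseccli -h $SP storagegroup -create -gname AppSG
naviseccli -h $SP storagegroup -addhlu -gname AppSG -hlu 0 -alu 20
naviseccli -h $SP storagegroup -connecthost -host host1 -gname AppSG -o
```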

EMC is introducing a revolutionary new Virtual Matrix architecture within the Symmetrix system family which will redefine high-end storage capabilities. This new Symmetrix V-Max system architecture allows for unprecedented levels of scalability. Robust high availability is enabled by clustering, with fully redundant V-Max Engines and interconnects. Symmetrix V-Max, along with Enginuity 5874, delivers unprecedented performance, availability, functionality, and economic advantages. The Symmetrix V-Max series, with its unique scale-out Virtual Matrix architecture, can be configured with 96 to 2,400 drives and usable capacity up to 2 PB. Systems provide up to 944 GB of mirrored global memory and up to 128 Fibre Channel ports, 64 FICON ports, 64 Gigabit Ethernet ports, or 64 iSCSI connections. The Symmetrix V-Max series is a distributed multi-node storage system that can scale from one to eight highly available V-Max Engines. Systems are configured around a central system bay and adjacent storage bays of up to 240 disks each. A full range of drive options is available, scaling from ultra-fast enterprise Flash drives, to Fibre Channel drives, to the highest capacity 1 TB SATA II drives. Enhanced device configuration and replication operations result in simpler, faster, and more efficient management of large virtual and physical environments. This allows organizations to save on administrative costs, reduce the risk of operational errors, and respond rapidly to changing business requirements. Enginuity 5874 also introduces cost- and performance-optimized business continuity solutions, including the zero-RPO 2-site long distance solution.


About Me

Sr. Solutions Architect; Expertise: - Cloud Design & Architect - Data Center Consolidation - DC/Storage Virtualization - Technology Refresh - Data Migration - SAN Refresh - Data Center Architecture More info:- diwakar@emcstorageinfo.com
Blog Disclaimer: “The opinions expressed here are my personal opinions. Content published here is not read or approved in advance by EMC and does not necessarily reflect the views and opinions of EMC.”
EMC Storage Product Knowledge Sharing