Performance Tuning

Posted by Diwakar

Performance tuning has long been a challenge for system administrators and database administrators. As virtualization continues to grow into every aspect of the IT infrastructure, tuning the OS, database, or storage becomes even more complex.

Beyond CPU power and memory size, the disk subsystem handles the movement of data on the system and has a powerful influence on overall response time. The disk layout must also be designed to provide appropriate data protection with overall cost in mind.
Planning ahead is the most effective way to avoid performance issues later, while also providing the flexibility to make adjustments before committing changes to production.

Some fundamental disk terminology:

Alignment – Data block addresses compared to RAID stripe addresses
Coalesce – To bunch multiple smaller IOs together into one larger IO
Concurrency – Multiple threads writing to disk simultaneously
Flush – Writing data held in cache out to disk
Multi-pathing – Concurrent paths to the same disk storage

How to choose a RAID type:

The concept of RAID is familiar to most in the storage industry. To extract the best disk performance, choosing the right RAID type based on IO patterns is very important. The most commonly observed IO patterns are listed later in the article. Since RAID 5 and RAID 1/0 are the most commonly used RAID types in the industry, let's focus on these two for now.



RAID 1/0 – This RAID type works best for random IO patterns, especially for write-intensive applications. If writes are above 20 percent of the IO mix, go with RAID 1/0.

RAID 5 – Compared to RAID 1/0 with the same number of physical spindles, the performance of the two is very close in a read-heavy environment. For instance, a 2+2 RAID 1/0 (4 disks total) will perform similarly to a 3+1 RAID 5 (4 disks total).

On the other hand, if one can afford to ignore the number of physical spindles and consider only the usable capacity (3+3 RAID 1/0 versus 3+1 RAID 5), RAID 1/0 is the way to go.

RAID 1/0 has a higher cost associated with it, while RAID 5 provides more efficient use of disk space. The main drawback of RAID 5 is the rebuild time after a disk replacement.

Number of Operations per RAID type:

RAID 1 and RAID 1/0 require two disk writes for each host-initiated write.
Total IO = host reads + 2 × host writes
RAID 5 (4+1) requires four disk operations per host write; if the data is sequential, a single large full-stripe write can be performed instead.
A RAID 5 write requires 2 reads and 2 writes (read the old data and old parity, write the new data and new parity).
Total IO = host reads + 4 × host writes
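
As a rough illustration of the formulas above, here is a minimal sketch (Python, with purely hypothetical IO numbers and function names) that estimates the back-end disk load a host workload generates under each RAID type:

    # Sketch: approximate back-end disk IOs from host IOs for the two RAID types
    # discussed above (random, small-block IO assumed).
    def backend_ios(host_reads, host_writes, raid_type):
        if raid_type == "RAID 1/0":
            # Each host write is mirrored to two disks.
            return host_reads + 2 * host_writes
        if raid_type == "RAID 5":
            # Each random host write costs 2 reads + 2 writes (data and parity).
            return host_reads + 4 * host_writes
        raise ValueError("unknown RAID type")

    # Example: a host pushing 2000 reads and 500 writes per second.
    for raid in ("RAID 1/0", "RAID 5"):
        print(raid, backend_ios(2000, 500, raid))
    # RAID 1/0 -> 3000 back-end IOs/s, RAID 5 -> 4000 back-end IOs/s

The same arithmetic explains why write-heavy workloads favor RAID 1/0: the parity penalty grows with the write share.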

To re-cap,
. Parity RAID operations increase the disk load; for similar capacity, go for RAID 1/0.
. RAID 5 is better than RAID 1/0 for large sequential IO.
. RAID 3 is much more effective now that cache can be used with RAID 3.
. RAID 1/0 is best for mixed IO types.

Commonly noticed Database IO patterns:

OLTP Log – sequential
OLTP Data – random
Bulk Insert – sequential
Backup – sequential read/write
Restore – sequential read/write
Re-index – sequential read/write
Create Database – sequential read


Knowing your IO personality is important when choosing a RAID type. Using the above notes and the application's IO patterns, one can make an informed choice on the disk layout (see the sketch after this list). Some characteristics of the IO to consider are:
- IO size
- IO Read/Write ratio
- Type of IO - Random v/s Sequential
- Snapshots / Clones
- Application type – OLTP ?
- Bandwidth requirements
- Estimated Growth
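
As a minimal sketch of how those characteristics can drive the decision, the Python snippet below encodes the 20 percent write rule and the sequential-IO note from earlier; the field names and thresholds are illustrative assumptions, not vendor guidance:

    from dataclasses import dataclass

    @dataclass
    class WorkloadProfile:
        io_size_kb: int      # typical IO size
        write_ratio: float   # fraction of IOs that are writes
        sequential: bool     # True if the access pattern is mostly sequential

    def suggest_raid(profile):
        # Heuristics taken from the notes above; adjust for your environment.
        if profile.sequential and profile.io_size_kb >= 64:
            return "RAID 5"      # large sequential IO suits parity RAID
        if profile.write_ratio > 0.20:
            return "RAID 1/0"    # write-heavy random IO suits mirroring
        return "RAID 5"          # read-heavy random IO: the two perform similarly

    # Example: an OLTP data volume with 8 KB random IO and 30% writes.
    print(suggest_raid(WorkloadProfile(io_size_kb=8, write_ratio=0.30, sequential=False)))
    # -> RAID 1/0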


Once you decide which RAID type to use, you can fine-tune the disk subsystem by following vendor-recommended practices such as:

-- LUN distribution
-- Distribute the IO load evenly across the available disk drives
-- Avoid placing primary and BCV/snapshot LUNs on the same physical spindles. (The best way to avoid this is to have separate disk groups for primary data disks and BCVs/clones/snapshots.)
-- Consider using metas or host striping

Cache:
Disk writes are more costly and thus must be given a bigger share of the cache.
Match the cache page size to the IO size to prevent multiple IOs per request.

Stripe Size:
Retain the default stripe size of 64 KB.
The larger the stripe size, the more cache is required and the longer a rebuild takes.

FC or iSCSI:
FC is best for large IO and high bandwidth.
FC is more expensive.
iSCSI involves the lowest cost.
iSCSI works best for OLTP, small-block IO.

Some Best Practice recommendations from Microsoft:

Microsoft has laid out a few guidelines for designing SQL Server database storage:
Use RAID 1+0 for log files
Isolate the log from data at the physical-disk level
Use RAID 1+0 for tempdb

Revisiting the above recommendations periodically to stay on track will go a long way toward extracting the best performance out of your disks.

--Contributed by Suraj Kawlekar


From its inception, Symmetrix was designed with the flexibility to incorporate the latest technology in disk drives, memory, and other components. This effort has enabled the storage platform to evolve to meet the ever-increasing data demands of enterprises and has provided customers with unparalleled investment protection. The first-generation Symmetrix 4200 Integrated Cached Disk Array (ICDA), with a total capacity of 24 gigabytes, was introduced in 1990. The seventh-generation system, the Symmetrix DMX-3, was introduced in July 2005 and features a Direct Matrix Architecture® and a maximum capacity of one petabyte (1,024 terabytes). The Symmetrix platform has continued to improve and evolve to meet the needs of data-intensive organizations worldwide and remains the most successful intelligent storage platform in history.

With more than 68,000 systems shipped from its introduction in 1990 through the end of June 2005, and with more than 400 EMC patents covering its technology, Symmetrix remains the high-end storage market leader and continues to set the standard for mission-critical high-end storage innovation.

1990 – Symmetrix 4200 – ICDA (Integrated Cached Disk Array) Technology, Total Capacity 24 GB

1991 – Symmetrix 4200 – 4 Mb DRAM, 5.25" HDAs, RAID 1 Mirroring

1992 – Symmetrix 4400 – Dynamic Sparing, RMP Call Home

1993 – Symmetrix 4800 – 16 MB DRAM, 1 GB Global Memory, Non-disruptive Microcode, Hypervolume Extension

1994 – Symmetrix 5500-3 – SRDF

1995 – Symmetrix 3.0 (Open Symmetrix) – FWD SCSI Attach, 3.5" HDAs, RAID Protection, SRDF Host Component, Symmetrix Manager

1996 – Symmetrix ESP – Mixed CKD/FBA

1997 – Symmetrix 4.0 – TimeFinder, DataReach, InfoMover, Celerra, FDRSOS, Fibre Channel, PowerPath, UltraSCSI, DMSP

1998 – Symmetrix 4.8 – FC-AL/FC-SW, Symmetrix Optimizer

1999 – Symmetrix 5.0 – 333 MHz PPC, 181 GB Disks, QoS Controls

2000 – Symmetrix 5.5 – 2 Gb Fibre Channel, 400 MHz PPC

2000-2001 – Symmetrix DMX – Direct Matrix, 500 MHz PPC, 2 Gb FC, Back-End Parity RAID

2001-2002 – Symmetrix DMX – 2 Gb FICON, Gigabit Ethernet SRDF, iSCSI, SRDF/A, TimeFinder/Snap

2003 – Symmetrix DMX-2 – 1 GHz PPC, RAID 5 Data Protection, 32 GB Memory Directors

2003-2004 – Symmetrix DMX-2 – SRDF Mode Change, Concurrent SRDF, SRDF/Star, TimeFinder/Clone, Open Replicator

2005 – Symmetrix DMX-3 – 8 Processors per Director, 1.3 GHz PPC, Low-Cost FC Disks, Incrementally Scalable, Up to 2,400 Disks, Open Migrator/LM

2005-2006 – Symmetrix DMX-3 – Dynamic Cache Partitioning, Symmetrix Priority Controls, Virtual LUN Technology, Symmetrix Service Credential, Tamper-Proof Audit Logs, Secure Data Eraser, RAID 6 Protection

2007-2008 – Symmetrix DMX-4 – 4 Gb/s Point-to-Point Back End, FC & SATA Intermix, RSA enVision Integration, Flash Drives, Virtual Provisioning, Cascaded SRDF

There are two primary ways to reduce power consumption: by carefully configuring the storage array itself and by taking advantage of EMC tools. Useful tips that will help you design an efficient DMX-3 array include:

1) Minimizing DA pairs required.

2) Using more daisy chain storage bays to obtain needed capacity with fewer DA pairs.

3) Fully loading drive enclosures with drives (15) to reduce excess power overhead from cooling, logic, and power supply load efficiency.

4) Fully populating your DA pairs before adding additional pairs.

5) Ordering storage bays in half-bay increments to fully utilize enclosures.

6) Using larger capacity drives to reduce spindles.

7) Using tiered storage to reduce the number of higher speed drives when requirements allow.

8) Using RAID 5 or other as opposed to RAID 1 full mirroring.

9) Adding incremental storage bays and DA pairs as demand changes.

10) Using shorter loops for high performance drives, longer loops for lower performance drives.

There are other tools and techniques available as well, including:

11) Using Symmetrix Optimizer to balance performance and create opportunities for using larger capacity drives.

12) Using snaps instead of full-volume copies to conserve space and use fewer drives.

For best performance with most applications, each SP should have its maximum amount of cache memory, and you should use the default settings for the cache properties. Analyzer shows how the cache affects the storage system and lets you tune the cache properties to best suit your application.

A storage-system cache has two parts: a read cache and a write cache. The read cache uses a read-ahead mechanism that lets the storage system prefetch data from the disk. Therefore the data will be ready in the cache when the application needs it. The write cache buffers and optimizes writes by absorbing peak loads, combining small writes, and eliminating rewrites.

You can change read cache size, write cache size, and cache page size to achieve optimal performance. The best sizes of the read and write caches depend on the read/write ratio. A general norm for the ratio of reads to writes is two reads per write; that is, reads represent 66 percent of all I/Os.

Since the contents of write cache are available for read operations as well, you should allocate most of the available SP memory to the write cache. However, since the write cache is flushed after a certain timeout period, a read cache is also required to hold active data for longer periods of time.

Read cache size

The read cache holds data that is expected to be accessed in the near future. If a request for data that is in the cache arrives, the request can be serviced from the cache faster than from the disks. Each request satisfied from cache eliminates the need for a disk access, reducing disk load. If the workload exhibits a “locality of reference” behavior, where a relatively small set of data is accessed frequently and repeatedly, the read cache can improve performance. In read-intensive environments, where more than 70 percent of all requests are reads, the read cache should be large enough to accommodate the dataset that is most frequently accessed. For sequential reads from a LUN, data that is expected to be accessed by subsequent read requests is read (prefetched) into the cache before being requested. Therefore, for optimal performance, the read cache should be large enough to accommodate prefetched data for sequential reads from each LUN.
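
To put the 70 percent guideline in concrete terms, here is a small sketch (Python, with hypothetical counter values) that classifies a workload from observed read/write counts:

    # Sketch: decide whether a workload is read-intensive using the 70% rule above.
    def read_percentage(reads, writes):
        total = reads + writes
        return 100.0 * reads / total if total else 0.0

    reads, writes = 14000, 4000   # hypothetical counters sampled from the array
    pct = read_percentage(reads, writes)
    if pct > 70:
        print("%.0f%% reads: read-intensive, size the read cache for the hot dataset" % pct)
    else:
        print("%.0f%% reads: allocate most SP memory to the write cache" % pct)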

Write cache size

Write cache serves as a temporary buffer where data is held before it is written to the disks. Cache writes are far faster than disk writes. Also, write-cached data is consolidated into larger I/Os when possible and written to the disks more efficiently. (This reduces the number of expensive small writes in the case of RAID 5 LUNs.) In addition, when data is modified frequently, it is overwritten in the cache and written to the disks only once for several updates in the cache, which reduces disk load. Consequently, the write cache absorbs write data during heavy load periods and writes it to the disks, in an optimal fashion, during light load periods. However, if the amount of write data during an I/O burst exceeds the write cache size, the cache fills. Subsequent requests must then wait for cached data to be flushed and for cache pages to become available for writing new data.

The write cache provides sustained write speed by combining sequential RAID 5 write operations and writing them in RAID 3 mode. This eliminates the need to read old data and parity before writing the new data. To take advantage of this feature, the cache must have enough space for one entire stripe of sequential data (typically 64 KB x [number-of-disks -1], or, for a five-disk group, 256 KB) before starting to flush. Note that the sequential stream can be contained in other streams of sequential or random data.
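
To make the stripe arithmetic concrete, here is a one-line calculation (Python) using the 64 KB stripe element size mentioned above:

    # Cache space needed to coalesce one full RAID 5 stripe:
    # stripe element size x number of data disks.
    def full_stripe_kb(number_of_disks, stripe_element_kb=64):
        return stripe_element_kb * (number_of_disks - 1)

    print(full_stripe_kb(5))   # a five-disk (4+1) group needs 256 KB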

Cache page size

This can be 2, 4, 8, or 16 KB. As a general guideline, EMC suggests 8 KB. The ideal cache page size depends on the operating system and application. Analyzer can help you decide which size performs best.
