
For best performance with most applications, each SP should have its maximum amount of cache memory and you should use the default settings for the cache properties. Analyzer shows how the cache affects the storage system, and lets you tune the cache properties to best suit your application.

A storage-system cache has two parts: a read cache and a write cache. The read cache uses a read-ahead mechanism that lets the storage system prefetch data from the disk, so the data is already in the cache when the application needs it. The write cache buffers and optimizes writes by absorbing peak loads, combining small writes, and eliminating rewrites.

You can change read cache size, write cache size, and cache page size to achieve optimal performance. The best sizes of the read and write caches depend on the read/write ratio. A general norm for the ratio of reads to writes is two reads per write; that is, reads represent 66 percent of all I/Os.

Since the contents of write cache are available for read operations as well, you should allocate most of the available SP memory to the write cache. However, since the write cache is flushed after a certain timeout period, a read cache is also required to hold active data for longer periods of time.
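As a rough illustration of this guideline, the short Python sketch below splits a hypothetical amount of SP cache memory between read and write cache, favoring the write cache. The 80/20 split and the 3072 MB total are illustrative assumptions, not EMC recommendations; the right values depend on your read/write ratio and array model.

# A rough sketch (plain Python, not an EMC tool) of the guideline above:
# favor the write cache when dividing SP cache memory. The 80/20 split and
# the 3072 MB total are illustrative assumptions, not EMC recommendations.

def suggest_cache_split(total_cache_mb, write_fraction=0.8):
    """Split total SP cache memory, giving most of it to the write cache."""
    write_mb = int(total_cache_mb * write_fraction)
    read_mb = total_cache_mb - write_mb
    return read_mb, write_mb

read_mb, write_mb = suggest_cache_split(3072)   # e.g. 3 GB of SP cache
print("read cache: %d MB, write cache: %d MB" % (read_mb, write_mb))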

Read cache size

The read cache holds data that is expected to be accessed in the near future. If a request for data that is in the cache arrives, the request can be serviced from the cache faster than from the disks. Each request satisfied from cache eliminates the need for a disk access, reducing disk load. If the workload exhibits a “locality of reference” behavior, where a relatively small set of data is accessed frequently and repeatedly, the read cache can improve performance. In read-intensive environments, where more than 70 percent of all requests are reads, the read cache should be large enough to accommodate the dataset that is most frequently accessed. For sequential reads from a LUN, data that is expected to be accessed by subsequent read requests is read (prefetched) into the cache before being requested. Therefore, for optimal performance, the read cache should be large enough to accommodate prefetched data for sequential reads from each LUN.
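
To see why read-ahead helps sequential workloads, here is a toy Python sketch of a prefetching cache. The ReadAheadCache class, block numbers, and prefetch depth of 4 are hypothetical and are not how the CLARiiON implements prefetch; they only show that, with read-ahead, most of a sequential stream is served from cache.

# Toy model of a read-ahead cache (an assumption for illustration,
# not CLARiiON internals).

class ReadAheadCache:
    def __init__(self, prefetch_depth=4):
        self.prefetch_depth = prefetch_depth
        self.cached_blocks = set()

    def read(self, block):
        if block in self.cached_blocks:
            return "cache hit"
        # Miss: read the block and prefetch the next few sequential blocks.
        self.cached_blocks.update(range(block, block + self.prefetch_depth + 1))
        return "disk read + prefetch"

cache = ReadAheadCache()
for block in range(8):                # a purely sequential read stream
    print(block, cache.read(block))   # only blocks 0 and 5 go to disk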

Write cache size

The write cache serves as a buffer where data is stored temporarily before it is written to the disks. Cache writes are far faster than disk writes. Also, write-cached data is consolidated into larger I/Os when possible and written to the disks more efficiently. (This reduces the number of expensive small writes on RAID 5 LUNs.) In addition, when data is modified frequently, it is overwritten in the cache and written to the disks only once for several updates in the cache, which reduces disk load. Consequently, the write cache absorbs write data during heavy load periods and writes it to the disks, in an optimal fashion, during light load periods. However, if the amount of write data during an I/O burst exceeds the write cache size, the cache fills. Subsequent requests must wait for cached data to be flushed and for cache pages to become available for writing new data.
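
The following toy Python sketch shows the rewrite-absorption effect described above: repeated host writes to the same cache page are overwritten in cache and hit the disks only once at flush time. The page numbers and data values are made up for illustration.

# Toy model (not CLARiiON internals): rewrites are absorbed in the write cache.
dirty_pages = {}                        # cache page number -> latest data
host_writes = [(10, "a"), (11, "b"), (10, "c"), (10, "d"), (12, "e")]

for page, data in host_writes:
    dirty_pages[page] = data            # a rewrite simply overwrites the cached page

print("%d host writes -> %d pages written to disk at flush time"
      % (len(host_writes), len(dirty_pages)))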

The write cache provides sustained write speed by combining sequential RAID 5 write operations and writing them in RAID 3 mode, that is, as full-stripe writes. This eliminates the need to read old data and parity before writing the new data. To take advantage of this feature, the cache must hold one entire stripe of sequential data (typically 64 KB x [number-of-disks - 1], or 256 KB for a five-disk group) before it starts to flush. Note that the sequential stream can be intermixed with other streams of sequential or random data.
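
Using the figures quoted above, here is a quick Python check of the full-stripe size (a 64 KB stripe element is assumed, as in the example).

STRIPE_ELEMENT_KB = 64                  # stripe element size quoted above

def full_stripe_kb(disks_in_raid5_group):
    # User data per full stripe: one element per disk, minus one element's
    # worth of parity.
    return STRIPE_ELEMENT_KB * (disks_in_raid5_group - 1)

print(full_stripe_kb(5))    # 256 (KB) for a five-disk (4+1) RAID 5 group
print(full_stripe_kb(9))    # 512 (KB) for a nine-disk (8+1) group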

Cache page size

The cache page size can be 2, 4, 8, or 16 KB. As a general guideline, EMC suggests 8 KB. The ideal cache page size depends on the operating system and application. Analyzer can help you decide which size performs best.

If you open Navisphere Manager, select an array, and open its properties, you will see a Cache tab that shows the cache configuration. There you set values such as the Low Watermark and High Watermark. Have you ever wondered how the CLARiiON behaves at these percentages? Let's take a closer look at the flushing methods the CLARiiON uses.
There are many situations in which the storage processor has to flush the write cache to keep some free space in cache memory. (The amount of cache memory differs among CLARiiON series.)

There are three levels of flushing:
IDLE FLUSHING (LUN is not busy and user I/O continues): Idle flushing keeps some free space in the write cache when I/O activity to a particular LUN is relatively low. If data immediacy were most important, idle flushing would be sufficient. If idle flushing cannot maintain free space, though, watermark flushing is used.

WATERMARK FLUSHING: The array allows the user to set two levels called watermarks: the High Water Mark (HWM) and the Low Water Mark (LWM). The base software tries to keep the number of dirty pages in cache between those two levels. If the number of dirty pages in write cache reaches 100%, forced flushing is used.

FORCED FLUSHING: Forced flushes also create space for new I/Os, though they dramatically affect overall performance. When forced flushing takes place, all read and write operations are halted to clear space in the write cache. The time taken for a forced flush is very short (milliseconds), and the array may still deliver acceptable performance even if the rate of forced flushes is in the 50-per-second range.
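
Below is a simplified Python sketch of how the dirty-page percentage might map onto these three flushing levels. It is an interpretation of the behavior described above, not FLARE source code, and the 60/80 watermark values are only illustrative defaults.

# Simplified interpretation of the three flushing levels (an assumption,
# not actual CLARiiON/FLARE code). LWM/HWM defaults are illustrative.

def flushing_level(dirty_pct, lwm=60.0, hwm=80.0):
    if dirty_pct >= 100.0:
        return "forced flushing: host I/O is held while the write cache is cleared"
    if dirty_pct >= hwm:
        return "watermark flushing: flush dirty pages back down toward the LWM"
    return "idle flushing: background flushing while LUN activity is low"

for pct in (30, 70, 90, 100):
    print(pct, "->", flushing_level(pct))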
