
For best performance with most applications, each SP should be configured with its maximum amount of cache memory, and you should use the default settings for the cache properties. Analyzer shows how the cache affects the storage system, and lets you tune the cache properties to best suit your application.

A storage-system cache has two parts: a read cache and a write cache. The read cache uses a read-ahead mechanism that lets the storage system prefetch data from disk, so the data is already in the cache when the application needs it. The write cache buffers and optimizes writes by absorbing peak loads, combining small writes, and eliminating rewrites.

You can change read cache size, write cache size, and cache page size to achieve optimal performance. The best sizes of the read and write caches depend on the read/write ratio. A general norm for the ratio of reads to writes is two reads per write; that is, reads represent 66 percent of all I/Os.

Since the contents of the write cache are available for read operations as well, you should allocate most of the available SP memory to the write cache. However, since the write cache is flushed after a certain timeout period, a read cache is also needed to hold active data for longer periods of time.
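
As a quick sanity check, you can compute the read/write mix directly from read and write throughput counters. Here is a minimal Python sketch of the arithmetic behind the "two reads per write" norm; the counter names and values are illustrative assumptions, not Analyzer output.

```python
def read_write_mix(read_iops: float, write_iops: float) -> float:
    """Return the fraction of all I/Os that are reads."""
    total = read_iops + write_iops
    return read_iops / total if total else 0.0

# Example: the "two reads per write" norm from the text.
mix = read_write_mix(read_iops=200.0, write_iops=100.0)
print(f"reads are {mix:.0%} of all I/Os")  # -> reads are 67% of all I/Os
```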

Read cache size

The read cache holds data that is expected to be accessed in the near future. If a request for data that is in the cache arrives, the request can be serviced from the cache faster than from the disks. Each request satisfied from cache eliminates the need for a disk access, reducing disk load. If the workload exhibits a “locality of reference” behavior, where a relatively small set of data is accessed frequently and repeatedly, the read cache can improve performance. In read-intensive environments, where more than 70 percent of all requests are reads, the read cache should be large enough to accommodate the dataset that is most frequently accessed. For sequential reads from a LUN, data that is expected to be accessed by subsequent read requests is read (prefetched) into the cache before being requested. Therefore, for optimal performance, the read cache should be large enough to accommodate prefetched data for sequential reads from each LUN.
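
For the sequential-read case, a rough lower bound on read cache size is the prefetch amount per LUN times the number of LUNs with active sequential streams. A minimal sketch, assuming a 64 KB prefetch per LUN (an illustrative figure, not a fixed CLARiiON value):

```python
def min_read_cache_mb(sequential_luns: int, prefetch_kb_per_lun: int = 64) -> float:
    """Rough lower bound on read cache (MB): enough to hold prefetched
    data for every LUN with an active sequential read stream."""
    return sequential_luns * prefetch_kb_per_lun / 1024.0

print(min_read_cache_mb(20))  # 20 sequential LUNs -> 1.25 MB just for prefetch
```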

Write cache size

The write cache serves as a buffer where data is stored temporarily before it is written to the disks. Cache writes are far faster than disk writes. Also, write-cached data is consolidated into larger I/Os when possible and written to the disks more efficiently. (This reduces the expensive small writes in the case of RAID 5 LUNs.) In cases where data is modified frequently, the data is overwritten in the cache and written to the disks only once for several updates in the cache, which reduces disk load. Consequently, the write cache absorbs write data during heavy load periods and writes it to the disks, in an optimal fashion, during light load periods. However, if the amount of write data during an I/O burst exceeds the write cache size, the cache fills. Subsequent requests must wait for cached data to be flushed and for cache pages to become available for writing new data.
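
The burst behavior described above is easy to model: while free pages remain, hosts see cache-speed writes; once the incoming rate exceeds the flush rate for long enough to fill the cache, hosts are gated by flush speed. A toy model (all numbers are illustrative assumptions, not CLARiiON measurements):

```python
def burst_absorption_s(cache_mb: float, burst_mb_s: float, flush_mb_s: float) -> float:
    """Seconds until a sustained write burst fills the write cache.
    Free pages drain at (burst - flush) MB/s; once the cache is full,
    hosts effectively write at flush speed."""
    net_fill = burst_mb_s - flush_mb_s
    return float("inf") if net_fill <= 0 else cache_mb / net_fill

# Example: 2 GB write cache, 300 MB/s burst, 100 MB/s flush to disk.
print(f"{burst_absorption_s(2048, 300, 100):.1f} s")  # ~10.2 s of absorption
```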

The write cache provides sustained write speed by combining sequential RAID 5 write operations and writing them in RAID 3 mode. This eliminates the need to read old data and parity before writing the new data. To take advantage of this feature, the cache must have enough space for one entire stripe of sequential data (typically 64 KB x [number-of-disks - 1], or, for a five-disk group, 256 KB) before starting to flush. Note that the sequential stream can be interleaved with other streams of sequential or random data.
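
The full-stripe figure quoted above is just the stripe element size times the number of data disks. A one-line helper, assuming the common 64 KB element size:

```python
def full_stripe_kb(disks_in_group: int, element_kb: int = 64, parity_disks: int = 1) -> int:
    """Data needed for one full RAID 5 stripe: element size x data disks."""
    return element_kb * (disks_in_group - parity_disks)

print(full_stripe_kb(5))  # five-disk RAID 5 group -> 256 KB, matching the text
```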

Cache page size

The cache page size can be 2, 4, 8, or 16 KB. As a general guideline, EMC suggests 8 KB. The ideal cache page size depends on the operating system and application, and Analyzer can help you decide which size performs best.

The Storage Processor (SP) processes all I/Os, host requests, management and maintenance tasks, as well as operations related to replication or migration features.

In Navisphere Analyzer, the statistics for an SP are based on the I/O workload from its attached hosts and reflect the overall performance of the CLARiiON storage system. The following performance metrics are monitored for each CLARiiON storage system.

A LUN is an abstract object whose performance depends on various factors. The primary consideration is whether a host I/O can be satisfied by the cache. A cache hit does not require disk access; a cache miss requires one or more disk accesses to complete the data request.
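
That hit/miss distinction translates directly into an average response time as a weighted average of cache and disk service times. A minimal sketch (the timings are illustrative assumptions):

```python
def avg_response_ms(hit_ratio: float, cache_ms: float, disk_ms: float) -> float:
    """Average request time: hits are served from cache, misses from disk."""
    return hit_ratio * cache_ms + (1.0 - hit_ratio) * disk_ms

# Example: 80% cache hits at ~0.5 ms versus ~8 ms for a disk access.
print(f"{avg_response_ms(0.8, 0.5, 8.0):.1f} ms")  # -> 2.0 ms
```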

As the slowest devices in a storage system, disk drives are often responsible for performance-related issues. Therefore, we recommend that you pay close attention to disk drives when analyzing performance problems.

SP performance metrics

Utilization (%)

The percentage of time during which the SP is servicing any request.

Total Throughput (I/O/sec)

The average number of host requests that are passed through the SP per second, including both read and write requests.


Read Throughput (I/O/sec)

The average number of host read requests that are passed through the SP per second.

Write Throughput (I/O/sec)

The average number of host write requests that are passed through the SP per second.

Read Bandwidth (MB/s)

The average amount of host read data in Mbytes that is passed through the SP per second.

Write Bandwidth (MB/s)

The average amount of host write data in Mbytes that is passed through the SP per second.
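
Two handy values fall out of these SP counters: total throughput is the sum of read and write throughput, and bandwidth divided by throughput gives the average I/O size. A small sketch (the parameter names and figures are assumptions for illustration):

```python
def avg_io_size_kb(bandwidth_mb_s: float, throughput_iops: float) -> float:
    """Average request size: bandwidth divided by request rate."""
    return bandwidth_mb_s * 1024.0 / throughput_iops if throughput_iops else 0.0

# Example: 100 MB/s at 12,800 IOPS works out to an 8 KB average I/O size.
print(avg_io_size_kb(100.0, 12800.0))  # 8.0
```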

LUN performance metrics

Response Time (ms)

The average time, in milliseconds, that a request to a LUN is outstanding, including waiting time.

Total Throughput (I/O/sec)

The average number of host requests that are passed through the LUN per second, including both read and write requests.

Read Throughput (I/O/sec)

The average number of host read requests passed through the LUN per second.

Write Throughput (I/O/sec)

The average number of host write requests passed through the LUN per second.

Read Bandwidth (MB/s)

The average amount of host read data in Mbytes that is passed through the LUN per second.

Write Bandwidth (MB/s)

The average amount of host write data in Mbytes that is passed through the LUN per second.

Average Busy Queue Length

The average number of outstanding requests when the LUN was busy. This does not include idle time.

Utilization (%)

The fraction of an observation period during which a LUN has any outstanding requests.
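
These LUN metrics hang together through Little's law (average number in system = arrival rate x average time in system). Because Average Busy Queue Length is averaged over busy time only, scaling it by utilization approximates the overall average outstanding count. This is a queueing approximation I am sketching, not a documented Analyzer formula:

```python
def est_response_ms(abql: float, utilization: float, total_iops: float) -> float:
    """Little's law (N = X * R) rearranged to R = N / X. ABQL covers busy
    time only, so multiply by utilization to approximate the average
    outstanding request count over the whole observation period."""
    avg_outstanding = abql * utilization
    return avg_outstanding / total_iops * 1000.0 if total_iops else 0.0

# Example: ABQL of 4 at 50% utilization and 500 IOPS -> ~4 ms response time.
print(est_response_ms(4.0, 0.5, 500.0))  # 4.0
```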

DISK performance metrics

Utilization (%)

The percentage of time that the disk is servicing requests.

Response Time (ms)

The average time, in milliseconds, that it takes for one request to pass through the disk, including any waiting time.

Total Throughput (I/O/sec)

The average number of requests to the disk on a per second basis. Total throughput includes both read and write requests.

Read Throughput (I/O/sec)

The average number of read requests to the disk per second.

Write Throughput (I/O/sec)

The average number of write requests to the disk per second.

Read Bandwidth (MB/s)

The average amount of data read from the disk in Mbytes per second.

Write Bandwidth (MB/s)

The average amount of data written to the disk in Mbytes per second.

Average Busy Queue Length

The average number of requests waiting at a busy disk to be serviced, including the request that is currently in service.
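
Disk utilization follows from throughput and average service time: a disk handling N requests per second, each occupying the disk for S seconds, is busy N x S of the time. A quick sketch (the service time is an assumed figure):

```python
def disk_utilization_pct(iops: float, service_ms: float) -> float:
    """Utilization = request rate x average service time per request."""
    return iops * (service_ms / 1000.0) * 100.0

# Example: 120 IOPS at ~6 ms per request keeps a disk about 72% busy.
print(f"{disk_utilization_pct(120.0, 6.0):.0f}%")  # 72%
```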

CLARiiON SP, LUN, and disk performance data is retrieved and processed daily. Raw performance data is retained for 180 days, while CLARiiON performance reports are kept indefinitely for performance trend analysis.


What are the CLARiiON SAN fan-in and fan-out configuration rules?

Fan-In Rule: A server can be zoned to a maximum of four storage systems.

Fan-Out Rule:

  • For FC5300 with Access Logix software - 1 to 4 servers (eight initiators) to 1 storage system.
  • For FC4500 with Access Logix - 15 servers to 1 storage system; each server with a maximum of one (single) path to an SP.
  • For FC4700 with Base or Access Logix software 8.42.xx or higher - 32 initiators per SP port for a maximum of 128 initiators per FC4700. Each port on each SP supports 32 initiators. Ports 0 and 1 on each SP in an FC4700 handle server connections. Port 1 on each SP in an FC4700 with MirrorView also handles remote mirror connections. In a remote mirror configuration, each path between SP A port 1 on one storage system and SP A port 1 on another storage system counts as one initiator for each port 1. Likewise, each path between SP B port 1 on one storage system and SP B port 1 on another storage system counts as one initiator for each port 1.
  • For FC4700 with Base or Access Logix software 8.41.xx or lower - 15 servers to 1 storage system; each server with a maximum of one (single) path to an SP.
  • For CX200 - 15 initiators per SP, each with a maximum of one (single) path to an SP; maximum of 15 servers.
  • Fan-Out for CX300 - 64 initiators per SP for a maximum of 128 initiators per storage system.
  • For CX400 - 32 initiators per SP port for a maximum of 128 initiators per CX400. Each port on each SP supports 32 initiators. Ports 0 and 1 on each SP in a CX400 handle server connections. Port 1 on each SP in a CX400 with MirrorView also handles remote mirror connections. In a remote mirror configuration, each path between SP A port 1 on one storage system and SP A port 1 on another storage system counts as one initiator for each port 1. Likewise, each path between SP B port 1 on one storage system and SP B port 1 on another storage system counts as one initiator for each port 1.
  • Fan-Out CX500 - 128 initiators per SP and a maximum of 256 initiators per CX500 available for server connections. Ports 0 and 1 on each SP handle server connections. Port 1 on each SP in a CX500 with MirrorView/A or MirrorView/S enabled also handles remote mirror connections. Each path used in a MirrorView or SAN Copy relationship between two storage systems counts as an initiator for both storage systems.
  • For CX600 - 32 initiators per SP port and a maximum of 256 initiators per CX600 available for server connections. Ports 0, 1, 2, and 3 on each SP in any CX600 handle server connections. Port 3 on each SP in a CX600 with MirrorView also handles remote mirror connections. In a remote mirror configuration, each path between SP-A port 3 on one storage system and SP-A port 3 on another storage system counts as one initiator for each port 3. Likewise, each path between SP-B port 3 on one storage system and SP-B port 3 on another storage system counts as one initiator for each port 3.
  • Fan-Out CX700 - 256 initiators per SP and a maximum of 512 initiators per CX700 available for server connections. Ports 0, 1, 2, and 3 on each SP in any CX700 handle server connections. Port 3 on each SP in a CX700 with MirrorView/A or MirrorView/S enabled also handles remote mirror connections. Each path used in a MirrorView or SAN Copy relationship between two storage systems counts as an initiator for both storage systems.
  • An initiator is any device with access to an SP port. Check with your support provider to confirm that the above rules are still in effect; a rough validation sketch of these limits follows below.
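
To make the rules concrete, here is a hedged sketch that checks a zoning plan against the fan-in limit and the per-model fan-out limits listed above. The limits are transcribed from this list; confirm them against current EMC documentation before relying on them.

```python
# Per-model initiator ceilings transcribed from the fan-out list above.
FAN_IN_MAX_ARRAYS_PER_SERVER = 4

MAX_INITIATORS_PER_ARRAY = {
    "FC4700": 128,  # 32 per SP port (software 8.42.xx or higher)
    "CX300": 128,   # 64 per SP
    "CX400": 128,   # 32 per SP port
    "CX500": 256,   # 128 per SP
    "CX600": 256,   # 32 per SP port, 4 ports per SP
    "CX700": 512,   # 256 per SP
}

def check_fan_in(arrays_zoned_to_server: int) -> bool:
    """Fan-in rule: a server may be zoned to at most four storage systems."""
    return arrays_zoned_to_server <= FAN_IN_MAX_ARRAYS_PER_SERVER

def check_fan_out(model: str, initiators: int) -> bool:
    """Fan-out rule: total initiators (HBA logins plus any MirrorView or
    SAN Copy paths, which also count as initiators) must fit the model."""
    return initiators <= MAX_INITIATORS_PER_ARRAY[model]

print(check_fan_in(3))              # True: 3 arrays zoned to one server is OK
print(check_fan_out("CX400", 130))  # False: exceeds 128 initiators per CX400
```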
