WINDOWS 2000 PROFESSIONAL RESOURCE KIT [Electronic resources] - Text version



Chris Aschauer


Evaluating Cache and Disk Usage by Applications


If you are an application developer, you might want to know if your programs read and write data efficiently to and from the disk, as well as how they utilize locality and manage the file-system cache. This section provides information to help you identify situations in which you can improve the I/O performance of applications.

Random and Sequential Data Access


Comparing random versus sequential operations is one way of assessing application efficiency in terms of disk use. Accessing data sequentially is much faster than accessing it randomly because of the way the disk hardware works. The seek operation, in which the disk head positions itself at the right disk cylinder to access the requested data, takes more time than any other part of the I/O process. Because random reading involves more seek operations than sequential reading, random reads deliver lower throughput. The same is true for random writing. It can be useful to examine your workload to determine whether it accesses data randomly or sequentially. If you find that disk access is predominantly random, pay particular attention to the activities being performed and monitor for the emergence of a bottleneck.
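The seek-count difference between the two access patterns can be illustrated with a simple model that charges one seek whenever the next block requested is not adjacent to the last one accessed. The block numbers and access patterns below are invented for illustration; this is a sketch of the idea, not a measurement of real disk behavior.

```python
import random

def count_seeks(block_addresses):
    """Count head repositionings: charge one seek whenever the next
    block requested is not adjacent to the one just accessed."""
    seeks = 0
    for prev, cur in zip(block_addresses, block_addresses[1:]):
        if cur != prev + 1:
            seeks += 1
    return seeks

sequential = list(range(1000))        # blocks 0..999 read in order
scattered = sequential[:]
random.Random(42).shuffle(scattered)  # the same blocks, in random order

print(count_seeks(sequential))  # 0: every request is adjacent to the last
print(count_seeks(scattered))   # nearly one seek per request
```

Both workloads move the same amount of data, but the random ordering forces a seek on almost every request, which is why its throughput is so much lower.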

For workloads of either random or sequential I/O, use drives with faster rotational speeds. For workloads that are predominantly random I/O, use a drive with faster seek time.

For workloads that have high I/O rates, consider using striped volumes because they add physical disks, increasing the system's ability to handle concurrent disk requests. Note, however, that striped volumes implemented in software can increase processor consumption. Hardware RAID volumes eliminate this load on the processor but instead consume processing cycles on the hardware RAID adapter.
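The way a striped volume spreads consecutive requests across its member disks can be sketched as follows. The 64-KB stripe unit and four-disk layout are illustrative assumptions, and the round-robin mapping below is a simplified model, not the exact layout used by any particular volume manager.

```python
STRIPE_UNIT = 64 * 1024  # assumed 64-KB stripe unit (illustrative)
N_DISKS = 4              # assumed four-disk striped volume (illustrative)

def stripe_target(offset, stripe_unit=STRIPE_UNIT, n_disks=N_DISKS):
    """Map a logical byte offset to (disk index, byte offset on that disk)
    under a simple round-robin striping model."""
    unit = offset // stripe_unit               # which stripe unit overall
    disk = unit % n_disks                      # round-robin across disks
    local = (unit // n_disks) * stripe_unit + offset % stripe_unit
    return disk, local

# Four consecutive 64-KB requests land on four different disks, so the
# volume can service them concurrently instead of queuing them on one spindle:
for i in range(4):
    print(stripe_target(i * STRIPE_UNIT))
```

This is the mechanism behind the concurrency gain: a burst of adjacent requests that would serialize on a single disk is spread across all members of the stripe set.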

NOTE


Even when an application reads records sequentially, if the file is fragmented throughout the disk or disks, the I/O will not be sequential. If the disk-transfer rate on a sequential or mostly sequential read operation deteriorates over time, run Disk Defragmenter on the disk and test again. When fragmentation occurs, data is not organized in contiguous clusters on the disk. Fragmentation slows performance because back-and-forth head movement is slow.

I/O Request Size


The size of requests and the rate at which they are sent are important for evaluating how applications work with the disk. If you are an application developer, you can use counters such as Avg. Disk Bytes/Read to reveal this information about I/O requests.

It is typically faster and more efficient to read a few large records than many small ones. The benefit eventually levels off, however: with very large requests, each individual transfer takes longer to complete, so the transfer rate falls even though total throughput remains high. Unfortunately, it is not always easy to control this factor. Nevertheless, if your system transfers many small units of data, this inefficiency might help to explain, though not resolve, high disk use.

Requests need to be at least 8 kilobytes (KB) and, if possible, 64 KB. Sequential I/O requests of 2 KB consume a substantial amount of processor time, which affects overall system performance. However, if you can be sure that only 2 KB of data is needed, a 2-KB I/O is the most efficient choice, because a larger I/O wastes direct memory access (DMA) controller bandwidth. As the record size increases, throughput increases and the transfer rate falls because fewer reads are needed to move the same amount of data.

Using 64 KB requests results in faster throughput with little processor time. Maximum throughput typically occurs at 64 KB, although some devices might have a higher maximum throughput size. When transferring data blocks greater than 64 KB, the I/O subsystem breaks the transfers into 64-KB blocks. Above 64 KB, the transfer rate drops sharply, and throughput levels off. Processor use and interrupts also appear to level off at 64 KB.
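The relationship between request size and the number of transfers can be seen with a small sketch. An in-memory stream stands in for a file on disk here; the 1-MB payload and the request sizes are chosen only for illustration.

```python
import io

def read_all(stream, request_size):
    """Read a stream to EOF with fixed-size requests; return the total
    bytes read and the number of read calls issued."""
    total = calls = 0
    while True:
        chunk = stream.read(request_size)
        calls += 1              # every request counts, including the EOF read
        if not chunk:
            break
        total += len(chunk)
    return total, calls

data = bytes(1024 * 1024)  # a 1-MB payload standing in for a file

for size in (2 * 1024, 8 * 1024, 64 * 1024):
    total, calls = read_all(io.BytesIO(data), size)
    print(f"{size // 1024:>2}-KB requests: {calls} read calls for {total} bytes")
```

Moving from 2-KB to 64-KB requests cuts the number of requests for the same data by a factor of 32, which is where the savings in per-request processor time and seek overhead come from.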

Investigating Disk Usage by Applications


Applications rarely read or write directly to disk. Instead, application code and data are typically mapped into the file-system cache and copied from there into the working set of the application. When the application creates or changes data, the data is mapped into the cache and is then written back to the disk in batches. The disk is used directly only when an application requests a single write-through to disk or instructs the file system not to use the cache at all for a file, usually because it is doing its own buffering. For this reason, tracking the cache and memory counters provides a way of investigating disk usage by your application. You can find information about monitoring cache and memory counters in "Evaluating Memory and Cache Usage" earlier in this book.

When monitoring disk usage by applications, you might find that applications that submit all I/O requests simultaneously tend to produce exaggerated values for the % Disk Time, % Disk Read Time, % Disk Write Time, and Avg. Disk sec/Transfer counters. Although throughput might be the same for applications that submit I/O requests intermittently, the values of counters that time requests will be much lower. It is important to understand your applications and factor their I/O methods into your analysis.
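The inflation can be modeled by comparing the sum of per-request service times with the wall-clock time during which the disk is actually busy. The interval values below are invented, and this is a simplified model of how timing-based counters behave, not the exact formulas behind % Disk Time or Avg. Disk sec/Transfer.

```python
def summed_time(requests):
    """Total of per-request service times, the quantity that timing-based
    counters accumulate (simplified model)."""
    return sum(end - start for start, end in requests)

def busy_time(requests):
    """Wall-clock time during which at least one request was in flight
    (the union of the service intervals)."""
    busy, current_end = 0, None
    for start, end in sorted(requests):
        if current_end is None or start > current_end:
            busy += end - start          # a new busy period begins
        else:
            busy += max(0, end - current_end)  # extend the current period
        current_end = end if current_end is None else max(current_end, end)
    return busy

# Four 10-ms requests submitted all at once: they overlap almost completely.
burst = [(0, 10), (1, 11), (1, 12), (2, 12)]
# The same four requests submitted one after another: no overlap.
spaced = [(0, 10), (20, 30), (40, 50), (60, 70)]

print(summed_time(burst), busy_time(burst))    # summed time far exceeds busy time
print(summed_time(spaced), busy_time(spaced))  # the two agree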

If you are writing your own tools to test disk performance, you might want to include the FILE_FLAG_NO_BUFFERING flag in the call that opens your test files. This instructs the Virtual Memory Manager to bypass the cache and go directly to disk.
