The Business Case for Storage Networks [Electronic resources]

Bill Williams

Utilization and Yield


A fundamental piece of the storage TCO equation is utilization and its direct correlation to what can be referred to as the storage yield. If one assumes that the average company used at best 50 percent of its storage assets between 1999 and 2002 (which is itself a conservative number), then, based on the worldwide revenues shown in Table 1-2, we can estimate that over $35 billion in storage assets went unutilized during that time.
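To make the arithmetic concrete, the following Python sketch walks through the estimate. The annual figures are placeholders standing in for the worldwide revenues in Table 1-2 (not reproduced in this excerpt), chosen only so the totals land near the $35 billion cited above.

```python
# Back-of-the-envelope estimate of unutilized storage spend.
# Annual figures are placeholders for the Table 1-2 revenues.
annual_storage_revenue = {
    1999: 15e9,  # assumed, in dollars
    2000: 20e9,
    2001: 20e9,
    2002: 15e9,
}

utilization_rate = 0.50  # "at best 50 percent"

total_spend = sum(annual_storage_revenue.values())
unutilized_spend = total_spend * (1 - utilization_rate)

print(f"Total spend:      ${total_spend / 1e9:.1f}B")
print(f"Unutilized spend: ${unutilized_spend / 1e9:.1f}B")
```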

These concepts are discussed in greater detail in Chapter 3, "Building a Value Case Using Financial Metrics." This material is required to build the financial models with which the business case for storage networks can be justified.

A close analysis of storage yield and the COPQ demonstrates how increased utilization helps lower the overall storage TCO.


The Cost of Poor Quality and the Storage Problem



A high COPQ implies higher manufacturing, operations, and labor costs, and consequently, lower revenues. Couching the value of an IT solution in terms of quality management, the COPQ can be said to be the dollar value of the gap between how a product, service, or solution performs and what is expected of it. In terms of financial analysis, this figure equates to a negative ROI.

Just as the buildup of IT capacity and subsequent downturn was the outcome of macroeconomic events, the move to storage networks is part of many corporations' efforts to raise their storage yield over time and lower the COPQ (and the TCO) for their storage infrastructure.


Storage Yield


In manufacturing operations, the term yield refers to the ratio of good output to gross output.[27] In storage operations, as in manufacturing, the yield is never 100 percent because there is always some waste. The goal of a storage vision is to increase not only storage yields, which can be measured in dollars or percent of labor, but also operational yields (or "good output") as much as possible. Ultimately, a storage vision built on a storage utility model helps increase a company's storage yield, the amount of storage capacity allocated and then used efficiently to create and sustain business value.
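A brief Python sketch, using illustrative capacities rather than figures from the text, shows how storage yield can be computed as the ratio of capacity effectively used to capacity purchased, and how it decomposes into the allocation and utilization efficiencies discussed later in this chapter.

```python
# Storage yield as the ratio of "good output" to gross output,
# applied to a single storage frame. Capacities are illustrative.
raw_capacity_gb      = 7008   # purchased (gross) capacity
allocated_gb         = 4200   # presented to hosts
effectively_used_gb  = 2800   # actually holding useful data

allocation_efficiency  = allocated_gb / raw_capacity_gb
utilization_efficiency = effectively_used_gb / allocated_gb
storage_yield          = effectively_used_gb / raw_capacity_gb  # good / gross

print(f"Allocation efficiency:  {allocation_efficiency:.0%}")
print(f"Utilization efficiency: {utilization_efficiency:.0%}")
print(f"Storage yield:          {storage_yield:.0%}")
```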

A tiered storage infrastructure is required to fully increase storage yield and gain true economies of scale. In Table 1-5, each tier has a different capability model and different direct and indirect costs associated with it. The goal is for the COPQ to be as insignificant as possible (shown here as a percentage of $1,000,000 in revenue), and ideally for the accompanying tiers to be appropriately matched to the level of business impact or business revenue of the associated applications. A typical tiered storage infrastructure might look something like this:

Tier One: Mirrored, redundant storage devices with local and remote replication

Tier Two: RAID-protected, non-redundant storage devices with multiple paths

Tier Three: Non-protected, non-redundant, near-line storage devices (for example, SATA drives used as a tape replacement)
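The following Python sketch illustrates the idea of matching tiers to business impact. The cost-per-GB and expected-loss figures are placeholders, not the values from Table 1-5, which is not reproduced in this excerpt.

```python
# Sketch of matching storage tiers to application revenue impact.
# Cost-per-GB and expected-loss figures are illustrative assumptions.
revenue = 1_000_000  # annual revenue supported by the application

tiers = {
    # tier: (cost per GB, assumed annual expected loss as fraction of revenue)
    "Tier One (mirrored, replicated)": (100, 0.001),
    "Tier Two (RAID, multipath)":      ( 50, 0.010),
    "Tier Three (near-line SATA)":     ( 10, 0.050),
}

for tier, (cost_per_gb, loss_fraction) in tiers.items():
    copq = revenue * loss_fraction
    print(f"{tier}: ${cost_per_gb}/GB, COPQ ~ ${copq:,.0f} "
          f"({loss_fraction:.1%} of revenue)")
```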


This approach is discussed in greater detail in Chapter 5.

Note

The difference between allocated and utilized storage is discussed in the section titled "Utilization."


Obstacles Inherent in DAS


As the predominant storage architecture to date in terms of terabytes deployed, DAS has served the storage needs of millions of environments around the globe. Based primarily on the Small Computer Systems Interface (SCSI), DAS is a standard, reliable method of presenting disk to hosts. DAS also presents many challenges to the end user, however, including failover and distance limitations, as well as the increased expense associated with poor utilization.


Failover Limitations

Although some DAS environments are Fibre Channel, large storage environments in open systems datacenters have historically been direct-attached SCSI. SCSI is a mainstream technology that has worked well and has been widely available since the early 1980s. SCSI provided the necessary throughput and was robust enough to get the job done. One disadvantage, however, has always been the inability of the UNIX operating system and most databases to tolerate disruptions in SCSI signals, which limits the capability to fail over from one path to another without impacting the host. In addition, logical unit number (LUN) assignments are typically loaded into the UNIX kernel when the system is booted, so allocating or de-allocating storage from the host must be planned during an outage window. If the storage unit in question is shared between different clients with mismatched service-level agreements and different maintenance windows, then negotiating an outage window quickly becomes a hopelessly Sisyphean task.


Distance Limitations

Another significant factor hampering the flexibility of SCSI DAS is that SCSI is limited in its capability to transfer data over long distances. High Voltage Differential (HVD) SCSI can carry data only up to 25 meters without the aid of SCSI extenders. This limitation presents difficulties for applications requiring long-distance transfer, whether for the purposes of disaster recovery planning, application latency, or simply the physical logistics of datacenter planning.


Expense

Aside from its technical limitations, the primary drawback of DAS is, without a doubt, its expense. Ultimately, the storage frames themselves constitute a single point of failure, and to build redundancy into direct-attached systems, it is often necessary to mirror the entire frame, thereby doubling the capital costs of implementation and increasing the management overhead (and datacenter space) required to support the environment.

The expense of DAS also stems from poor utilization rates. A closer look at the two primary types of storage utilization further illustrates the nature of the cost savings inherent in networked storage solutions.


Utilization


There are two primary measures of storage utilization: allocation efficiency and utilization efficiency.[28]


Allocation Efficiency

Due to the physical constraints of the solution, DAS environments are intrinsically susceptible to low "allocation efficiency" rates that cost firms money in terms of unallocated or wasted storage. Let us look at one example of the financial impact of poor allocation efficiency.

Imagine a disk storage system (containing 96 73-GB disk drives) with six four-port SCSI (or Fibre Channel) adapters capable of supporting up to 24 single-path host connections. This system is capable of providing approximately 7008 GB of raw storage, or 3504 GB mirrored. Under most circumstances, hosts have at least two paths to disk, so this particular environment can support a maximum of twelve hosts. In a typical scenario, shown in Figure 1-5, this frame hosts the storage for a small server farm of six clustered hosts (12 nodes).
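The arithmetic behind this configuration can be verified with a short Python sketch:

```python
# Arithmetic behind the sample DAS frame shown in Figure 1-5.
drives            = 96
drive_size_gb     = 73
adapters          = 6
ports_per_adapter = 4
paths_per_host    = 2    # dual-pathed hosts

raw_gb      = drives * drive_size_gb          # 7008 GB raw
mirrored_gb = raw_gb // 2                     # 3504 GB usable when mirrored
ports       = adapters * ports_per_adapter    # 24 single-path connections
max_hosts   = ports // paths_per_host         # 12 dual-pathed hosts (6 clusters)

print(raw_gb, mirrored_gb, ports, max_hosts)  # 7008 3504 24 12
```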


Figure 1-5. Sample DAS Configuration


Fred Moore of Horison Information Strategies has an even more dismal view of allocation efficiency. According to Moore, surveys of clients across various industries indicate allocation efficiencies of 30 to 40 percent for UNIX and Linux environments and even less for Windows environments, which Moore says frequently see allocation efficiency rates as low as 20 percent.[30]

Using the same environment shown in Figure 1-5 as an example, if the allocation efficiency is only 50 percent, then the loss widens significantly to $350,400, or half the purchase price of the frame. Figure 1-6 shows the costs associated with poor utilization in this environment.
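A short Python sketch shows where the $350,400 figure comes from. The frame's purchase price is not stated directly; it is inferred from the statement that $350,400 is half the purchase price, implying roughly $700,800, or about $100 per raw GB.

```python
# Cost of poor allocation efficiency for the frame in Figure 1-5.
# Purchase price is inferred (~$100 per raw GB), not taken from the text.
raw_gb         = 7008
price_per_gb   = 100.0
purchase_price = raw_gb * price_per_gb        # ~$700,800

for allocation_efficiency in (1.00, 0.75, 0.50):
    stranded_cost = purchase_price * (1 - allocation_efficiency)
    print(f"{allocation_efficiency:.0%} allocated -> "
          f"${stranded_cost:,.0f} of unused capacity")
```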


Figure 1-6. Utilization Rate and Associated Costs (Cash Basis)

Most firms depreciate the cost of storage over the course of its useful life (assuming the storage is purchased and not leased), so the actual COPQ might vary according to depreciation schedules.

Given the rapid pace of technological advancement, depreciation is in most cases carried out over three years. If the straight-line method of depreciation is used over a period of three years, the asset value or purchase price of the frame is divided by three with the assumption that one-third of its usefulness is consumed each year. The impact of the loss, or the COPQ, is then spread across the span of the economic usefulness of the asset. In other words, one third of the COPQ affects the firm's bottom line each year.
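As a simple illustration, assume straight-line depreciation over three years and the $350,400 loss from the 50-percent allocation case above:

```python
# Spreading the COPQ across a three-year straight-line depreciation schedule.
copq         = 350_400   # loss from the 50%-allocation case above
useful_years = 3

annual_copq = copq / useful_years
for year in range(1, useful_years + 1):
    print(f"Year {year}: ${annual_copq:,.0f} of the loss hits the bottom line")
```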

Low utilization does not increase or decrease the estimated life of the hardware, nor does this loss change the asset's value in accounting terms. Low utilization does, however, decrease the storage yield of the asset and increase the COPQ, which, in turn, increases the overall TCO. Regardless of the method of depreciation used, poor utilization detracts from the firm's bottom line.

Whether or not the storage units themselves are depreciated, the net effects of poor allocation efficiency are similar: Low allocation efficiency increases the frequency of additional storage purchases. A real-life parallel is buying a full tank of gas and being able to use only half of the purchased fuel. As long as you need to drive the car, you will need to purchase more fuel, and if only half of each tank can be consumed, you will be forced to stop at the gas station more often.

Similarly, as long as the firm operates, it needs to purchase storage. The idea that a firm can delay purchasing storage indefinitely by constantly increasing the utilization rate is, to put it bluntly, misinformed. The long-term key to financial success in terms of storage management is optimizing storage usage to minimize the frequency and magnitude of storage purchases. A high allocation efficiency rate helps decrease the size and number of storage purchases, as does a high utilization efficiency rate.
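A rough Python sketch, with an assumed annual demand growth rate, illustrates how allocation efficiency drives purchase frequency:

```python
# How allocation efficiency drives purchase frequency: with only half
# of each frame usable, the same demand growth consumes frames twice
# as fast. The growth figure is an assumption for illustration.
annual_demand_growth_gb = 3_500          # new data per year (assumed)
usable_per_frame_gb     = 3_504          # mirrored capacity per frame

for efficiency in (1.00, 0.50):
    effective_capacity = usable_per_frame_gb * efficiency
    frames_per_year = annual_demand_growth_gb / effective_capacity
    print(f"{efficiency:.0%} efficiency -> "
          f"{frames_per_year:.1f} frames purchased per year")
```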

Note

Capacity on-demand programs are alternative procurement strategies aimed at reducing the frequency and magnitude of storage purchases. Although these "pay-as-you-go" methods are quite successful at easing the purchasing and planning process, they do little to address the rate of consumption or the poor utilization found in many environments.


Utilization Efficiency

There might be environments in which the allocation efficiency is at a desirable rate, but the allocated storage is misused, unusable, abandoned, or even hoarded. This is what Toigo refers to as poor utilization efficiency, whereby storage that has been allocated to hosts is never effectively used.

As shown in Table 1-1, DAS storage units made up nearly 70 percent of all storage sales in 2003 (with NAS and SAN storage together comprising approximately 30 percent). As these figures indicate, there is still a long way to go before the majority of storage environments currently deployed are networked storage solutions.

In addition to the recently installed DAS, there is a mountain of DAS that was purchased during the market upswing and still carries a sizable net book value. As shown in Table 1-1, nearly one million DAS units were shipped between 2001 and 2003, indicating significant depreciation expense for customers, particularly when considering the corresponding low utilization rate (and the high COPQ) for DAS.
