Utilization and Yield
A fundamental piece of the storage TCO equation is utilization and its direct correlation to what can be referred to as the storage yield. If one assumes that the average company used at best 50 percent of its storage assets between 1999 and 2002 (itself a conservative number), then, based on the worldwide revenues shown in Table 1-2, we can estimate that over $35 billion in storage assets went unutilized during that time. Chapter 3, "Building a Value Case Using Financial Metrics," returns to this material, which is required to build the financial models with which the business case for storage networks can be justified. A close analysis of storage yield and the COPQ demonstrates how increased utilization helps lower the overall storage TCO.
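The $35 billion estimate follows directly from the utilization assumption. A minimal sketch of the arithmetic, using an illustrative total-revenue figure (the book's actual numbers appear in Table 1-2, not here):

```python
# Back-of-the-envelope estimate of unutilized storage spend, 1999-2002.
# The revenue figure is illustrative only, chosen to show how a 50 percent
# utilization assumption yields the $35 billion estimate cited in the text.
total_storage_revenue = 70e9   # assumed worldwide storage spend, USD
utilization_rate = 0.50        # assumed best-case average utilization

unutilized_spend = total_storage_revenue * (1 - utilization_rate)
print(f"Unutilized storage assets: ${unutilized_spend / 1e9:.0f} billion")
# prints: Unutilized storage assets: $35 billion
```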
The Cost of Poor Quality and the Storage Problem
A high COPQ implies higher manufacturing, operations, and labor costs and, consequently, lower revenues. Couched in terms of quality management, the COPQ is the dollar value of how a product, service, or solution performs relative to expectations; in terms of financial analysis, this figure equates to a negative ROI. Just as the buildup of IT capacity and the subsequent downturn were the outcome of macroeconomic events, the move to storage networks is part of many corporations' efforts to raise their storage yield over time and lower the COPQ (and the TCO) of their storage infrastructure.
Storage Yield
In manufacturing operations, the term yield refers to the ratio of good output to gross output. In storage operations, as in manufacturing, the yield is never 100 percent, because there is always some waste. The goal of a storage vision is to increase not only storage yields, which can be measured in dollars or percent of labor, but also operational yields (or "good output") as much as possible. Ultimately, a storage vision built on a storage utility model helps increase a company's storage yield: the amount of storage capacity allocated and then used efficiently to create and sustain business value.

A tiered storage infrastructure is required to fully increase storage yield and gain true economies of scale. In Table 1-5, each tier has a different capability model and different direct and indirect costs associated with it. The goal is for the COPQ to be as insignificant as possible (shown here as a percentage of $1,000,000 in revenue), and ideally for the tiers to be appropriately matched to the level of business impact or business revenue of the associated applications. A typical tiered storage infrastructure might look something like this:

Tier One: Mirrored, redundant storage devices with local and remote replication
Tier Two: RAID-protected, non-redundant storage devices with multiple paths
Tier Three: Non-protected, non-redundant, near-line storage devices (for example, SATA drives used as a tape replacement)
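The yield ratio and the COPQ-as-a-percentage-of-revenue figure can be sketched concretely. All capacity and cost figures below are hypothetical, chosen only to illustrate the calculation, not taken from Table 1-5:

```python
# Storage yield = good output / gross output, by analogy with manufacturing.
# All figures are hypothetical, for illustration only.
raw_capacity_gb = 10_000    # gross output: total deployed capacity
productive_gb = 6_500       # good output: capacity allocated and used well

storage_yield = productive_gb / raw_capacity_gb   # never reaches 100 percent

# COPQ: the cost of the wasted capacity, expressed against $1,000,000 revenue.
cost_per_gb = 10.0          # hypothetical fully loaded cost per GB, USD
revenue = 1_000_000.0
copq = (raw_capacity_gb - productive_gb) * cost_per_gb
copq_pct_of_revenue = copq / revenue * 100

print(f"Storage yield: {storage_yield:.0%}")
print(f"COPQ: ${copq:,.0f} ({copq_pct_of_revenue:.1f}% of revenue)")
```

Matching higher-cost tiers only to higher-impact applications is what drives the wasted-capacity term, and hence the COPQ percentage, down.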
Tiered storage is discussed in more detail in Chapter 5.

Note: The difference between allocated and utilized storage is discussed in the section titled "Utilization."
Obstacles Inherent in DAS
As the predominant storage architecture to date in terms of terabytes deployed, DAS has served the storage needs of millions of environments around the globe. Built on the Small Computer Systems Interface (SCSI), DAS is a standard, reliable method of presenting disk to hosts. DAS nonetheless presents many challenges to the end user, including failover and distance limitations, as well as the increased expense associated with poor utilization.
Failover Limitations
Although some DAS environments are Fibre Channel, large storage environments in open systems datacenters have historically been direct-attached SCSI. SCSI is a mainstream technology that has worked well and has been widely available since the early 1980s. SCSI provided the necessary throughput and was robust enough to get the job done. One disadvantage, however, has always been the inability of the UNIX operating system and most databases to tolerate disruptions in SCSI signals, which limits the capability to fail over from one path to another without impact to the host. In addition, logical unit number (LUN) assignments are typically loaded into the UNIX kernel at boot time, so allocating or de-allocating storage from the host must be planned around an outage window. If the storage unit in question is shared among clients with mismatched service-level agreements and different maintenance windows, then negotiating an outage window quickly becomes a hopelessly Sisyphean task.
Distance Limitations
Another significant factor hampering the flexibility of SCSI DAS is its limited capability to transfer data over significant distances. High Voltage Differential (HVD) SCSI can carry data only up to 25 meters without the aid of SCSI extenders. This limitation presents difficulties for applications requiring long-distance transfer, whether for disaster recovery planning, application latency, or simply the physical logistics of datacenter planning.
Expense
Aside from its technical limitations, the primary drawback of DAS is, without a doubt, its expense. The storage frames themselves constitute a single point of failure, and to build redundancy into direct-attached systems, it is often necessary to mirror the entire frame, thereby doubling the capital costs of implementation and increasing the management overhead (and datacenter space) required to support the environment.

The expense of DAS also stems from poor utilization rates. A closer look at the two primary types of storage utilization further illustrates the nature of the cost savings inherent in networked storage solutions.
Utilization
Allocation Efficiency
Due to the physical constraints of the solution, DAS environments are intrinsically susceptible to low "allocation efficiency" rates that cost firms money in the form of unallocated or wasted storage. Consider one example of the financial impact of poor allocation efficiency.

Imagine a disk storage system (containing 96 73-GB disk drives) with six four-port SCSI (or Fibre Channel) adapters capable of supporting up to 24 single-path host connections. This system can provide approximately 7008 GB of raw storage, or 3504 GB mirrored. Because hosts usually have at least two paths to disk, this particular environment can support a maximum of 12 hosts. In a typical scenario, shown in Figure 1-5, this frame hosts the storage for a small server farm of six clustered hosts (12 nodes).
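The capacity and connectivity arithmetic behind this example can be checked with a short sketch:

```python
# Capacity and connectivity arithmetic for the DAS frame described above.
drives = 96
drive_size_gb = 73
raw_gb = drives * drive_size_gb        # total raw capacity
mirrored_gb = raw_gb // 2              # usable capacity when fully mirrored

adapters = 6
ports_per_adapter = 4
total_ports = adapters * ports_per_adapter   # single-path host connections
paths_per_host = 2                           # typical dual-path configuration
max_hosts = total_ports // paths_per_host

print(raw_gb, mirrored_gb, max_hosts)  # prints: 7008 3504 12
```

Every port consumed by a second path halves the number of hosts the frame can serve, which is one reason DAS connectivity, not capacity, often becomes the binding constraint.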
Figure 1-5. Sample DAS Configuration

Figure 1-6. Utilization Rate and Associated Costs (Cash Basis)

Utilization Efficiency
There might be environments in which the allocation efficiency is at a desirable rate, but the allocated storage is misused, unusable, abandoned, or even hoarded. This is what Toigo refers to as poor utilization efficiency, whereby storage that has been allocated to hosts is never put to productive use. As shown in Table 1-1, DAS storage units made up nearly 70 percent of all storage sales in 2003 (with NAS and SAN storage together comprising approximately 30 percent). As these figures indicate, there is still a long way to go before the majority of deployed storage environments are networked storage solutions.

In addition to recently installed DAS, a mountain of DAS purchased during the market upswing still carries a sizable net book value. As shown in Table 1-1, nearly one million DAS units were shipped between 2001 and 2003, indicating significant depreciation expense for customers when considering the corresponding low utilization rate (and high COPQ) of DAS.
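The distinction between the two efficiency measures compounds: overall storage yield is the product of allocation efficiency and utilization efficiency. A minimal sketch, using hypothetical figures loosely based on the mirrored frame discussed earlier:

```python
# Allocation efficiency vs. utilization efficiency (hypothetical figures).
# Allocation efficiency: share of usable capacity carved out for hosts.
# Utilization efficiency: share of allocated capacity actually put to use.
usable_gb = 3504.0      # usable (mirrored) capacity in the frame
allocated_gb = 2628.0   # capacity assigned to hosts (hypothetical)
used_gb = 1314.0        # capacity holding live data (hypothetical)

allocation_efficiency = allocated_gb / usable_gb
utilization_efficiency = used_gb / allocated_gb
overall_yield = allocation_efficiency * utilization_efficiency

print(f"Allocation efficiency:  {allocation_efficiency:.0%}")
print(f"Utilization efficiency: {utilization_efficiency:.0%}")
print(f"Overall storage yield:  {overall_yield:.0%}")
```

In this sketch a respectable 75 percent allocation rate still produces only a 37.5 percent overall yield once hoarded and abandoned allocations are accounted for, which is why both measures must improve to lower the COPQ.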