Microsoft Windows Server 2003 Deployment Kit [Electronic resources]: Planning Server Deployments

Microsoft Corporation

Planning for Fault Tolerance


Organizations are increasingly finding that any downtime that results in mission-critical data being unavailable is unacceptable. Windows Server 2003 offers several solutions to increase the availability of data. Solutions that use RAID and clustering technologies are especially suited to providing fault-tolerant storage. The process for analyzing and selecting among these solutions is illustrated in Figure 1.9.


Figure 1.9: Planning for Fault Tolerance



Achieving Fault Tolerance by Using RAID


RAID is commonly implemented for both performance and fault tolerance. With RAID, you can choose to assemble disks to provide fault tolerance, performance, or both, depending on the RAID level that you configure. Table 1.4 summarizes commonly available RAID levels.

Table 1.4: RAID Comparison

RAID Level: 0
Description: Disk striping. Two or more disks appear to the operating system as a single disk. Data is striped across each disk during read/write operations. Potentially increases disk access speeds 2X or better. Not fault tolerant.
Minimum Disks Required: 2
Effective Capacity: S*N (N = number of disks in array; S = size of smallest disk in array)

RAID Level: 1
Description: Disk mirroring. Data is mirrored on two or more disks. Provides fault tolerance, but at a higher cost (space required is double the amount of data). Read performance is increased as well.
Minimum Disks Required: 2
Effective Capacity: S (S = size of smallest disk in array)

RAID Level: 0+1
Description: Combines RAID 0 and RAID 1; offers the performance of RAID 0 and the protection of RAID 1.
Minimum Disks Required: 4
Effective Capacity: S*N/M (N = number of disks in array; S = size of smallest disk in array; M = number of mirror sets)

RAID Level: 5
Description: Disk striping with parity. Provides slower performance than RAID 0, but provides fault tolerance. A single disk can be lost without any data loss. Parity bits are distributed across all disks in the array.
Minimum Disks Required: 3
Effective Capacity: S*(N-1) (N = number of disks in array; S = size of smallest disk in array)

From a design perspective, your choice of a RAID solution should be dictated by the type of data being stored. Although RAID 0 offers the fastest read and write performance, it offers no fault tolerance: if a single disk in a RAID 0 array fails, all data is lost and must be recovered from backup. This might be a good choice for high-performance workstations, but it might not be suited to mission-critical servers.
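The effective-capacity formulas in Table 1.4 can be sketched as a short calculation. This is purely illustrative; the function name and the GB units are assumptions, not part of any Windows tool:

```python
def effective_capacity(level, disk_sizes_gb, mirror_sets=2):
    """Effective capacity (GB) per Table 1.4; S = smallest disk, N = disk count."""
    s = min(disk_sizes_gb)
    n = len(disk_sizes_gb)
    if level == "0":      # striping: full capacity, no fault tolerance
        return s * n
    if level == "1":      # mirroring: capacity of one disk
        return s
    if level == "0+1":    # striped mirrors: capacity divided by mirror sets
        return s * n / mirror_sets
    if level == "5":      # striping with parity: one disk's worth lost to parity
        return s * (n - 1)
    raise ValueError(f"unknown RAID level: {level}")

# Four 100-GB disks:
print(effective_capacity("0", [100] * 4))    # 400
print(effective_capacity("1", [100, 100]))   # 100
print(effective_capacity("0+1", [100] * 4))  # 200.0
print(effective_capacity("5", [100] * 4))    # 300
```

Note that each formula uses the size of the smallest disk, which is why arrays are normally built from identically sized disks.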

RAID 1 allows you to configure two or more disks to mirror each other. This configuration produces slow writes, but relatively quick reads, and provides a means to maintain high data availability on servers, because a single disk can be lost without any loss of data. When more than two disks make up the mirror, the RAID 1 array can lose multiple disks so long as a complete mirrored pair is not lost. When planning a RAID 1 solution, remember that the amount of physical disk space required is twice the space required to store the data.

RAID 0+1 combines the performance benefit of striping with the fault tolerance of mirroring. Compared to RAID 0, writes are slower, but reads are equally fast. Compared to RAID 1, RAID 0+1 offers faster writes and reads but also requires additional storage to create the mirrored stripe sets. This configuration is often ideal for mission-critical database storage, because it offers both fast read access and fault tolerance.

RAID 5 provides fault tolerance: you can lose a single disk in an array with no loss of data. However, RAID 5 operates much more slowly than RAID 0 because a parity bit must be calculated for all write operations. RAID-5 volumes are well suited for reads and also work well in the following situations:



In large query or database mining applications where reads occur much more often than writes. Performance degrades as the percentage of write operations increases. Database applications that read randomly work well with the built-in load balancing of a RAID 5 volume.



Where a high degree of fault tolerance is required without the cost of the additional disk space needed for a RAID 1 volume. A RAID 5 volume is significantly more efficient than a mirrored volume when larger numbers of disks are used. The space required for storing the parity information is equivalent to 1/Number of disks, so a 10-disk array uses 1/10 of its capacity for parity information. The disk space that is used for parity decreases as the number of disks in the array increases.
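The shrinking parity overhead described above can be shown numerically. A small sketch (the function name is illustrative):

```python
def parity_fraction(disk_count):
    """Fraction of a RAID 5 array's raw capacity consumed by parity (1/N)."""
    return 1 / disk_count

# Overhead shrinks as the array grows; a RAID 1 mirror always costs 50%.
for n in (3, 5, 10, 32):
    print(f"{n:2d} disks: {parity_fraction(n):.1%} of capacity used for parity")
```

A 10-disk array gives up only 10 percent of its capacity to parity, versus the constant 50 percent overhead of mirroring.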



Choosing Between Hardware and Software RAID


An additional consideration with RAID implementations is the choice between hardware-based and software-based RAID. With hardware RAID, a hardware RAID controller allows you to configure the RAID level of attached disks. With software RAID, the operating system manages the RAID configuration, along with data reads and writes.

Windows Server 2003 supports the following software RAID types:



RAID 0: Up to 32 disks striped



RAID 1: Two disks mirrored



RAID 5: Up to 32 disks striped with parity




To configure software RAID, a disk must be configured as a dynamic disk. Although software RAID has lower performance than hardware RAID, it is inexpensive and easy to configure because it has no special hardware requirements other than multiple disks. If cost is more important than performance, software RAID is appropriate. If you plan to use software RAID for write-heavy workloads, use RAID-1 instead of RAID-5. Software-based RAID on Windows Server 2003 dynamic disks is a good way to provide fault tolerance with SCSI or EIDE disks on non-mission-critical servers that can accommodate the added CPU load that software RAID imposes. This solution is ideal for small to medium organizations that need to add a level of fault tolerance while avoiding the cost of hardware RAID controllers.
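The software RAID limits listed above can be captured in a small validation helper. This is a sketch only; Windows itself enforces these limits through the Disk Management snap-in:

```python
# Windows Server 2003 software RAID limits (from the list above):
# RAID 0: 2-32 striped disks; RAID 1: exactly 2 mirrored disks;
# RAID 5: 3-32 disks striped with parity.
LIMITS = {"0": (2, 32), "1": (2, 2), "5": (3, 32)}

def supported_by_software_raid(level, disk_count):
    """Return True if Windows Server 2003 software RAID supports this layout."""
    if level not in LIMITS:
        return False          # e.g. RAID 0+1 requires a hardware controller
    lo, hi = LIMITS[level]
    return lo <= disk_count <= hi

print(supported_by_software_raid("5", 4))    # True
print(supported_by_software_raid("1", 3))    # False: mirrors use two disks only
print(supported_by_software_raid("0+1", 4))  # False
```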

For more information about software RAID and disk management, see the Server Management Guide of the Windows Server 2003 Resource Kit (or see the Server Management Guide on the Web at http://www.microsoft.com/reskit).

Using Dynamic Disks with Hardware RAID


You can also use dynamic disks with hardware-based RAID solutions. Using dynamic disks with hardware RAID can be useful in the following situations:



You want to create a large volume by using software RAID, such as RAID-0, across hardware RAID LUNs.



You want to extend a volume, but the underlying hardware cannot dynamically increase the size of LUNs.



You want to extend a volume, but the hardware has reached its maximum LUN size.



Before converting hardware RAID disks to dynamic disks, review the following restrictions:



You cannot use dynamic disks on shared cluster storage. However, you can use the DiskPart command-line tool to extend basic volumes on shared cluster storage. For more information, see "Extend a basic volume" in Help and Support Center for Windows Server 2003.



If you create a software RAID-0 volume across multiple hardware arrays, you cannot later extend the RAID-0 volume to increase its size. If you anticipate needing to extend the volume, create a spanned volume instead.



Preparing to Upgrade Servers That Contain Multidisk Fault-Tolerant Volumes


Servers running Microsoft Windows NT Server version 4.0 can contain volume sets, mirror sets, stripe sets, and stripe sets with parity created by using the fault-tolerant driver Ftdisk.sys. To encourage administrators to begin using dynamic volumes, Windows 2000 offers only limited support for Ftdisk volumes, and Windows Server 2003 completes the transition by not supporting multidisk volumes at all. If you plan to upgrade a server that contains multidisk volumes, review the following issues:



If you are upgrading from Windows NT Server version 4.0 to Windows Server 2003, back up and then delete all multidisk volumes before upgrading. This is necessary because Windows Server 2003 cannot access these volumes. Be sure to verify that your backup was successful before deleting the volumes. After you finish upgrading to Windows Server 2003, create new dynamic volumes, and then restore the data from your backup.




If you are upgrading from Windows NT Server 4.0 to Windows Server 2003, and the paging file is located on a multidisk volume, you must use System in Control Panel to move the paging file to a primary partition or logical drive before beginning Setup. For more information about moving the paging file, see article 123747, "Moving the Windows Default Paging and Spool File." To find this article, see the Microsoft Knowledge Base link on the Web Resources page at http://www.microsoft.com/windows/reskits/webresources.



If you are upgrading from Windows 2000 Server to Windows Server 2003, you must use the Disk Management snap-in to convert all basic disks that contain multidisk volumes to dynamic disks before beginning Setup. If you do not do this, Setup does not continue. For information about converting basic disks to dynamic disks, see "Change a basic disk into a dynamic disk" in Help and Support Center for Windows Server 2003.



For more information about multidisk volumes, see the Server Management Guide of the Windows Server 2003 Resource Kit (or see the Server Management Guide on the Web at http://www.microsoft.com/reskit).


Achieving Fault Tolerance by Using Clustering


Many organizations require that critical data be continuously available. Cluster technology provides a means of configuring storage to help meet that goal. Simply put, a cluster is two or more computer systems that act and are managed as one. Clients access the cluster by using a single host name or IP address; their requests are answered by one of the systems in the cluster.

The purpose of cluster technology is to eliminate single points of failure. When availability of data is your paramount consideration, clustering is ideal. Using a cluster avoids all of these single points of failure:



Network card failure



Processor failure



Motherboard failure



Power failure



Cable failure



Storage adapter failure



With a cluster, you can eliminate nearly any hardware failure associated with using a single computer. If hardware associated with one system fails, another system automatically takes over. Two types of clustering solutions that accomplish this are server clusters and Network Load Balancing clusters. Both types of clustering are available on Windows Server 2003, Enterprise Edition and Windows Server 2003, Datacenter Edition. In addition, Network Load Balancing clusters are available on Windows Server 2003, Web Edition and Windows Server 2003, Standard Edition.
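The takeover behavior described above can be sketched as a toy model. This is purely illustrative: real clusters use heartbeat, quorum, and failover mechanisms far beyond this, and the class and method names are invented for the example:

```python
class ToyCluster:
    """Clients see one name; any surviving node answers their requests."""
    def __init__(self, nodes):
        self.healthy = set(nodes)

    def fail(self, node):
        self.healthy.discard(node)   # a hardware failure takes one node down

    def answer_request(self):
        if not self.healthy:
            raise RuntimeError("no nodes available")
        return sorted(self.healthy)[0]   # a surviving node takes over

cluster = ToyCluster(["node1", "node2"])
print(cluster.answer_request())  # node1
cluster.fail("node1")
print(cluster.answer_request())  # node2
```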


Server Clusters


Server clusters are often implemented to offer high availability solutions to applications that need both read and write access to data, such as database, e-mail, and file servers. Server clusters can be configured with up to eight computers, or nodes, participating in the cluster. To share the same data source, server cluster nodes connect to external disk arrays by using either a SCSI or Fibre Channel connection. Fibre Channel is required for interconnecting clusters of three or more nodes to shared storage. For the 64-bit versions of Windows Server 2003, you must always use Fibre Channel hardware to connect the nodes to shared storage.
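The shared-storage interconnect rules in this paragraph can be written as a small decision helper. This is a sketch of the stated rules only, not a Microsoft tool:

```python
def allowed_interconnects(node_count, is_64_bit):
    """Shared-storage interconnects permitted under the rules stated above."""
    if node_count > 8:
        raise ValueError("server clusters support at most eight nodes")
    if is_64_bit:
        return ["Fibre Channel"]      # 64-bit editions: Fibre Channel only
    if node_count >= 3:
        return ["Fibre Channel"]      # three or more nodes require Fibre Channel
    return ["SCSI", "Fibre Channel"]  # two-node 32-bit clusters may use either

print(allowed_interconnects(2, False))  # ['SCSI', 'Fibre Channel']
print(allowed_interconnects(4, False))  # ['Fibre Channel']
```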

When planning to deploy server clusters in your storage solution, you must take into account the following considerations:



The boot and system disks of each cluster node must not be located on the same storage bus as the shared storage devices, unless you use a Storport driver for your HBAs.



Shared cluster disks cannot be configured as dynamic disks.



Shared cluster disks must be formatted as basic disks with the NTFS file system.



For the 64-bit versions of the Windows Server 2003 family, the shared cluster disks must be partitioned as master boot record (MBR) and not as GUID partition table (GPT) disks.



You cannot use Remote Storage with shared cluster storage.



You should not enable write caching on shared cluster disks unless they are logical units on an external RAID subsystem that has proper power protection (such as multiple power supplies, multiple feeds from the power grid, or adequate battery backup).



Because cluster disks must be basic disks, you cannot use software RAID. For disk fault tolerance, you must use a hardware-based RAID solution.



For more information about server clusters, see "Designing and Deploying Server Clusters" in this book.

Network Load Balancing Clusters


Network Load Balancing clusters maintain their own local copy of data and are ideal for load balancing access to static data, such as Web pages. Up to 32 computers can participate in a Network Load Balancing cluster. Because they manage their own local data, Network Load Balancing clusters are much easier to plan and implement. By using the Network Load Balancing Manager, you can quickly configure all Network Load Balancing clusters in your enterprise from a single server.

Using Network Load Balancing clusters is the best choice for several data availability needs. For any server that has difficulty meeting the load demands of its clients, Network Load Balancing is an ideal solution, and is commonly used to provide fault tolerance and load balancing for:



Web Servers



FTP Servers



Streaming Media Servers




VPN Servers



Terminal Servers



For each of these, Network Load Balancing is ideal, not only because it is easy to implement, but also because of how easily a Network Load Balancing cluster can scale as your company grows. Because of their simple scalability, your initial estimates of the number of servers you will require need not be perfect with Network Load Balancing clusters. As the load on a Network Load Balancing cluster grows, you can balance the increased load by simply adding additional hosts to the cluster.
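The scale-out arithmetic behind this is simple enough to sketch. The numbers are invented, and even distribution is an assumption (NLB distributes load approximately, not perfectly):

```python
def per_host_load(total_requests_per_sec, hosts):
    """Approximate per-host load, assuming even NLB distribution."""
    return total_requests_per_sec / hosts

# As hosts are added to the cluster, each one carries a smaller share.
for hosts in (3, 4, 6):
    print(f"{hosts} hosts: ~{per_host_load(9000, hosts):.0f} requests/sec each")
```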

Because each Network Load Balancing cluster host maintains its own local copy of storage, storage planning with Network Load Balancing clusters is not as complex as with server clusters. Many of the disk restrictions of server clusters do not apply to Network Load Balancing clusters. The general storage considerations for Network Load Balancing cluster planning are:



Network Load Balancing cluster hosts can use any local storage space, including space on boot or system volumes.



Local storage can consist of basic and dynamic disks.



Hardware or software RAID can be used to add additional fault tolerance. If the Network Load Balancing cluster services a high level of traffic and you need disk fault tolerance, you must use hardware RAID.



For more information about Network Load Balancing clusters, see "Designing Network Load Balancing" and "Deploying Network Load Balancing" in this book.
