Determining the Core Specifications in the NLB Design
After you identify applications that can benefit from Network Load Balancing, you are ready to design the core specifications for your Network Load Balancing design. These core specifications form the foundation on which you create your cluster. They include the design process steps that are required for all Network Load Balancing solutions. Figure 8.5 illustrates the current step in the process for creating your Network Load Balancing design. The steps that occur later in the design process depend on the design decisions that you make about these essential aspects of your design.

Figure 8.5: Determining the Core Specifications in the NLB Design
Combining Applications on the Same Cluster
When you create your Network Load Balancing design, one of the first steps is to determine if you can combine the applications and services in your solution on the same cluster. One of the primary concerns when combining applications on the same cluster is determining if the applications are compatible with each other. Table 8.1 lists the categories of common applications and services that run on Network Load Balancing, and it describes how you can combine them on the same cluster.
Application or Service | Combined on the Same Cluster |
---|---|
IIS 6.0 Web applications | Can be combined on the same cluster; however, might require customized port rules. [1] |
Terminal Services | Can run any combination of applications as long as the applications are compatible with Terminal Services. Avoid combining Terminal Services (when Terminal Services is hosting applications) with other application platforms or services, such as IIS 6.0 or VPN remote access. [2] |
VPN remote access | Can be combined with ISA Server to combine remote access server and firewall features in the same server. Otherwise, avoid combining with other application platforms and services, such as IIS 6.0. |
ISA Server | Can be combined with Routing and Remote Access to combine remote access server and firewall features. Can be combined with IIS 6.0 to provide default Web site redirection to a Web site that is local to the server running ISA Server and IIS 6.0. |
Custom applications | Can be combined on the same cluster; however, might require customized port rules.[1] |
[1] For more information, see "Identifying Applications or Services That Require Custom Port Rules" later in this chapter. [2] If you are not hosting applications with Terminal Services, you can combine Terminal Services with IIS 6.0 Web applications, VPN remote access servers, ISA Server, and your custom applications to provide remote administration of those servers. |
As mentioned in Table 8.1, avoid running some applications and services, such as Terminal Services and VPN remote access, on the same cluster. The reasons for not running these applications and services on the same cluster include system resource constraints, security constraints, and ease of management.
In some instances, the cluster hosts might not have sufficient system resources to run the combined applications. For example, if you use Terminal Services to host applications, Terminal Services consumes a significant amount of processor and memory resources. Hosting applications with Terminal Services on the same cluster as IIS 6.0 might cause significant delays in running Web applications. For more information about scaling applications and services on Network Load Balancing clusters, see "Scaling NLB Solutions" later in this chapter.
Also, security considerations might require running the applications and services on separate clusters. For example, you might want to avoid running an FTP site that allows anonymous access on a cluster that also runs a secured e-commerce Web application.
Finally, combining applications on the same cluster can make the applications difficult to administer. The more applications you combine, the more complex the cluster becomes to manage and operate. For more information about creating clusters that are easy to manage and operate, see "Ensuring Ease of Cluster Management and Operations" later in this chapter.
Example: Combining Applications on the Same Cluster
A fictitious organization, Contoso, is in the process of restructuring its existing network infrastructure and application platforms. Contoso wants to reduce the number of Network Load Balancing clusters required to support its new network infrastructure and application platforms. The company plans to run the following applications and services on Network Load Balancing clusters:
A VPN remote access server farm, based on Routing and Remote Access
Three e-commerce Web applications, based on IIS 6.0. One of the applications is based on static Hypertext Markup Language (HTML) content with Common Gateway Interface (CGI) scripts, another is an ASP application, and the third is an ASP.NET application.
A customer support FTP site, based on IIS 6.0. The customer support FTP site supports the downloading of information and documents to customers and the uploading of customer documents, files, and other information to the FTP site. The customer support FTP site allows anonymous access for users who want to download and upload files.
For each of these applications and services, Contoso must determine if the application can be combined with other applications and services or if the application requires its own cluster. Table 8.2 lists the clusters required to support the applications and services and the reasons for including those clusters in the design.
Cluster Required | Applications Running on the Cluster | Reason for Inclusion |
---|---|---|
NLBClusterA | VPN remote access server farm | VPN remote access servers should not be combined with other application platforms or services. |
NLBClusterB | E-commerce Web applications | The three e-commerce Web applications are compatible with one another. |
NLBClusterC | Customer support FTP site | The customer support FTP site supports anonymous user access and cannot be combined with the e-commerce Web applications. |
Specifying Cluster and Cluster Host Parameters
For each Network Load Balancing cluster, there are settings that are common to all cluster hosts and other settings that are unique to each cluster host. The cluster parameters define the cluster and establish cluster-wide configurations, such as the virtual IP address assigned to the cluster. The cluster host parameters define the cluster host role and identity within the cluster, such as the cluster host's priority within the cluster or the dedicated IP address that is unique to the cluster host.
Specify the cluster and cluster host parameters by completing the following steps:
Specify the settings that are common to all cluster hosts within the same cluster.
Specify the settings that are unique to each cluster host.
Note | For a Word document to assist you in documenting cluster and cluster host parameters, see "NLB Cluster Host Worksheet" (Sdcnlb_1.doc) on the Windows Server 2003 Deployment Kit companion CD (or see "NLB Cluster Host Worksheet" on the Web at http://www.microsoft.com/reskit). |
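To make the split between cluster-wide and per-host settings concrete, the parameters described in the following sections can be modeled as simple data structures. This is an illustrative sketch only; the field names are hypothetical and mirror the worksheet, not any NLB tool or API.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ClusterParameters:
    # Settings shared by every host in the cluster (see "Specifying the Cluster Parameters").
    cluster_ip: str            # virtual IP address assigned to the cluster
    subnet_mask: str
    full_internet_name: str    # cluster FQDN; treated as a comment by NLB
    operation_mode: str        # "unicast" or "multicast"
    remote_control_enabled: bool = False  # avoid enabling for security reasons

@dataclass
class ClusterHostParameters:
    # Settings unique to each host (see "Specifying the Cluster Host Parameters").
    host_name: str
    host_priority: int         # unique value from 1 through 32
    dedicated_ip: str          # per-host IP address; not load balanced
    dedicated_subnet_mask: str
    initial_host_state: str    # "Started", "Stopped", or "Suspended"

@dataclass
class ClusterDesign:
    cluster: ClusterParameters
    hosts: List[ClusterHostParameters] = field(default_factory=list)
```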
Specifying the Cluster Parameters
Cluster parameters are the configuration settings that define the cluster and the configuration settings common to all cluster hosts within the cluster. The cluster parameters must be unique within your organization's network.
Specify the cluster parameters by completing the following steps:
Specify the cluster IP address.
Specify the cluster fully qualified domain name (FQDN).
Specify the cluster operation mode.
Specify the remote control settings.
Tip | When you configure cluster parameters by using Network Load Balancing Manager, enter the cluster parameters once during the creation of the cluster. As cluster hosts are added to the cluster, Network Load Balancing Manager automatically configures the cluster parameters for the new cluster hosts. When you configure them by using other methods, configure the cluster parameters identically for all cluster hosts within the same cluster. For more information about configuring the cluster parameters, see "Network Load Balancing parameters" in Help and Support Center for Windows Server 2003. |
Specifying the Cluster IP Address
The cluster IP address is the virtual IP address that is assigned to the cluster. Client requests are sent to the cluster IP address. The cluster IP address has a corresponding subnet mask that is part of the cluster IP address specifications. The criteria for specifying the cluster IP address and subnet mask are the same as for all other IP addresses. When you configure the cluster manually, you must configure the cluster IP address and subnet mask identically on all cluster hosts within the cluster.
Tip | The cluster IP address must appear in the list of IP addresses in the TCP/IP properties. Network Load Balancing Manager automatically configures the TCP/IP properties so that the cluster IP address is in the list. When you configure the TCP/IP properties by other methods, you must ensure that the cluster IP address is in the list of IP addresses in the TCP/IP properties. |
Specifying the Cluster FQDN
For each cluster, you must designate the FQDN to be assigned to the cluster. The FQDN is registered in DNS later in the deployment process, when the cluster is deployed and ready to service client requests. The cluster FQDN refers to the cluster as a whole.
Tip | You can enter the cluster FQDN in the full Internet name setting. The full Internet name is not automatically registered in DNS or used by other Windows components. As a result, treat the full Internet name setting as a comment that allows administrators to easily identify the cluster's FQDN. Leaving the full Internet name setting blank does not affect the operation of any Windows component, including Network Load Balancing. |
In addition to the cluster FQDN, designate an FQDN for each application and service running on the cluster. Ultimately, these FQDNs become DNS entries as you deploy the corresponding applications and services.
Specifying the Cluster Operation Mode
The cluster operation mode determines the method, unicast or multicast, that is used to propagate incoming client requests to all the cluster hosts. For more information about determining which method to select for the cluster, see "Selecting the Unicast or Multicast Method of Distributing Incoming Requests" later in this chapter.
Specifying the Remote Control Settings
The remote control settings provide the ability to perform certain remote administration tasks, such as starting and stopping a cluster host, from the command line utility Nlb.exe. Because of security-related concerns, avoid enabling the remote control settings. Nlb.exe can only perform limited administrative tasks. Network Load Balancing Manager can perform all administrative tasks, and is the preferred method for administering clusters. Remotely administering a cluster by using Network Load Balancing Manager is not affected by the remote control settings. For more information about specifying the remote control settings, see "Securing NLB Solutions" later in this chapter.
Specifying the Cluster Host Parameters
The cluster host parameters are the configuration settings that define each cluster host, including the configuration settings that are unique to each cluster host within the cluster. The cluster host parameters must be unique within a cluster and within your organization's network.
Specify the cluster host parameters by completing the following steps:
Specify the cluster host priority.
Specify the dedicated IP address configuration.
Specify the initial host state.
For more information about configuring the cluster host parameters, see "Network Load Balancing parameters" in Help and Support Center for Windows Server 2003.
Tip | Network Load Balancing Manager prevents common cluster host configuration errors, such as duplicate cluster host priorities or duplicate dedicated IP addresses. When you use other methods to configure the cluster host parameters, you must ensure that the cluster host parameters are configured appropriately for all cluster hosts within the cluster. |
Specifying the Cluster Host Priority
For each cluster host, you must specify a cluster host priority that is unique within the cluster. During cluster convergence, the cluster host with the lowest numeric value for the cluster host priority triggers the end of convergence. For example, if three cluster hosts have priorities of 3, 16, and 22, the host with priority 3 has the highest priority and triggers the end of convergence.
Specify the cluster host priority for each cluster host by using any unique value from 1 through 32. Any sequence can be used as long as the cluster host priorities are unique.
Note | If you specify the same cluster host priority for two cluster hosts, the last cluster host that starts fails to join the cluster. An error message describing the problem is written to the Windows system event log. The existing cluster hosts continue to operate as before. |
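When you configure cluster host parameters by a method other than Network Load Balancing Manager, a simple planning check can catch duplicate priorities or dedicated IP addresses before they cause a host to fail to join. The following is an illustrative sketch only, with a hypothetical host list; it is not part of any NLB tool.

```python
from collections import Counter

# Hypothetical planned cluster host parameters: (name, priority, dedicated IP).
planned_hosts = [
    ("NLBClusterA-01", 1, "10.0.0.11"),
    ("NLBClusterA-02", 2, "10.0.0.12"),
    ("NLBClusterA-03", 2, "10.0.0.13"),   # duplicate priority: this host would fail to join
]

def find_duplicates(values):
    """Return the values that appear more than once."""
    return [value for value, count in Counter(values).items() if count > 1]

duplicate_priorities = find_duplicates(priority for _, priority, _ in planned_hosts)
duplicate_dedicated_ips = find_duplicates(ip for _, _, ip in planned_hosts)

if duplicate_priorities:
    print("Duplicate cluster host priorities:", duplicate_priorities)
if duplicate_dedicated_ips:
    print("Duplicate dedicated IP addresses:", duplicate_dedicated_ips)
```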
Specifying the Dedicated IP Address Configuration
The dedicated IP address is an IP address that is assigned to each cluster host for network traffic that is not associated with the cluster, such as Telnet access to a specific host within a cluster. This IP address is used to individually address each host in the cluster; therefore, it must be unique for each host. Enter this parameter in standard Internet dotted notation (for example, w.x.y.z).
Traffic that is sent to the dedicated IP address is not load balanced by Network Load Balancing. Network Load Balancing ensures that all traffic to the dedicated IP address is unaffected by the current Network Load Balancing configuration, including:
When a host is running as part of the cluster.
When Network Load Balancing is disabled as a result of parameter errors in the registry.
Tip | The dedicated IP address must be the first IP address in the list of IP addresses in the TCP/IP properties. Network Load Balancing Manager automatically configures the TCP/IP properties so that the dedicated IP address is first. When you use other methods to configure the TCP/IP properties, you must ensure that the dedicated IP address is the first IP address listed in the TCP/IP properties. |
Specifying the Initial Host State
The initial host state specifies whether Network Load Balancing starts, and whether the cluster host joins the cluster, when the operating system starts. You need to determine the correct initial host state for the applications and services running on the cluster.
Network Load Balancing starts very early in the system start sequence. As a result, a cluster host can join the cluster before the applications and services running on the cluster host are ready to handle traffic. In this situation, clients might be directed to the cluster host and experience outages.
In some instances, management software, such as MOM or Application Center 2000, is responsible for starting Network Load Balancing. The management software monitors the applications and services running on the cluster host and starts Network Load Balancing when the applications are fully operational.
In other instances, you might decide to start Network Load Balancing manually to ensure that applications and services are running before Network Load Balancing starts; a hedged sketch of this kind of health-gated start follows Table 8.3.
Table 8.3 lists the possible settings for the initial host state and the reasons for selecting each one.
Initial Host State | Reasons for Selecting the Initial Host State |
---|---|
Started | The applications and services running on the cluster start before Network Load Balancing. The length of time between the start of Network Load Balancing and the start of applications and services running on the cluster is negligible. |
Stopped | Management software, such as MOM or Application Center 2000, is responsible for starting Network Load Balancing automatically. The applications and services running on the cluster start after Network Load Balancing, and you want to start the cluster host manually. |
Suspended | You have performed maintenance on the cluster host, and you want to prevent the cluster from responding to clients after a restart of the cluster host. |
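If you set the initial host state to Stopped and start the host manually or through management software, the start can be gated on application health. The following is an illustrative sketch under stated assumptions: it probes a local TCP port (port 80 here, as a hypothetical health check) and then invokes the Nlb.exe start command mentioned in this chapter; verify the exact Nlb.exe syntax in Help and Support Center for Windows Server 2003 before relying on it.

```python
import socket
import subprocess
import time

def application_ready(port=80, host="127.0.0.1", timeout=2.0):
    """Return True when a local TCP service accepts connections (hypothetical health check)."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Wait until the local application answers, then start Network Load Balancing on this host.
while not application_ready():
    time.sleep(5)

# Assumption: "nlb.exe start" starts NLB on the local host; confirm the syntax in
# Help and Support Center for Windows Server 2003.
subprocess.run(["nlb.exe", "start"], check=True)
```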
Example: Specifying Cluster and Cluster Host Parameters
A fictitious organization, Contoso, is designing a VPN remote access solution, based on Routing and Remote Access and Network Load Balancing. The VPN remote access solution provides remote access to Contoso's private network by establishing Point-to-Point Tunneling Protocol (PPTP) and Layer Two Tunneling Protocol (L2TP) VPN tunnels through the Internet. The VPN remote access server farm contains five servers that have identical system resources.
Table 8.4 lists the cluster parameter design decisions for the VPN remote access server farm and the reasons for making those decisions.
Decision | Reason for the Decision |
---|---|
The cluster is assigned an IP address and subnet mask that are accessible from the Internet. | Remote access users need to access the VPN server farm running on the cluster from the Internet. |
The cluster is assigned the FQDN of vpn.contoso.com. | The FQDN assigned to the cluster is the name used by remote access users when accessing the VPN server farm running on the cluster. |
The cluster operation mode is set to unicast. | The network infrastructure supports the unicast cluster operation mode. For more information about the decisions regarding setting this mode, see "Selecting the Unicast or Multicast Method of Distributing Incoming Requests" later in this chapter. |
The remote control settings are not enabled. | The remote control settings are not required, because Network Load Balancing Manager will be used to administer the cluster. |
Table 8.5 lists the cluster host parameter design decisions for the VPN remote access server farm.
Cluster Host Name | Host Priority | Dedicated IP Address | Initial Host State |
---|---|---|---|
NLBClusterA-01 | 1 | Not specified | Started |
NLBClusterA-02 | 2 | Not specified | Started |
NLBClusterA-03 | 3 | Not specified | Started |
NLBClusterA-04 | 4 | Not specified | Started |
NLBClusterA-05 | 5 | Not specified | Started |
A unique cluster host priority is assigned to each cluster host. The cluster host initial host state is set to Started to ensure that Network Load Balancing starts automatically and that the cluster host automatically joins the cluster.
Controlling the Distribution of Client Traffic Within the Cluster
One of the intended purposes of Network Load Balancing is to distribute incoming client traffic within the cluster. You can control the distribution of client traffic within the cluster by using Network Load Balancing port rules. Port rules are criteria-based policies that allow you to direct client requests to specific cluster hosts, based on TCP and UDP port numbers.
A default port rule is created during the installation of Network Load Balancing. In many instances, the default port rule is sufficient for the applications and services that use Network Load Balancing. When the default port rule is insufficient, you can create custom port rules. For more information about the default port rule, see "Identifying the Behavior of the Default Port Rule" later in this chapter.
The default port rule is sufficient for the following applications and services:
VPN remote access farms with Routing and Remote Access
Load-balanced application hosting with Terminal Services
Tip | The ISA Server setup process defines the port rules that are necessary for ISA Server. No custom port rules are necessary for ISA Server. |
If the default port rule is sufficient for your solution, and creating custom port rules is unnecessary, see "Specifying Cluster Network Connectivity" later in this chapter. For IIS 6.0 Web farms or for custom applications, custom port rules might be required.
In some instances, the applications and services might require that the same client traffic be handled differently for the same or different applications and services. The virtual IP address assigned to the cluster can handle client traffic in only one way. However, you can specify a virtual cluster for each of the applications, allowing each application to have its own load-balancing behavior. Virtual clusters are a logical construct within the cluster, and they require no additional hardware.
For example, two Web applications might require different load-balancing behavior for HTTP (TCP port 80). You can create a virtual cluster for each Web application that allows different load-balancing behavior for the HTTP client traffic.
You can create a virtual cluster by specifying a virtual IP address in a Network Load Balancing port rule. The virtual IP address that is assigned in the port rule is associated with the application that requires the different load-balancing behavior.
Figure 8.6 illustrates the relationship between a Network Load Balancing cluster and the virtual clusters specified for the cluster. Each of the applications (Web applications A, B, and C) requires different load-balancing behavior. A virtual IP address is assigned to each virtual cluster and associated with each application. A DNS entry associates the virtual IP address with a URL for the corresponding application.

Figure 8.6: Relationship Between an NLB Cluster and Virtual Clusters
For more information about including Network Load Balancing port rules in your design, see "Identifying Applications or Services That Require Custom Port Rules" later in this chapter.
Control the distribution of client traffic within a cluster by completing the following steps:
Identify the behavior of the default port rule.
Identify applications or services that require custom port rules.
Specify the client traffic to be affected by the custom port rule.
Specify the affinity and load-balancing behavior of the custom port rule.
Important | The port rules applied to each cluster host must be identical, with the exception of the load weight (in the Multiple Hosts filter mode) and the handling priority (in the Single Host filter mode). If there is a discrepancy between the port rules on existing cluster hosts, the cluster will not converge. |
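As an illustration of this rule, the following sketch compares the port rules planned for each cluster host and flags differences in anything other than load weight or handling priority. It is a planning aid only, using hypothetical rule data; it does not read the actual NLB configuration.

```python
# Each port rule: (cluster IP, start port, end port, protocol, filter mode,
#                  load weight or None, handling priority or None).
# Load weight and handling priority are the only fields allowed to differ per host.
def comparable_rules(rules):
    """Strip the per-host fields so the remaining rule definitions can be compared."""
    return sorted(rule[:5] for rule in rules)

host_port_rules = {
    "NLBClusterB-01": [("All", 0, 65535, "Both", "Multiple Hosts", 50, None)],
    "NLBClusterB-02": [("All", 0, 65535, "Both", "Multiple Hosts", 25, None)],
    "NLBClusterB-03": [("All", 0, 65535, "TCP", "Multiple Hosts", 25, None)],  # protocol mismatch
}

reference_host, reference_rules = next(iter(host_port_rules.items()))
for host, rules in host_port_rules.items():
    if comparable_rules(rules) != comparable_rules(reference_rules):
        print(f"{host} does not match {reference_host}: the cluster will not converge")
```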
Identifying the Behavior of the Default Port Rule
When Network Load Balancing is installed on a cluster host, a default port rule is created. Table 8.6 lists the configuration of the default port rule that is created during Network Load Balancing installation.
Default Port Rule Setting | Set to This Value |
---|---|
Cluster IP address | All |
Port range: From | 0 |
Port range: To | 65535 |
Protocols | Both |
Filtering Mode | Multiple Hosts with Single affinity and Equal load weight |
Specify that the default port rule be deleted unless the default port rule is:
Appropriate for the applications and services installed on the cluster.
Modified for the applications and services installed on the cluster.
Identifying Applications or Services That Require Custom Port Rules
As previously mentioned, the default port rule is sufficient for some applications and services. However, many applications running on IIS 6.0, or custom applications that are developed by your organization, might require customized port rules to influence how load is directed to hosts in the cluster. In addition, the applications and services might require a virtual cluster when the same client traffic must be handled differently for the same or different applications and services. With virtual clusters, you can use different port rules for different Web sites or applications hosted on the cluster, provided each Web site or application has a different virtual IP address.
Identifying Applications and Services That Need Persistent Sessions
The applications and services that run on Network Load Balancing include stateful applications (those that maintain session state) and stateless applications. Maintaining session state means that the application or service establishes information on the initial connection to a cluster host and then retains the information for subsequent requests. During a user session, the same server must handle all the requests from the user in order to access that information. Stateless applications and services maintain no user or communication information for subsequent connections.
With a single server, maintaining session state presents no difficulty, because the user always connects to the same server. However, when client requests are load balanced within a cluster without some type of persistence, the client might not be directed to the same cluster host for a series of client requests.
In Network Load Balancing, you maintain session state by using port rule affinity between the client and a specific cluster host. Port rule affinity directs all client requests from the same IP address to the same cluster host. You can use port rules to specify the port rule affinity between clients and cluster hosts. For more information about specifying port rule affinity between clients and cluster hosts, see "Specifying the Affinity and Load-Balancing Behavior of the Custom Port Rule" later in this chapter.
Port rule affinity might be required by the following:
An application or service running on the cluster
Session-oriented protocols that are used by the application or service running on the cluster if the session lifetime extends across multiple TCP connections
Identifying applications that require affinity
Applications and services require persistent sessions when the application or service retains information that is established in a client request and used in subsequent client requests. In some instances, the application or service is aware that load balancing is occurring, and it uses an application-specific method, such as client-side cookies, for maintaining session state. In other instances, the application or service is unaware that load balancing is occurring, and it requires Network Load Balancing to maintain affinity between the client and a specific cluster host.
An example of an application that maintains session state is a Web application that requires a user to log on to buy products through a shopping cart application. After the application authenticates the user, the user's information, including shipping information, billing information, and items in the shopping cart, is retained for subsequent requests. The session state is maintained until the user completes or cancels the purchase.
Table 8.7 lists common Web application types and their requirements for affinity provided by cluster port rules.
Web Application | Description | Requires Port Rule Affinity |
---|---|---|
Static, HTML-based applications | Application maintains no user information. | |
Static, HTML-based applications with CGI | CGI portion of the application might retain user information. | • |
Web application that uses client-side cookies | Application sends a cookie to the client when the application session is initiated. On subsequent requests, the client sends the cookie along as part of the request. The instance of the application running on individual cluster hosts is capable of retrieving application session state by using the cookie in the request. | |
ASP.NET applications with session state persistence | ASP.NET applications support a method for maintaining session state on a centralized session state server or on a server running SQL Server 2000. Because the session state is managed centrally, any cluster host can recover session state information, and affinity is not required. For more information about ASP.NET application session state, see "Deploying ASP.NET Applications in IIS 6.0" in Deploying Internet Information Services (IIS) 6.0 of this kit (or see "Deploying ASP .NET Applications in IIS 6.0" on the Web at http://www.microsoft.com/reskit). | |
ASP applications and ASP.NET applications without session state | These applications create session information when the user first starts the application. The session information is managed by the instance of the ASP or ASP.NET dynamic-link libraries (DLLs) running on the cluster host. On subsequent requests, the session state is maintained for the application on that specific cluster host. No session state information is shared among cluster hosts. | • |
For applications that are written by your organization, consult with the application developers to determine if, or how, any session information is retained. When your applications run on Terminal Services, port rule affinity is required.
Identifying session-oriented protocols that require affinity
Even when the application or service is stateless, the protocol required by the application or service might be session-oriented. For example, an application might be based on static HTML, but it might use SSL to provide security.
Some session-oriented protocols, such as SSL, can operate correctly without affinity, but they incur a significant decrease in performance. When an SSL session is established with a cluster host, an SSL session ID is exchanged between the cluster host and the client. If the client makes a subsequent request to the same cluster host, the same session ID can be used. If the client makes a subsequent request to a different cluster host, a new session ID must be obtained. The overhead for obtaining a new SSL session ID is five times more than for reusing a session ID. By using port rule affinity, SSL session IDs are reused whenever possible.
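To see why affinity matters for SSL, the following back-of-the-envelope sketch estimates the average per-request session setup cost with and without Single affinity, using the chapter's figure that a full session ID negotiation costs roughly five times a session ID reuse. The host counts, cost units, and uniform-distribution assumption are hypothetical.

```python
REUSE_COST = 1.0          # relative cost of reusing an existing SSL session ID
NEGOTIATE_COST = 5.0      # relative cost of negotiating a new SSL session ID (per this chapter)

def average_session_cost(cluster_hosts, single_affinity):
    """Estimate the relative SSL session setup cost per follow-up request."""
    if single_affinity:
        # The same client always reaches the same host, so the session ID is reused.
        reuse_probability = 1.0
    else:
        # Without affinity, assume requests land on a random host; only requests that
        # land on the original host can reuse the session ID.
        reuse_probability = 1.0 / cluster_hosts
    return reuse_probability * REUSE_COST + (1 - reuse_probability) * NEGOTIATE_COST

for hosts in (2, 4, 8):
    print(hosts, "hosts:",
          "with affinity =", average_session_cost(hosts, True),
          "| without affinity =", round(average_session_cost(hosts, False), 2))
```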
Compensating for Differences in System Resources
During the IT life cycle of your cluster, new cluster hosts might be added to the cluster. These new cluster hosts are typically new computers with improved system resources, such as processor speed, number of processors, or memory. Because of the improved system resources, the new cluster hosts are capable of servicing more clients.
The default port rule, created during the installation of Network Load Balancing, evenly distributes network requests among cluster hosts, regardless of the available system resources on individual cluster hosts. Cluster hosts with inadequate system resources can provide slower response times to clients than cluster hosts with adequate system resources. You can compensate for differences in available system resources on individual cluster hosts by directing a higher percentage of client requests to the cluster hosts with greater capacity.
The method of compensating for the differences in system resources is based on the filter mode in the port rule. Only the Multiple Hosts filter mode supports compensating for differences in cluster host resources. For more information about port rule filter modes and, specifically, the Multiple Hosts filter mode, see "Specifying the Affinity and Load-Balancing Behavior of the Custom Port Rule" later in this chapter.
Deciding to Include Virtual Clusters
In some instances, you might want a cluster to appear logically as multiple Network Load Balancing clusters. In Network Load Balancing, you can define virtual clusters that allow you to apply a unique set of port rules to each virtual cluster.
Include virtual clusters in the following situations:
An application or service running on a cluster must exhibit different affinity behavior. For example, you might have an FTP site hosted on multiple servers running IIS 6.0. You want FTP downloads to be load balanced across all cluster hosts. However, you want FTP uploads to be sent to only one cluster host. You can create a virtual cluster that supports FTP downloads with no affinity and another virtual cluster that directs FTP uploads to a specific cluster host.
Multiple applications or services running on the same cluster require different affinity behaviors for the same client traffic type (same TCP/UDP port number). For example, you might have two Web applications hosted on the same cluster running IIS 6.0. Both applications use HTTP (TCP port 80), but one application maintains session state while the other does not. You can create a virtual cluster that supports the application that maintains session state with affinity and another virtual cluster that supports the stateless application without affinity.
The maintenance and operations of an application or service must be independent of other applications and services running on the cluster. For example, you might have two applications hosted on the same cluster. For ease of maintenance, upgrades, or other operations tasks, you can create a virtual cluster for each application. You can then stop client communication with one virtual cluster and perform application maintenance without affecting other applications running on the cluster.
Tip | You can stop client communications with a virtual cluster by using the drain parameter from Nlb.exe or Network Load Balancing Manager, both of which are a part of Windows Server 2003. |
Virtual clusters are specialized port rules that require a virtual IP address, which is assigned to the virtual cluster. You define a virtual cluster by specifying one or more port rules that share the same virtual IP address. As with regular port rules, the port rules that define the virtual cluster must be the same for all cluster hosts in the cluster. For more information about port rule specifications, see "Specifying Client Traffic To Be Affected by the Custom Port Rule" later in this chapter.
Specifying Client Traffic To Be Affected by the Custom Port Rule
Port rules can be divided into two parts. The first part of a port rule identifies the client traffic to be affected by the port rule. You identify this traffic by specifying the cluster IP address and the TCP (or UDP) port range. Network Load Balancing examines all client requests and determines if a port rule applies to each request.
The second part of a port rule determines the affinity and load-balancing characteristics of the cluster. For more information about the affinity and load-balancing characteristics of the cluster, see "Specifying the Affinity and Load-Balancing Behavior of the Custom Port Rule" later in this chapter.
Specify the client traffic to be affected by the port rule by completing the following steps:
Designate the value for the cluster IP address.
Specify the port range for the port rule as a starting port number (From) and an ending port number (To).
Specify the protocol for the port range that you specified.
Note | For a Word document to assist you in documenting your port rule settings, see "NLB Cluster Host Worksheet" (Sdcnlb_1.doc) on the Windows Server 2003 Deployment Kit companion CD (or see "NLB Cluster Host Worksheet" on the Web at http://www.microsoft.com/reskit). |
Designating the Value for the Cluster IP Address
If you are specifying a:
Global port rule for a cluster, specify All.
Port rule for a virtual cluster, specify the virtual IP address for the virtual cluster.
For more information about deciding when to include virtual clusters in your design, see "Identifying Applications or Services That Require Custom Port Rules" earlier in this chapter.
Specifying the Port Range
Specify the port range for the port rule:
Based on the TCP and UDP ports used by the applications and services running on the Network Load Balancing cluster.
As a starting port number (From) and an ending port number (To).
For example, if your Network Load Balancing cluster supports Web applications that use only TCP port 80, you specify a starting port number of 80 and an ending port number of 80.
Specifying the Protocol for the Port Range
If the applications and services are using:
TCP, specify TCP.
UDP, specify UDP.
Both TCP and UDP, specify Both.
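Taken together, the cluster IP address, port range, and protocol determine which port rule applies to a given client request. The following sketch models that matching for hypothetical rules like those used in this chapter's examples; it is an illustration of the selection criteria, not the actual Network Load Balancing implementation, and the rule precedence shown here (virtual-cluster rules checked before the catch-all rule) is an assumption of the sketch.

```python
# Hypothetical port rules: (cluster IP or "All", start port, end port, protocol).
port_rules = [
    ("10.0.0.50", 80, 80, "TCP"),       # virtual cluster for a Web application
    ("10.0.0.51", 20, 21, "TCP"),       # virtual cluster for an FTP site
    ("All", 0, 65535, "Both"),          # default port rule
]

def matching_rule(destination_ip, destination_port, protocol):
    """Return the first port rule whose criteria cover the incoming request."""
    for rule_ip, start, end, rule_protocol in port_rules:
        ip_matches = rule_ip == "All" or rule_ip == destination_ip
        port_matches = start <= destination_port <= end
        protocol_matches = rule_protocol == "Both" or rule_protocol == protocol
        if ip_matches and port_matches and protocol_matches:
            return (rule_ip, start, end, rule_protocol)
    return None

print(matching_rule("10.0.0.50", 80, "TCP"))   # matches the Web application's virtual cluster
print(matching_rule("10.0.0.60", 443, "TCP"))  # falls through to the default port rule
```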
Specifying the Affinity and Load-Balancing Behavior of the Custom Port Rule
The second part of a port rule affects the affinity and load-balancing behavior of the port rule. The affinity and load-balancing behavior of a port rule are specified by the filter mode of the port rule. Only one filter mode can be selected for each port rule. Table 8.8 describes the filter modes that are available.
Filter Mode | Description |
---|---|
Multiple Hosts | Permits all cluster hosts to actively respond to client requests. This is the most common filter mode, because it allows the affinity and load-balancing characteristics to be customized. |
Single Host | Allows only one cluster host in the cluster to actively respond to client requests. This mode can be useful when providing backup servers. For example, in an FTP site where users upload files, single host mode allows one host to receive FTP uploads. If the host fails, the host with the next highest priority takes over for the failed host. |
Disable | Prevents the cluster from responding to a specific type of client traffic. |
Note | For a Word document to assist you in documenting your port rule settings, see "NLB Cluster Host Worksheet" (Sdcnlb_1.doc) on the Windows Server 2003 Deployment Kit companion CD (or see "NLB Cluster Host Worksheet" on the Web at http://www.microsoft.com/reskit). |
Specifying Multiple Hosts Filter Mode Settings
The Multiple Hosts filter mode provides load balancing across multiple cluster hosts. One of the complexities introduced by load balancing is maintaining persistent relationships (affinity) between clients and cluster hosts. For more information about affinity, see "Identifying Applications or Services That Require Custom Port Rules" earlier in this chapter.
Another complexity introduced by load balancing is compensating for differences in cluster host system resources over time. Based on differences in available system resources or applications, specific cluster hosts might not be capable of managing the same number of client requests. For more information about compensating for differences in cluster host system resources, see "Identifying Applications or Services That Require Custom Port Rules" earlier in this chapter.
Specify the settings for the Multiple Hosts filter mode by completing the following steps:
Preserve persistent application sessions by specifying the affinity between a client and a specific cluster host.
Compensate for differences in system resources by specifying the load weight settings.
Selecting the affinity between a client and a specific cluster host
The default port rule uses Single affinity. You can override this default behavior by selecting other port rule affinity options. Table 8.9 lists the port rule affinity options and the reasons for selecting them.
Option | Reasons for Selecting This Option |
---|---|
None | You want to ensure even load balancing among cluster hosts. Client traffic is stateless (for example, HTTP traffic). |
Single | You want to ensure that requests from a specific client (IP address) are sent to the same cluster host. Client state is maintained across TCP connections (for example, HTTPS traffic). |
Class C | Client requests from a Class C IP address range (instead of a single IP address) are sent to the same cluster host. Clients use multiple proxy servers to access the cluster, and they appear to have multiple IP addresses within the same Class C IP address range. Client state is maintained across TCP connections (for example, HTTPS traffic). |
Single affinity is the most common selection when applications require that information about the user state be maintained across TCP connections. Examples of these types of applications include applications that use SSL and applications that retain user information, such as e-commerce shopping cart applications.
Applications that use SSL with Single affinity are efficient because the SSL session IDs are reused. Negotiating a new SSL session ID requires five times the overhead of reusing an SSL session ID. Although negotiating the SSL session ID is transparent to the client, the cumulative increase in overhead can degrade the performance of the cluster.
Applications that retain user information can resolve the affinity requirement by using Network Load Balancing affinity or by using a common session state database or server. Applications that have a common session state database or server are combined with cookie-based affinity to allow any cluster host to restore the appropriate session state. If an application has a common session state database or server, you can select a port rule affinity of None if SSL is not part of the solution.
If an application retains user information by requiring that client transactions be completed on the same cluster host, select a port rule affinity of Single. For a discussion of when an application requires persistent connections to specific cluster hosts, along with an explicit discussion regarding SSL, see "Identifying Applications or Services That Require Custom Port Rules" earlier in this chapter.
Class C affinity is used when Internet clients connect through proxy servers that have different IP addresses within the same Class C IP address range. Some Internet service providers, such as America Online (AOL) in the United States, have proxy servers with different Class C IP addresses. Using different Class C IP addresses on the proxy servers can break the affinity between the client and the cluster host. In situations like this, other methods might be required to preserve cluster host affinity. For example, Application Center 2000 supports cookie-based affinity, which solves the problem of proxy servers with different Class C IP addresses. Alternatively, the application can be redesigned to maintain application session state in a common database or session state server, such as that provided by IIS 6.0 and ASP.NET. When possible, allow the application to maintain session state in this manner, because the application will be more robust and scalable.
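Conceptually, the affinity setting determines how much of the client's source IP address contributes to host selection. The following sketch illustrates that idea by hashing different keys for None, Single, and Class C affinity; it is an illustrative model only, not the actual Network Load Balancing distribution algorithm, and the addresses shown are hypothetical.

```python
import hashlib

def select_host(affinity, source_ip, source_port, host_count):
    """Pick a cluster host index from traffic attributes (conceptual model only)."""
    if affinity == "None":
        # Every connection can land on a different host.
        key = f"{source_ip}:{source_port}"
    elif affinity == "Single":
        # All connections from one client IP address map to the same host.
        key = source_ip
    elif affinity == "Class C":
        # All clients in the same Class C range (first three octets) map to the same host.
        key = ".".join(source_ip.split(".")[:3])
    else:
        raise ValueError(f"unknown affinity: {affinity}")
    digest = hashlib.md5(key.encode()).hexdigest()
    return int(digest, 16) % host_count

# Two proxy addresses in the same Class C range map to the same host only with Class C affinity.
for ip in ("198.51.100.10", "198.51.100.20"):
    print(ip,
          "Single ->", select_host("Single", ip, 1024, 4),
          "Class C ->", select_host("Class C", ip, 1024, 4))
```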
Specifying the load weight settings for each cluster host
The default port rule that is created during the installation of Network Load Balancing uses equal load weights. To override the default behavior, you can specify a custom load weight to be handled by each cluster host, as shown in Table 8.10.
Method | Description |
---|---|
Equal | Evenly distributes client requests across all cluster hosts when the available system resources are the same. |
Load weight | Distributes client requests based on the available system capacity of each cluster host, which can differ because of the hardware configuration of each cluster host or the applications and services running on each cluster host. |
Specify the portion of client requests to be handled by each cluster host by completing one of the following steps:
Specify Equal when the cluster hosts have the same available system capacity, so that client requests are evenly distributed across all cluster hosts.
Otherwise, specify the load weight value for the cluster host.
The load weight value can range from 0 (zero) to 100. You can prevent a cluster host from handling any client requests by specifying a load weight of 0.
The percentage of client requests that are handled by each cluster host is computed by dividing the local load weight by the sum of all load weights across the cluster. Table 8.11 shows an example of the relationship between load weight and the percentage of client requests that are handled by each cluster host.
Cluster Host | Load Weight | Percentage of Client Requests |
---|---|---|
ClusterHost-A | 50 | 40% |
ClusterHost-B | 50 | 40% |
ClusterHost-C | 25 | 20% |
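The percentages in Table 8.11 follow directly from the load-weight formula described above. A minimal sketch of the calculation:

```python
# Load weights from Table 8.11.
load_weights = {"ClusterHost-A": 50, "ClusterHost-B": 50, "ClusterHost-C": 25}

total_weight = sum(load_weights.values())
for host, weight in load_weights.items():
    percentage = 100 * weight / total_weight
    print(f"{host}: {percentage:.0f}% of client requests")
# ClusterHost-A: 40%, ClusterHost-B: 40%, ClusterHost-C: 20%
```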
Specifying Single Host Filter Mode Settings
As with the cluster host priority setting for Network Load Balancing, which is discussed in "Specifying the Cluster Host Parameters" earlier in this chapter, you must specify the handling priority value for each cluster host that has the port rule.
Specify the handling priority for Single Host filter mode by completing the following steps:
Specify a value of 1 as the priority for the cluster host that will always handle the client traffic designated in the port rule criteria.
Increase the value assigned to the previous cluster host by 1, assign that value to the next cluster host, and repeat the process until you have specified the handling priority for all cluster hosts.
Important | If you specify the same handling priority for two cluster hosts, the last cluster host that starts will fail to join the cluster. An error message describing the problem will be written to the Windows system event log. The existing cluster hosts will continue to operate as before if cluster convergence completed previously. Otherwise, convergence must complete before traffic is handled by the cluster. |
Specifying Disable Filter Mode Settings
As previously mentioned, the Disable filter mode means that the client traffic corresponding to the port rule is blocked. Unlike the other filter modes, after you select this mode, no additional settings or values are required.
Example: Controlling the Distribution of Client Traffic Within the Cluster
An organization implements the following solutions, which include Network Load Balancing, to reduce outages and improve performance:
VPN remote access, based on Routing and Remote Access
E-commerce Web applications, based on IIS 6.0
A customer support FTP site, based on IIS 6.0
VPN remote access solution with Routing and Remote Access
The VPN remote access solution, based on Routing and Remote Access, provides remote access to the organization's private network by establishing PPTP and L2TP VPN tunnels through the Internet. The VPN remote access server farm contains five servers that have identical system resources. Network Load Balancing is enabled on the network adapters connected to the Internet. A cluster IP address has been configured for the cluster, and the appropriate DNS entries have been determined but not yet created.
Because the default port rule provides the appropriate affinity and load balancing, no custom port rules are required for the cluster.
E-commerce Web application solution with IIS 6.0
The e-commerce Web application solution, based on IIS 6.0, includes the following applications:
Web-based, e-commerce application built with static HTML
Web-based, e-commerce application built with ASP
Both applications use:
HTTPS (TCP port 443)
HTTP (TCP Port 80)
The application that uses ASP maintains user session information after the user is authenticated.
The two e-commerce Web application solutions are combined on the same IIS 6.0 Web farm and, subsequently, the same Network Load Balancing cluster. The IIS 6.0 Web farm contains four computers that have identical system resources. Network Load Balancing is enabled on the network adapters connected to the Internet. A cluster IP address is configured for the cluster, and the appropriate DNS entries have been determined but not yet created.
To facilitate the combination of the two e-commerce Web applications into a single Web farm and cluster, each e-commerce application is assigned to a virtual cluster. By assigning each e-commerce application to a virtual cluster, client access can be stopped individually to allow operations tasks, such as upgrades, to be performed without disrupting client access to the other e-commerce application.
Table 8.12 lists the cluster, virtual cluster, and cluster hosts in the organization's e-commerce solution.
Cluster Name | Solution | Type | Cluster Host |
---|---|---|---|
NLBCluster-B | Web-based, e-commerce applications | Cluster | NLBClusterB-01 NLBClusterB-02 NLBClusterB-03 NLBClusterB-04 |
VirCluster-A | IIS 6.0 and HTML | Virtual cluster | Same as NLBCluster-B |
VirCluster-B | IIS 6.0 and ASP | Virtual cluster | Same as NLBCluster-B |
Table 8.13 lists the port rules that meet the requirements of the e-commerce Web application solution that includes IIS 6.0 and Network Load Balancing.
Cluster IP Address | Start | End | Protocol | Filtering Mode | Load Weight | Affinity |
---|---|---|---|---|---|---|
VirtualIP-A | 80 | 80 | TCP | Multiple | Equal | None |
VirtualIP-A | 443 | 443 | TCP | Multiple | Equal | Single |
VirtualIP-B | 80 | 80 | TCP | Multiple | Equal | Single |
VirtualIP-B | 443 | 443 | TCP | Multiple | Equal | Single |
Because the port rules, listed in Table 8.13, have a specified load weight of Equal, the same port rules are used for all the cluster hosts. The virtual clusters, VirtualIP-A and VirtualIP-B, are dedicated to the respective e-commerce applications. Because both applications use HTTP (TCP port 80) and HTTPS (TCP port 443), port rules must be specified for each protocol in each virtual cluster.
Customer support FTP site solution with IIS 6.0
The customer support FTP site solution:
Is based on IIS 6.0.
Provides secured and unsecured access to files.
Allows users to upload files to specified areas on the FTP site.
Requires file uploads that are centralized on one FTP server to avoid users uploading duplicate files.
Uses TCP port 20 for FTP.
Uses TCP port 21 for FTP.
The customer support FTP site runs on an IIS 6.0 farm and, subsequently, a Network Load Balancing cluster. The IIS 6.0 farm contains three computers that have identical system resources. Network Load Balancing is enabled on the network adapters connected to the Internet. A cluster IP address has been configured for the cluster, and the appropriate DNS entries have been determined but not yet created.
To support the number of simultaneous users who are performing FTP downloads, all FTP download requests must be load balanced across the entire IIS 6.0 farm. However, to ensure that users upload files to only one location, FTP uploads must be directed to only one server in the IIS 6.0 farm.
To facilitate the differences in cluster host affinity between FTP uploads and downloads, each direction of FTP transfer is assigned to a different virtual cluster. By assigning FTP uploads to one virtual cluster and FTP downloads to another virtual cluster, the organization ensures that FTP downloads can be load balanced across all cluster hosts, while FTP uploads are sent to only one cluster host in the cluster.
Table 8.14 lists the cluster, virtual cluster, and cluster hosts selected in the organization's customer support FTP site solution.
Cluster Name | Solution | Type | Cluster Host |
---|---|---|---|
NLBCluster-C | FTP site | Cluster | NLBClusterC-01 NLBClusterC-02 NLBClusterC-03 |
VirCluster-C | FTP download | Virtual cluster | All cluster hosts can be used for download. |
VirCluster-D | FTP upload | Virtual cluster | NLBClusterC-03 is to be used for upload. |
Table 8.15 lists the port rules that meet the requirements of the organization's customer support FTP site solution, which includes IIS 6.0 and Network Load Balancing.
Cluster IP Address | Start | End | Protocol | Filtering Mode | Load Weight | Affinity | Handling Priority |
---|---|---|---|---|---|---|---|
VirtualIP-C | 20 | 21 | TCP | Multiple Hosts | Equal | Single | NA |
VirtualIP-D | 20 | 21 | TCP | Single Host | NA | NA | NLBClusterC-03 = 1 NLBClusterC-01 = 2 NLBClusterC-02 = 3 |
The port rules that define VirCluster-C are identical for all the cluster hosts in NLBCluster-C. The port rules that define VirCluster-D are unique for each cluster host in NLBCluster-C, because the handling priority for each cluster host is unique. NLBClusterC-03 is the cluster host that is designated for FTP uploads, and it is assigned a handling priority of 1 to ensure that all file uploads are sent to NLBClusterC-03. The handling priority for NLBClusterC-01 and for NLBClusterC-02 must be unique and of a lower priority than for NLBClusterC-03.
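The handling priorities in Table 8.15 also determine the failover order for the upload virtual cluster: the active host with the lowest handling priority value handles the traffic. A minimal sketch of that selection, using the priorities from this example:

```python
# Handling priorities for VirCluster-D (FTP uploads), from Table 8.15.
handling_priorities = {
    "NLBClusterC-03": 1,
    "NLBClusterC-01": 2,
    "NLBClusterC-02": 3,
}

def upload_host(active_hosts):
    """Return the active host with the lowest handling priority value."""
    candidates = [host for host in active_hosts if host in handling_priorities]
    return min(candidates, key=handling_priorities.get) if candidates else None

print(upload_host(["NLBClusterC-01", "NLBClusterC-02", "NLBClusterC-03"]))  # NLBClusterC-03
print(upload_host(["NLBClusterC-01", "NLBClusterC-02"]))                    # NLBClusterC-01 after failover
```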
Specifying Cluster Network Connectivity
The scalability and availability improvements provided by your cluster depend on the network connectivity that is provided to the cluster. An improperly designed network infrastructure can cause client response-time problems and application outages. Specify the connections between the cluster and the client computers, the other servers within your organization's network, and the operation consoles to achieve the connectivity goals of your network.
The goals of network connectivity to the cluster are to provide:
High-capacity network connectivity to ensure adequate client response time.
Redundant routing and switching infrastructure.
Restricted management of the cluster and cluster resources.
Valid IP configuration for each network interface in each cluster host.
Specify the cluster network connectivity by completing the following steps:
Select the number of network adapters in each cluster host.
Select the unicast method or the multicast method of distributing incoming client requests to cluster hosts.
Include support for teaming network adapters.
Determine the network infrastructure requirements.
After you have specified the cluster network connectivity, document your decisions on your organization's network diagram (for example, on a Microsoft Visio drawing).
Note | For a Word document to assist you in documenting your decisions, see "NLB Cluster Host Worksheet" (Sdcnlb_1.doc) on the Windows Server 2003 Deployment Kit companion CD (or see "NLB Cluster Host Worksheet" on the Web at http://www.microsoft.com/reskit). |
Selecting the Number of Network Adapters in Each Cluster Host
At a minimum, you must connect each cluster host in your cluster to a network segment that has connectivity to the client computers. In most solutions, you need to connect each cluster host to other servers and to management and operations consoles in your organization.
With the appropriate cluster network connectivity, your solution will be properly secured, highly available, highly scalable, and easy to manage. Any design deficiencies in the cluster network connectivity portion of the design can compromise the security, availability, scalability, and manageability of the cluster.
Select the number of network adapters in each cluster host by completing the following steps:
Identify a network interface, referred to as the cluster adapter, that provides connectivity to the client computers. Include the appropriate IP configuration (IP address, subnet mask, and so forth) for the cluster adapter, so that the cluster host is on the same physical or virtual subnet as the other cluster hosts. All hosts of the same cluster must be on the same physical LAN or virtual LAN (VLAN). In instances where the cluster hosts are not connected to the same physical subnet, ensure that the cluster hosts are connected to a virtual subnet and that your routing and switching infrastructure supports virtual subnets.
Specify a network interface, referred to as the management adapter, which will provide connectivity to other servers within your organization and to management and operations consoles within your organization. Include the appropriate IP configuration (IP address, subnet mask, and so forth) for the management adapter.
Important | To prevent unauthorized altering of the Network Load Balancing configuration, enable Network Load Balancing administration only on the management adapter. This is done by restricting traffic through your firewalls or routers. |
In instances where only the cluster adapter is included and your applications require peer-to-peer communications between cluster hosts (beyond the cluster heartbeat traffic), see the discussion on unicast mode and multicast mode in "Selecting the Unicast or Multicast Method of Distributing Incoming Requests" later in this chapter.
Figure 8.7 illustrates cluster network connectivity that includes connectivity to clients, other servers in an organization, and management and operations consoles.

Figure 8.7: Cluster Network Connectivity
Selecting the Unicast or Multicast Method of Distributing Incoming Requests
All cluster hosts in a cluster receive all incoming client requests that are destined for the virtual IP address that is assigned to the cluster. The Network Load Balancing load-balancing algorithm, which runs on each cluster host, determines which cluster host processes and responds to the client request.
You can distribute incoming client requests to cluster hosts by using the unicast or multicast method. Both methods send the incoming client requests to all hosts by sending the request to the cluster's MAC address.
When you use the unicast method, all cluster hosts share an identical unicast MAC address. Network Load Balancing overwrites the original MAC address of the cluster adapter with the unicast MAC address that is assigned to all the cluster hosts.
When you use the multicast method, each cluster host retains the original MAC address of the adapter. In addition to the original MAC address, the adapter is assigned a multicast MAC address, which is shared by all cluster hosts. The incoming client requests are sent to all cluster hosts by using the multicast MAC address.
Select the unicast method for distributing client requests, unless only one network adapter is installed in each cluster host and the cluster hosts must communicate with each other. Because Network Load Balancing modifies the MAC address of all cluster hosts to be identical, cluster hosts cannot communicate directly with one another when using the unicast method. When peer-to-peer communication is required between cluster hosts, include an additional network adapter or select the multicast method.
For more information about the interaction between the method of distributing incoming requests and layer 2 switches, see article Q193602, "Configuration Options for WLBS Hosts Connected to Layer 2 Switches," in the Microsoft Knowledge Base. To find this article, see the Microsoft Knowledge Base link on the Web Resources page at www.microsoft.com/windows/reskits/webresources.
Selecting the Unicast Method
In the unicast method:
The cluster adapters for all cluster hosts are assigned the same unicast MAC address.
The outgoing MAC address for each packet is modified, based on the cluster host's priority setting, to prevent upstream switches from discovering that all cluster hosts have the same MAC address. This modification of the outgoing MAC address is appropriate for switches. When a hub is used to connect the cluster hosts, disable the modification of the outgoing MAC address. On Windows Server 2003, you can disable this modification by setting the registry entry MaskSourceMAC, of data type REG_DWORD, to 0x0. MaskSourceMAC is located in HKLM\SYSTEM\CurrentControlSet\Services\WLBS\Parameters\Interface\Adapter-GUID (where Adapter-GUID is the GUID assigned to the network adapter in the server). A scripted version of this change is sketched after this list.
Caution | Do not edit the registry unless you have no alternative. The registry editor bypasses standard safeguards, allowing settings that can damage your system, or even require you to reinstall Windows. If you must edit the registry, back it up first and see the Registry Reference on the Microsoft Windows Server 2003 Deployment Kit companion CD or at http://www.microsoft.com/reskit. |
The unicast MAC address is derived from the cluster's IP address to ensure uniqueness outside the cluster hosts.
Communication between cluster hosts, other than Network Load Balancing-related traffic (such as heartbeat), is only available when you install an additional adapter, because the cluster hosts all have the same MAC address.
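If you need to disable source MAC masking on several cluster hosts, the MaskSourceMAC change described in this list can be scripted. The following Python sketch uses the standard winreg module and a placeholder adapter GUID; it assumes Python is available on the host and is a minimal illustration rather than a supported tool. Observe the registry caution earlier in this list and run it only with administrative rights.

```python
# Minimal sketch: set MaskSourceMAC to 0x0 to disable source MAC masking
# when a hub connects the cluster hosts. Back up the registry first.
import winreg

# Placeholder; replace with the GUID of the cluster adapter on this host.
ADAPTER_GUID = "{00000000-0000-0000-0000-000000000000}"
KEY_PATH = r"SYSTEM\CurrentControlSet\Services\WLBS\Parameters\Interface" + "\\" + ADAPTER_GUID

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0, winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, "MaskSourceMAC", 0, winreg.REG_DWORD, 0x0)
print("MaskSourceMAC set to 0x0 for adapter", ADAPTER_GUID)
```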
Although the unicast method works in all routing situations, it has the following disadvantages:
A second network adapter is required to provide peer-to-peer communication between cluster hosts.
If the cluster is connected to a switch, incoming packets are sent to all the ports on the switch, which can cause switch flooding.
Selecting the Multicast Method
In the multicast method:
The cluster adapter for each cluster host retains the original hardware unicast MAC address (as specified by the hardware manufacturer of the network adapter).
The cluster adapters for all cluster hosts are assigned a multicast MAC address.
The multicast MAC address is derived from the cluster's IP address.
Communication between cluster hosts is not affected, because each cluster host retains a unique MAC address.
By using the multicast method with Internet Group Membership Protocol (IGMP), you can limit switch flooding, if the switch supports IGMP snooping. IGMP snooping allows the switch to examine the contents of multicast packets and associate a port with a multicast address. Without IGMP snooping, switches might require additional configuration to tell the switch which ports to use for the multicast traffic; otherwise, switch flooding occurs, as with the unicast method.

The multicast method has the following disadvantages:
Upstream routers might require a static Address Resolution Protocol (ARP) entry, because routers might not accept an ARP response that resolves a unicast IP address to a multicast MAC address. (A sketch that derives the cluster MAC addresses used in such an entry follows this list.)
Without IGMP, switches might require additional configuration to tell the switch which ports to use for the multicast traffic.
Upstream routers might not support mapping a unicast IP address (the cluster IP address) with a multicast MAC address. In these situations, you must upgrade or replace the router. Otherwise, the multicast method is unusable.
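Both the unicast and the multicast MAC addresses discussed in these sections are derived from the cluster's primary IP address. The following sketch shows the commonly documented derivation (a 02-bf prefix for unicast, 03-bf for multicast, and 01-00-5e-7f plus the last two IP octets when IGMP is enabled); treat it as illustrative and confirm the addresses that your cluster actually generates, for example before you create a static ARP entry on an upstream router.

```python
def cluster_mac_addresses(cluster_ip):
    """Return the NLB MAC addresses that the commonly documented convention
    derives from the cluster IP address (02-bf / 03-bf / 01-00-5e-7f)."""
    octets = [int(part) for part in cluster_ip.split(".")]

    def fmt(parts):
        return "-".join("{:02x}".format(p) for p in parts)

    return {
        "unicast": fmt([0x02, 0xBF] + octets),
        "multicast": fmt([0x03, 0xBF] + octets),
        "igmp_multicast": fmt([0x01, 0x00, 0x5E, 0x7F] + octets[2:]),
    }

# Hypothetical cluster IP address used for illustration only.
for method, mac in cluster_mac_addresses("10.0.0.100").items():
    print(method, mac)
```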
Including Support for Teaming Network Adapters
Many hardware vendors support teaming network adapters to increase network bandwidth capacity and to provide an additional level of network redundancy. Teaming network adapters are multiple network adapters in the same computer that logically act as a single, virtual network adapter. The virtual network adapter can provide load balancing of traffic between the physical network adapters or automatic failover in the event that one of the network adapters fails.
Teaming network adapter drivers and Network Load Balancing both try to manipulate the MAC addresses of the network adapters. As a result, teaming network adapters might work only in limited configurations. As a general rule, avoid including teaming network adapters in your Network Load Balancing solutions.
Note | Because each implementation of teaming network adapters is vendor-specific, the support for teaming network adapters is provided by the hardware vendor, and it is beyond the scope of Microsoft Product Support Services. For more information about teaming network adapters in your Network Load Balancing solution, consult the teaming network adapter hardware recommendations. |
Determining the Network Infrastructure Requirements
The network infrastructure affects your Network Load Balancing solution more than any other component. No matter how much Network Load Balancing enables you to scale out your solution, an inadequate routing and switching infrastructure can create a number of problems.

Even if you optimize the Network Load Balancing cluster, an inadequate routing and switching infrastructure can restrict available network throughput and prevent clients from achieving any improvement in response times. Additionally, an inadequate routing and switching infrastructure can allow a single failure in a router, switch, or network path to disrupt communications with the cluster.

Determine the network infrastructure requirements for your cluster by completing the following steps:
Determine the IP subnet requirements for the cluster.
Determine how the cluster handles inbound and outbound traffic.
Determine when to include switches or hubs to connect cluster hosts to one another.
For more information about how the network infrastructure affects availability, see "Ensuring Availability in NLB Solutions" later in this chapter. For more information about how the network infrastructure affects scalability, see "Scaling NLB Solutions" later in this chapter.
Determining Cluster IP Subnet Requirements
Network Load Balancing requires that all cluster hosts be on the same IP subnet or virtual IP subnet (VLAN). This is because all cluster hosts share the cluster's virtual IP address. The routing infrastructure sends client requests to all cluster hosts by using a virtualized unicast MAC address or a multicast MAC address. For more information about unicast and multicast addressing, see "Selecting the Unicast or Multicast Method of Distributing Incoming Requests" earlier in this chapter.
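As a quick design check, you can confirm that the planned cluster host addresses and the cluster's virtual IP address all fall within one subnet. The following sketch uses Python's standard ipaddress module with hypothetical addresses; substitute the subnet and addresses from your own design.

```python
# Illustrative check (hypothetical addresses) that all cluster hosts and the
# cluster virtual IP address belong to the same IP subnet, as Network Load
# Balancing requires.
import ipaddress

subnet = ipaddress.ip_network("192.168.10.0/24")
cluster_vip = ipaddress.ip_address("192.168.10.100")
host_ips = [ipaddress.ip_address(a) for a in
            ("192.168.10.11", "192.168.10.12", "192.168.10.13")]

assert cluster_vip in subnet, "cluster VIP is outside the planned subnet"
assert all(ip in subnet for ip in host_ips), \
    "one or more cluster hosts are outside the planned subnet (or VLAN)"
print("All cluster hosts and the virtual IP share", subnet)
```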
Determining How the Cluster Handles Inbound and Outbound Traffic
Network Load Balancing handles inbound and outbound cluster traffic differently. The differences between inbound client requests and outbound responses affect the network infrastructure and the use of switches and hubs.

Inbound cluster traffic is sent to all cluster hosts by using broadcast or multicast traffic, which is required so that every cluster host can receive the inbound traffic. Switches learn where to send packets by examining the source addresses of packets sent from the computers that are directly connected to the switch. After the switch learns a computer's location, the switch sends subsequent packets for that computer only to the corresponding switch port.

From an inbound-traffic perspective, the switch behaves like a hub, because inbound cluster traffic is sent to all switch ports. From an outbound-traffic perspective, however, the switch provides isolation by preventing the other cluster hosts from seeing the response that the servicing cluster host sends to the client.

For more information about the behavior of Network Load Balancing traffic, see the Network Load Balancing Technical Overview link on the Web Resources page at http://www.microsoft.com/windows/reskits/webresources.
Determining When to Include Switches or Hubs for Interconnecting Cluster Hosts
For most networks, switches are the preferred technology for connecting network devices to one another. Although many existing network infrastructures include hubs, switches are typically used in new deployments.

Although Network Load Balancing works in most configurations of switches and hubs, some configurations provide better performance and are easier to maintain and operate. The recommended configuration is to use switches to interconnect cluster hosts. Configurations that include hubs are also supported, but they require a more complex network infrastructure and configuration.
The supported configurations for switches and hubs include:
Cluster hosts connected to a switch that is dedicated to the cluster.
Cluster hosts connected to a switch that is shared with other devices.
Cluster hosts connected to a hub that is connected to a switch.
Cluster hosts connected to a switch that is dedicated to the cluster
In this configuration, the cluster hosts connect to a switch that is dedicated to the cluster. Because the inbound cluster traffic is sent to all ports on the switch, the primary advantage that a switch provides for inbound traffic (segregating traffic to a limited number of switch ports) is lost.

However, for outbound traffic, only the cluster host responding to the client request is aware of the traffic, because the switch isolates the outbound traffic to the port connected to the responding cluster host. This reduces traffic congestion for all the cluster hosts that are connected to the switch.
Cluster hosts connected to a switch that is shared with other devices
In this configuration, the cluster hosts are connected to a switch that is shared with other devices. The switch sends inbound cluster traffic to the other devices as well, creating unnecessary traffic for those devices.

You can isolate the inbound cluster traffic to only the cluster hosts by establishing a VLAN that comprises the ports connected to the cluster hosts. After you establish the VLAN, inbound cluster traffic is sent only to the cluster hosts and not to the other devices that are attached to the same switch. Using a VLAN to segregate inbound cluster traffic works with either the unicast or the multicast method of distributing incoming requests. You can also use the multicast method with IGMP to limit inbound cluster traffic to only the cluster hosts.

Because outbound cluster traffic is sent directly to the client that originated the request, only the cluster host responding to the request sees the outbound cluster traffic. All other cluster hosts, and the other devices connected to the same switch, are unaware of the outbound cluster traffic.
Cluster hosts connected to a hub that is connected to a switch
From the perspective of inbound cluster traffic, a switch provides no additional benefit over a hub, except for being able to define a VLAN that isolates inbound traffic from other devices attached to the switch. Because all cluster hosts must receive the inbound cluster traffic, the switch sends all inbound traffic to all ports that are attached to the switch (or to all ports that are assigned to the same VLAN).

From the perspective of outbound cluster traffic, a switch provides a greater advantage, because it reduces the network contention for outbound cluster traffic. With a switch, only the cluster host originating the outbound cluster traffic is aware of the traffic, because outbound traffic is switched. Using a hub causes network contention for all cluster hosts: with a hub, all cluster hosts receive, and subsequently determine whether they need to process, the outbound cluster traffic.

To eliminate the network contention caused by using a hub, install an additional network adapter in each cluster host for the purpose of responding to client traffic. Inbound requests are received through the cluster adapters in each cluster host, while the outbound responses are sent to the clients through the additional network adapter.

Because of the increased complexity of configuration (adding a network adapter to each cluster host) and network infrastructure (adding a switch to connect all the additional network adapters to the clients), this is not a recommended configuration. Because a switch is required for the additional network adapters anyway, it is easier to eliminate the additional network adapters and connect the cluster adapters to a switch.

If you are considering adding network adapters to improve cluster performance, consider increasing the data rate of the cluster network adapters and the corresponding network infrastructure instead. For example, to increase available network bandwidth, specify 100 megabits per second (Mbps) network adapters instead of 10 Mbps network adapters (along with the appropriate upgrades to intermediary switches and routers). For more information about increasing the available network bandwidth to the cluster, see "Increasing Available Network Bandwidth to the Cluster" later in this chapter.
Example: Specifying Cluster Network Connectivity
An organization has e-commerce Web applications that are accessed by users on the Internet. The organization's design includes Network Load Balancing to minimize application outages and to improve performance. The e-commerce Web applications, running on IIS 6.0 and Windows Server 2003, will reside in the organization's perimeter network, which is located between the Internet and the organization's private network.

Figure 8.8 illustrates the existing firewalls, Internet connectivity, and connectivity to the organization's private network.

Figure 8.8: Existing Network Diagram Before the E-Commerce Web Application Solution
The e-commerce Web application requires support for 1,500 simultaneous users and an acceptable data transfer rate of 10 kilobits per second (Kbps) for each user, for a total aggregate data rate of 15 Mbps.

The organization has conducted lab testing on the devices in the organization's design, including routers, switches, network segments, and servers, and it has determined their capacity as listed in Table 8.16.
Device | Results Verified in the Lab |
---|---|
IIS server | Supports 200 simultaneous users. Provides total aggregate data rates up to 50 Mbps. |
Router | Supports virtual IP subnets (VLANs). Provides total aggregate data rate of 20 Mbps. Requires manual ARP registration of a unicast IP address with a multicast MAC address. |
Switch | Supports virtual IP subnets (VLANs). Provides total aggregate data rate of 60 Mbps. |
Figure 8.9 illustrates the organization's network infrastructure after including the e-commerce Web solution.

Figure 8.9: E-Commerce Web Solution with Network Load Balancing
Table 8.17 lists the design decisions in specifying cluster network connectivity and the reasons for making the decisions.
Decision | Reason for the Decision |
---|---|
Include eight IIS servers (cluster hosts). | Each IIS server supports a maximum of 200 users, and eight servers are required to support 1,500 simultaneous users. (A short sketch after this table reproduces the arithmetic.) |
Specify unicast mode for distributing incoming client requests. | The network infrastructure supports unicast mode, and unicast avoids the manual ARP registration that the router requires when a unicast IP address is mapped to a multicast MAC address. |
Specify VLAN-01 between Switch-01 and Switch-02. | All hosts on the cluster must belong to the same IP subnet. |
Place cluster hosts on Switch-01 and Switch-02. | Bandwidth of a single switch is insufficient to handle cluster host traffic. |
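The host count and bandwidth figures behind the decisions in Table 8.17 can be reproduced with simple arithmetic. The following sketch uses the example requirements and the lab-verified capacities from Table 8.16 and is illustrative only.

```python
import math

# Requirements from the example and lab-verified capacities from Table 8.16.
simultaneous_users = 1500
per_user_kbps = 10
users_per_iis_server = 200
router_capacity_mbps = 20

hosts_needed = math.ceil(simultaneous_users / users_per_iis_server)
aggregate_demand_mbps = simultaneous_users * per_user_kbps / 1000

print("IIS servers (cluster hosts) needed:", hosts_needed)        # 8
print("Aggregate client demand:", aggregate_demand_mbps, "Mbps")  # 15.0
print("Within router capacity:", aggregate_demand_mbps <= router_capacity_mbps)
```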
Ensuring Ease of Cluster Management and Operations
After your Network Load Balancing cluster is deployed, the operations staff in your organization takes primary responsibility for the day-to-day operations of the cluster. Over the IT life cycle of your cluster, in addition to the normal administration tasks required to maintain the operating system, such as applying service packs and upgrading the operating system, the operations team also maintains and upgrades the applications that run on the cluster.

Follow the guidelines presented in Table 8.18 to help create Network Load Balancing clusters that are easier to manage and operate.
Guideline | Explanation |
---|---|
Include sufficient cluster hosts to support maintenance and failover needs. | Cluster management is easier if enough cluster hosts exist to support client requests when one of the hosts is offline due to maintenance, upgrade, or failure. During lab testing, determine the number of cluster hosts required to support the required client traffic, and then add at least one additional cluster host. |
Create a network infrastructure design that can accommodate a change in the number of cluster hosts. | The network infrastructure design must be flexible enough to allow the removal and addition of cluster hosts. When creating the supporting network infrastructure design, include spare ports on the switches or hubs that connect the cluster hosts, so that cluster hosts can be added as required. With this additional network infrastructure in place, the operations team can add cluster hosts without redesigning or recabling the existing network infrastructure. |
During the IT life cycle of the cluster, the operations team performs upgrades on the cluster. The operations team typically performs these as rolling upgrades, upgrading individual cluster hosts, one at a time, until the entire cluster is upgraded. For more information about performing rolling upgrades, see "Deploying Network Load Balancing" in this book.

Scaling out improves cluster performance by adding cluster hosts to the cluster and distributing client traffic across more cluster hosts. For more information about scaling your cluster, see "Scaling NLB Solutions" later in this chapter.
Example: Ensuring Ease of Cluster Management and Operations
An organization has five mission-critical applications that must be run by remote users. These remote users connect to the organization's private network through a VPN tunnel and run the applications on servers running Terminal Services.

Because of the number of simultaneous remote users, three servers running Terminal Services are required. The number of servers was determined during the lab-testing phase of the solution. To provide load balancing and fault tolerance, the servers are part of a Network Load Balancing cluster.

The number of remote users that simultaneously access the Terminal Services farm is expected to increase over the lifetime of the farm. Also, the applications running on the farm need to be upgraded periodically, which requires that the servers be restarted.

Table 8.19 lists the design decisions for ensuring that the cluster is easier to maintain and operate, and the reasons for making those decisions.
Decision | Reason for the Decision |
---|---|
Include an additional cluster host. | Ensures that upgrades to individual servers in the Terminal Services farm can be performed without affecting client response times. (A short sizing sketch follows this table.) |
Provide unused ports on Switch-01. | Provides expansion for the addition of cluster hosts (servers running Terminal Services). |
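The first decision in Table 8.19 applies the guideline from Table 8.18 of keeping at least one cluster host of spare capacity. The following minimal sketch restates that sizing rule; the per-host user figure is assumed for illustration, because the example states only that lab testing showed three servers are required.

```python
import math

def cluster_hosts_to_deploy(peak_users, users_per_host, spares=1):
    """Hosts needed to carry the peak load, plus at least one spare host so
    that rolling upgrades and failures do not affect client response times."""
    return math.ceil(peak_users / users_per_host) + spares

# Hypothetical sizing inputs: 150 peak users at 50 users per server yields
# the three required servers from the example, plus one spare.
print(cluster_hosts_to_deploy(peak_users=150, users_per_host=50))  # 4 hosts
```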
Figure 8.10 illustrates the network infrastructure after the optimization of the design to ensure ease of cluster management and operations.

Figure 8.10: Infrastructure Optimized for Ease of Cluster Management and Operations