Quality of Service Design
Globenet believes that mission-critical and multimedia applications can be supported cost-effectively in an intranet that spans multiple continents, including regions where bandwidth is expensive, only if the network provides quality of service levels that are finely optimized for the different types of traffic. One reason is that the very significant propagation delays involved in very long-distance transmission leave very little room in some application delay budgets for network-induced delay and jitter. To meet the requirements of its global customers, Globenet adopted an aggressive market positioning on QoS and SLAs and elected to provide a rich QoS offering.
In particular, Globenet supports five classes of service on the access links to its Layer 3 MPLS VPN service:
VPN Voice
VPN Video
VPN Business Latency
VPN Business Throughput
VPN Standard
A separate queue is used to schedule each of these classes on the access links between CE routers and PE routers. Routing traffic and management traffic between CE routers and PE routers is handled in the VPN Business Throughput CoS.
For the Internet service, a single CoS is supported on the access that is equivalent to the VPN Standard CoS of the Layer 3 MPLS VPN service.
Globenet also supports one queue for each of the five CoSs in the core network in all regions (except in North America, where these CoSs are aggregated into three queues) for the following reasons:
Because of the relatively tight bandwidth provisioning policy in Globenet's core network in many parts of the world
Because of long propagation delays involved in cross-continental or intercontinental transmission
Because of the resulting requirement for fine control of QoS
The queue carrying VPN Voice is optimized for real-time operations. It is called the Expedited Forwarding (EF) queue. This queue is also used to schedule the traffic from the ATM pseudowires that support trunking of ATM switches over the IP/MPLS core in North America and EMEA. When Globenet offers a virtual IP leased-line service in the future (for example, to Africa Telecom so that it, in turn, can build its VPOPs through Globenet's network), the corresponding traffic will also be scheduled in the EF queue.
Table 5-4 details the mapping between each type of traffic, the DSCP values, the queues on the access links, the EXP/DSCP values in the core, and the queues in the core.
| Class of Service | DSCP on Access | Queue on Access | EXP/DSCP in Core | Queue in Core (Except North America) | Queue in Core (North America) |
|---|---|---|---|---|---|
| VPN Voice | 46 | EF | EXP=5 | EF | EF |
| ATM pseudowires | - | - | EXP=5 | EF | EF |
| Future: virtual IP leased line | - | - | EXP=5 | EF | EF |
| VPN Video | 34 | AF4 | EXP=4 | AF4 | AF2 |
| VPN Business Latency | 26 | AF3 | EXP=3 | AF3 | AF2 |
| VPN Business Throughput | 16 (in-contract), 8 (out-of-contract) | AF2 | EXP=2 (in-contract), EXP=1 (out-of-contract) | AF2 | AF2 |
| Management on access | 16 | AF2 | EXP=2 | AF2 | AF2 |
| Routing on access | 48 | AF2 | [*] | [*] | [*] |
| Control in core (routing, management, signaling) | 48 | - | DSCP=48, EXP=6 | AF2 | AF2 |
| VPN Standard and Internet | 0 | DF | EXP=0 | DF | DF |
[*] As per the Layer 3 MPLS VPN model, the routing traffic carried on the access links is purely between CE router and PE router.
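The mapping in Table 5-4 lends itself to a simple lookup structure. The sketch below captures the five customer-visible CoSs from the table (the dictionary layout and function name are ours, purely for illustration):

```python
# Illustrative lookup for the Table 5-4 mapping (structure and names are
# ours, not Globenet's). Each class of service maps its access DSCP to the
# core EXP marking and the queues used outside and inside North America.
COS_MAP = {
    "VPN Voice":               {"dscp": 46, "access_q": "EF",  "exp": 5, "core_q": "EF",  "core_q_na": "EF"},
    "VPN Video":               {"dscp": 34, "access_q": "AF4", "exp": 4, "core_q": "AF4", "core_q_na": "AF2"},
    "VPN Business Latency":    {"dscp": 26, "access_q": "AF3", "exp": 3, "core_q": "AF3", "core_q_na": "AF2"},
    "VPN Business Throughput": {"dscp": 16, "access_q": "AF2", "exp": 2, "core_q": "AF2", "core_q_na": "AF2"},
    "VPN Standard":            {"dscp": 0,  "access_q": "DF",  "exp": 0, "core_q": "DF",  "core_q_na": "DF"},
}

def core_marking(cos: str, north_america: bool = False) -> tuple:
    """Return (EXP value, core queue) for a class of service."""
    entry = COS_MAP[cos]
    return entry["exp"], entry["core_q_na"] if north_america else entry["core_q"]

print(core_marking("VPN Video"))                      # (4, 'AF4')
print(core_marking("VPN Video", north_america=True))  # (4, 'AF2')
```

The second call shows the North American aggregation: VPN Video keeps EXP=4 but is scheduled in the AF2 queue there.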
A similar consideration is discussed in the "mPE Router Ingress Policy" section in Chapter 4; it led Telecom Kingland to select DSCP values that directly map by default to the appropriate EXP scheme.
We will now review in detail the SLA offered by Globenet, the QoS designs used in various parts of the core network, and the QoS designs used on the edge to achieve the SLA commitments.
VPN and Internet SLA
Globenet offers a very rich QoS service to ensure optimum performance for its end customers. First, Globenet includes POP-to-POP SLA commitments for each of its five classes of service in the customer contract. Globenet also offers a "consultative" QoS design service that involves investigating its customers' applications, their respective requirements against the stated business objectives, and the operation of the current infrastructure (using specialized network analysis tools). This leads to recommendations on the optimum use of Globenet's QoS services for the customer, such as site access rates, the ratio across the five CoSs for each site, and the mapping of applications to CoSs. Finally, on request, Globenet also offers site-to-site SLA commitments for each CoS.
Table 5-5 provides sample POP-to-POP commitments for each CoS. Commitments are listed for each VPN CoS in the order VPN Voice, VPN Video, VPN Business Latency, VPN Business Throughput, and VPN Standard. They are specified in terms of one-way delay, jitter, and packet loss. Delay and jitter are expressed in milliseconds, and packet loss is expressed as a percentage of total transmitted packets for that CoS. (A dash [-] indicates that the particular field is not applicable.)
| POPs | CoS | Core Europe[1] | Sweden | U.S. East Coast[2] | U.S. West Coast[3] | Tokyo | Hong Kong | Sydney |
|---|---|---|---|---|---|---|---|---|
| Core Europe[1] | VPN Voice | 20/10/0.1 | 30/20/0.1 | 65/20/0.2 | 90/30/0.2 | 100/30/0.2 | 100/30/0.2 | 170/40/0.2 |
| | VPN Video | 30/-/0.1 | 40/-/0.1 | 80/-/0.2 | 105/-/0.2 | 120/-/0.2 | 120/-/0.2 | 190/-/0.2 |
| | VPN Business Latency | 30/-/0.3 | 40/-/0.3 | 80/-/0.5 | 105/-/0.5 | 120/-/0.5 | 120/-/0.5 | 190/-/0.5 |
| | VPN Business Throughput | 40/-/0.1 | 50/-/0.1 | 90/-/0.2 | 115/-/0.2 | 140/-/0.2 | 140/-/0.2 | 210/-/0.2 |
| | VPN Standard | 50/-/0.5 | 70/-/0.5 | 105/-/1 | 135/-/1 | 160/-/1 | 160/-/1 | 250/-/1 |
| Sweden | VPN Voice | 30/20/0.1 | - | 75/30/0.2 | 100/30/0.2 | 110/30/0.2 | 110/30/0.2 | 180/40/0.2 |
| | VPN Video | 40/-/0.1 | - | 90/-/0.2 | 115/-/0.2 | 130/-/0.2 | 130/-/0.2 | 200/-/0.2 |
| | VPN Business Latency | 40/-/0.3 | - | 90/-/0.5 | 115/-/0.5 | 130/-/0.5 | 130/-/0.5 | 200/-/0.5 |
| | VPN Business Throughput | 50/-/0.1 | - | 100/-/0.2 | 125/-/0.2 | 150/-/0.2 | 150/-/0.2 | 220/-/0.2 |
| | VPN Standard | 70/-/0.5 | - | 125/-/1 | 155/-/1 | 180/-/1 | 180/-/1 | 270/-/1 |
| U.S. East Coast[2] | VPN Voice | 65/20/0.2 | 75/30/0.2 | 25/10/0.1 | 45/20/0.1 | 110/30/0.2 | 110/30/0.2 | 180/40/0.2 |
| | VPN Video | 80/-/0.2 | 90/-/0.2 | 35/-/0.1 | 55/-/0.1 | 125/-/0.2 | 125/-/0.2 | 195/-/0.2 |
| | VPN Business Latency | 80/-/0.5 | 90/-/0.5 | 35/-/0.1 | 55/-/0.1 | 125/-/0.5 | 125/-/0.5 | 195/-/0.5 |
| | VPN Business Throughput | 90/-/0.2 | 100/-/0.2 | 35/-/0.1 | 55/-/0.1 | 145/-/0.2 | 145/-/0.2 | 215/-/0.2 |
| | VPN Standard | 105/-/1 | 125/-/1 | 45/-/0.5 | 65/-/0.5 | 165/-/1 | 165/-/1 | 255/-/1 |
| U.S. West Coast[3] | VPN Voice | 90/30/0.2 | 100/30/0.2 | 45/20/0.1 | 25/10/0.1 | 90/30/0.2 | 90/30/0.2 | 160/40/0.2 |
| | VPN Video | 105/-/0.2 | 115/-/0.2 | 55/-/0.1 | 35/-/0.1 | 105/-/0.2 | 105/-/0.2 | 175/-/0.2 |
| | VPN Business Latency | 105/-/0.5 | 115/-/0.5 | 55/-/0.1 | 35/-/0.1 | 105/-/0.5 | 105/-/0.5 | 175/-/0.5 |
| | VPN Business Throughput | 115/-/0.2 | 125/-/0.2 | 55/-/0.1 | 35/-/0.1 | 125/-/0.2 | 125/-/0.2 | 195/-/0.2 |
| | VPN Standard | 135/-/1 | 155/-/1 | 65/-/0.5 | 45/-/0.5 | 145/-/1 | 145/-/1 | 235/-/1 |
| Tokyo | VPN Voice | 100/30/0.2 | 110/30/0.2 | 110/30/0.2 | 90/30/0.2 | - | 40/20/0.1 | 85/30/0.2 |
| | VPN Video | 120/-/0.2 | 130/-/0.2 | 125/-/0.2 | 105/-/0.2 | - | 50/-/0.1 | 100/-/0.2 |
| | VPN Business Latency | 120/-/0.5 | 130/-/0.5 | 125/-/0.5 | 105/-/0.5 | - | 50/-/0.3 | 100/-/0.5 |
| | VPN Business Throughput | 140/-/0.2 | 150/-/0.2 | 145/-/0.2 | 125/-/0.2 | - | 60/-/0.1 | 110/-/0.2 |
| | VPN Standard | 160/-/1 | 180/-/1 | 165/-/1 | 145/-/1 | - | 80/-/0.5 | 130/-/1 |
| Hong Kong | VPN Voice | 100/30/0.2 | 110/30/0.2 | 110/30/0.2 | 90/30/0.2 | 40/20/0.1 | - | 85/30/0.2 |
| | VPN Video | 120/-/0.2 | 130/-/0.2 | 125/-/0.2 | 105/-/0.2 | 50/-/0.1 | - | 100/-/0.2 |
| | VPN Business Latency | 120/-/0.5 | 130/-/0.5 | 125/-/0.5 | 105/-/0.5 | 50/-/0.3 | - | 100/-/0.5 |
| | VPN Business Throughput | 140/-/0.2 | 150/-/0.2 | 145/-/0.2 | 125/-/0.2 | 60/-/0.1 | - | 110/-/0.2 |
| | VPN Standard | 160/-/1 | 180/-/1 | 165/-/1 | 145/-/1 | 80/-/0.5 | - | 130/-/1 |
| Sydney | VPN Voice | 170/40/0.2 | 180/40/0.2 | 180/40/0.2 | 160/40/0.2 | 85/30/0.2 | 85/30/0.2 | - |
| | VPN Video | 190/-/0.2 | 200/-/0.2 | 195/-/0.2 | 175/-/0.2 | 100/-/0.2 | 100/-/0.2 | - |
| | VPN Business Latency | 190/-/0.5 | 200/-/0.5 | 195/-/0.5 | 175/-/0.5 | 100/-/0.5 | 100/-/0.5 | - |
| | VPN Business Throughput | 210/-/0.2 | 220/-/0.2 | 215/-/0.2 | 195/-/0.2 | 110/-/0.2 | 110/-/0.2 | - |
| | VPN Standard | 250/-/1 | 270/-/1 | 255/-/1 | 235/-/1 | 130/-/1 | 130/-/1 | - |
[1] Core Europe consists of London, Frankfurt, and Paris.
[2] U.S. East Coast consists of New York and Washington.
[3] U.S. West Coast consists of Seattle, San Jose, and Los Angeles.
In general, the commitments are given from one POP to another POP. For example, you can see from Table 5-5 that Globenet commits to a one-way delay of 85 ms, a jitter of 30 ms, and a packet loss of 0.2 percent for the VPN Voice CoS between the Tokyo POP and the Sydney POP.
However, in the cases where a few POPs are meshed at very high speed, Globenet bundles this set of POPs from an SLA viewpoint (such as the Paris, London, and Frankfurt POPs bundled as "Core Europe"). This provides the following:
SLA commitments applicable between any two POPs within this set of POPs
SLA commitments between that set of POPs and other POPs (or other sets of POPs)
This reduces the number of POP-to-POP combinations that would otherwise have to be expressed. For example, for the VPN Voice CoS, Globenet commits to a one-way delay of 100 ms, a jitter of 30 ms, and a packet loss of 0.2 percent for traffic from any POP in Core Europe (Paris, London, or Frankfurt) to the Tokyo POP.
The SLA commitments for Internet access services are the same as those for the VPN Standard CoS. However, these commitments apply only to traffic exchanged between two sites that both subscribe to the Globenet Internet access service (traffic that travels over the Globenet network end to end). Traffic exchanged between a site using the Globenet Internet access service and the rest of the Internet is not within the scope of the SLA commitments. This traffic transits over other Internet backbones, which are outside Globenet's control.
Because propagation delays are very significant compared to the tight commitments Globenet offers, because the level of meshing is low in some parts of the world, and because some links are low-speed (so that queuing delay can be significant even with short queue occupancies), the POP-to-POP SLA commitments that Globenet can achieve depend heavily on several aspects of the current core network:
The link speeds: Upgrading a regional link from 2 Mbps to 34 Mbps or from 34 Mbps to 155 Mbps noticeably affects the queuing delay experienced by the various CoSs at the corresponding hop.
The core topology and underlying infrastructure path: If traffic toward a POP needs to transit via a location away from the shortest geographic route, or if there is direct connectivity to that POP (perhaps made possible by a new submarine cable route), the QoS commitments are affected because of the different propagation delays (as well as the impact of a potential additional hop).
Because of this significant dependency on actual underlying core link topology and speed, Globenet reserves the right to update the SLA commitments on a monthly basis. Note, however, that such updates generally improve the commitments as a result of upgrades in the core.
When the customer requests site-to-site commitments (CE router-to-CE router), Globenet establishes them by combining the following for each CoS and for each targeted pair of sites:
The relevant POP-to-POP commitments listed in Table 5-5
The QoS performance for the relevant access links from CE router to PE router and from PE router to CE router
In turn, Globenet has determined the QoS performance on the access links for each access technology (leased line, Frame Relay, ATM, and so on) and each access speed. Globenet used a methodology similar to the one followed by Telecom Kingland to support its custom SLAs for the access links. See the section "Layer 3 MPLS VPN and Internet SLA" in Chapter 4.
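Composing POP-to-POP figures with the two access-link contributions can be sketched arithmetically. The combining rules below (one-way delays and jitter bounds add; independent loss probabilities compound) are our simplifying assumption, not Globenet's published method, and the figures for the access links are hypothetical:

```python
def compose_sla(pop_to_pop, access_a, access_b):
    """Combine (delay_ms, jitter_ms, loss_pct) triples into a site-to-site
    figure. Assumption: delays and jitter bounds add, and independent loss
    probabilities combine as 1 - (1-p1)(1-p2)(1-p3)."""
    delay = sum(x[0] for x in (pop_to_pop, access_a, access_b))
    jitter = sum(x[1] for x in (pop_to_pop, access_a, access_b))
    survive = 1.0
    for x in (pop_to_pop, access_a, access_b):
        survive *= 1 - x[2] / 100
    return delay, jitter, round((1 - survive) * 100, 3)

# VPN Voice, Tokyo POP to Sydney POP (85/30/0.2 from Table 5-5), plus a
# hypothetical 5 ms / 2 ms / 0.05% contribution per access link:
print(compose_sla((85, 30, 0.2), (5, 2, 0.05), (5, 2, 0.05)))  # (95, 34, 0.3)
```

The result illustrates why site-to-site commitments are necessarily looser than the POP-to-POP figures they build on.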
Generally in accordance with the recommendations established jointly with Globenet as part of the "consultative" QoS design service, the customer selects the proportion of each CoS it wants on the access link for each site. This proportion applies to both directions of the CE-PE link. The customer does not have to use all five CoSs; it may use any subset of them for a given site. In the default service offering, Globenet imposes some constraints on the selected proportions across CoSs as deemed necessary to meet the corresponding QoS objectives. For example, the VPN Voice CoS is limited to 30 percent of the access link. Similarly, the VPN Voice and VPN Video CoSs are collectively limited to 60 percent of the access link. A minimum of 4 percent of access speed must be selected for the VPN Business Throughput CoS because it is used to carry routing and management traffic.
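These proportion constraints can be checked mechanically. The thresholds below (30 percent Voice, 60 percent Voice plus Video, 4 percent minimum Business Throughput) are the ones stated in the text; the function itself is our illustration:

```python
def validate_cos_split(split):
    """Check a per-site CoS split (percent of access-link speed) against
    Globenet's stated default constraints. Returns a list of violations."""
    errors = []
    voice = split.get("VPN Voice", 0)
    video = split.get("VPN Video", 0)
    if voice > 30:
        errors.append("VPN Voice exceeds 30% of the access link")
    if voice + video > 60:
        errors.append("VPN Voice + VPN Video exceed 60% of the access link")
    if split.get("VPN Business Throughput", 0) < 4:
        errors.append("VPN Business Throughput below the 4% minimum "
                      "(needed for routing and management traffic)")
    if sum(split.values()) > 100:
        errors.append("CoS proportions exceed 100% of the access link")
    return errors

print(validate_cos_split({"VPN Voice": 35, "VPN Video": 30,
                          "VPN Business Throughput": 2}))
```

The example split violates all three stated constraints, so three messages are returned.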
The VPN Voice CoS is designed to transport IP telephony services. It provides the very low latency, jitter, and loss required by such applications. To ensure that such QoS objectives can be met end-to-end, traffic sent into the VPN Voice CoS is strictly policed on the access against the contracted rate, and the excess is dropped.
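Strict policing with drop of excess is commonly implemented as a single-rate token bucket. The sketch below is illustrative only (the rate and burst parameters are hypothetical, not Globenet's contracted values):

```python
class Policer:
    """Single-rate token-bucket policer: conforming packets pass; excess
    is dropped, as for the VPN Voice CoS on the access (no demotion)."""

    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8.0   # token refill rate, in bytes per second
        self.burst = burst_bytes     # bucket depth
        self.tokens = burst_bytes
        self.last = 0.0

    def conform(self, now, size_bytes):
        # Refill tokens for the elapsed time, capped at the bucket depth.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if size_bytes <= self.tokens:
            self.tokens -= size_bytes
            return True
        return False  # out of contract: dropped

p = Policer(rate_bps=64_000, burst_bytes=1_600)
# Two back-to-back 1000-byte packets at t=0: only the first conforms.
print(p.conform(0.0, 1000), p.conform(0.0, 1000))  # True False
```

After the bucket refills (for example, one second later at 64 kbps), subsequent packets conform again.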
The VPN Video CoS targets videoconferencing applications. Although such applications have requirements for controlled delay, Globenet decided to handle them as a CoS separate from the VPN Voice CoS for a number of reasons:
Video applications generally can tolerate higher levels of latency and jitter than telephony applications. Even when lip synchronization techniques are used on the destination system, an under-run of video packets (for example, packets from the video stream don't arrive in time for replay because of a sudden delay increase in the core) is more tolerable than an under-run of audio packets.
Video applications use variable-size packets, including long packets. Carrying such long packets in the same queue as the voice traffic (which uses only short packets) would degrade the delay and jitter commitments that can be provided to voice traffic.
Video applications use higher rates than voice and transmit at variable rates. Again, handling such traffic in the same queue as the voice traffic would jeopardize the voice traffic.
The VPN Video CoS is engineered to provide controlled rate and delay. To better control the delay in the VPN Video CoS, Globenet decided to drop, at the access into the network, the traffic sent to the VPN Video CoS in excess of the contracted rate (instead of demoting it). Hence, the end customer needs to keep the volume of videoconferencing traffic within the contracted rate for the VPN Video CoS. This may be achieved as part of the negotiation (using protocols such as H.323 and SIP) that takes place at the beginning of a multimedia session. In addition to authorizing the user/endpoint to establish a videoconference, the admission procedure may also check that the new video stream fits within the engineered capacity of the VPN Video CoS (which might be configured by the operator as one of the multimedia system parameters). Conversely, the end customer may adjust the contracted rate for the VPN Video CoS to satisfy the expected demand for the site.
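The session-admission step described above can be sketched as a toy call-admission check. The engineered-capacity parameter stands in for the operator-configured VPN Video rate; all names and figures are illustrative:

```python
class VideoCAC:
    """Toy call-admission control: admit a new video stream only if the
    aggregate of admitted streams stays within the engineered VPN Video
    capacity for the site."""

    def __init__(self, engineered_kbps):
        self.capacity = engineered_kbps
        self.admitted = {}  # session id -> rate in kbps

    def admit(self, session_id, rate_kbps):
        if sum(self.admitted.values()) + rate_kbps > self.capacity:
            return False  # would exceed the contracted VPN Video rate
        self.admitted[session_id] = rate_kbps
        return True

    def release(self, session_id):
        self.admitted.pop(session_id, None)

cac = VideoCAC(engineered_kbps=2000)
print(cac.admit("conf-1", 768))  # True
print(cac.admit("conf-2", 768))  # True
print(cac.admit("conf-3", 768))  # False: 2304 kbps would exceed 2000 kbps
```

When a conference ends and its rate is released, the rejected session would fit again, mirroring how the multimedia system tracks the engineered capacity.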
In the future, Globenet will investigate enhancing the VPN Video CoS to accept some out-of-contract traffic. This traffic would then be demoted and subject to selective random drop in the access and in the core (using Weighted Random Early Detection (WRED)). This may play well with some videoconferencing end systems that can dynamically adjust their encoding rate based on the experienced packet loss. (For example, a loss of 1 percent results in the end system''s reverting to the next-lower encoding rate.) However, this makes it more difficult to control the delay experienced by in-contract video traffic. The fraction of out-of-contract traffic that is not dropped by WRED goes into the same queue as the in-contract traffic and hence increases its delay and jitter. Note that scheduling the out-of-contract VPN video traffic in a different queue than the in-contract VPN video traffic generally is not an acceptable alternative because it would result in permanent reordering of packets in the video stream.
The VPN Business Latency CoS addresses requirements of responsive applications, typically client/server-based, with fairly low throughput but a requirement for controlled response time. A typical example is a mission-critical interactive application (such as a reservation or ordering system) in which a user clicks (or enters carriage returns) and then waits for the server response. Systems Network Architecture (SNA) terminal-to-host transactions; Enterprise Resource Planning applications such as SAP, Oracle, PeopleSoft, and Siebel; and financial wire transfers and credit card transactions are examples of applications that may be served by the VPN Business Latency CoS.
Traffic sent to the VPN Business Latency CoS is policed on the access against the contracted rate, and the excess is dropped. Because the corresponding applications generally do not transmit at a high rate, it is typically fairly easy to select a reasonable contracted rate for the CoS that is never exceeded in practice by actual traffic. Still, as with the VPN Video CoS, Globenet will investigate in the future the opportunity to enhance the VPN Business Latency CoS to accept some out-of-contract traffic. However, an issue that Globenet is potentially facing for support of out-of-contract VPN video, out-of-contract VPN latency, and other additional QoS services is the shortage of EXP values. Globenet already uses seven of the eight available values.
The VPN Business Throughput CoS is configured to satisfy applications that require high throughput. More specifically, it optimizes throughput for long-lived TCP flows. Store and forward, file transfer, Lotus Notes, and Microsoft Exchange are examples of applications that can operate well over the VPN Business Throughput CoS. Although reasonable delay is provided for this CoS, the prime objective is to offer low loss to the in-contract traffic because this is what drives the throughput actually achieved by TCP. Out-of-contract traffic is accepted in the VPN Business Throughput CoS on the access but is demoted (in other words, marked differently) and is subject to more aggressive selective random drop than the in-contract traffic. Although accepting out-of-contract traffic tends to increase the latency for the in-contract traffic (which is not so important for the targeted applications), it allows smoother adjustment of TCP streams to the available bandwidth and ultimately better overall throughput.
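The claim that loss, rather than delay, drives long-lived TCP throughput can be quantified with the classic Mathis approximation, throughput ≈ MSS / (RTT · √p). The code below is an illustration of that formula; the MSS, RTT, and loss figures are hypothetical:

```python
import math

def tcp_throughput_bps(mss_bytes, rtt_s, loss):
    """Mathis et al. approximation for long-lived TCP throughput:
    rate ~ MSS / (RTT * sqrt(p)), with p the packet loss probability."""
    return mss_bytes * 8 / (rtt_s * math.sqrt(loss))

# Same 100-ms RTT path: in-contract loss (0.1%) vs aggressive drop (1%).
for p in (0.001, 0.01):
    print(f"loss {p:.1%}: {tcp_throughput_bps(1460, 0.100, p) / 1e6:.1f} Mbit/s")
```

A tenfold increase in loss cuts the achievable rate by a factor of √10, which is why the prime objective for this CoS is low loss for in-contract traffic.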
The VPN Standard CoS is used for all the applications not classified into the other CoSs and that can be appropriately served with reasonable latency and loss commitments.
QoS Design in the Core Network in the EMEA, AsiaPac, and South America Regions
Globenet manages five separate queues in the core for the following reasons:
Because of constrained bandwidth in the core, particularly in the Asia Pacific and South America regions
Because of some relatively low-speed links that involve higher delay and jitter for a given queue occupancy
Because of high-propagation delays of long-distance links
Because of the need for very fine optimization to meet the tight SLA requirements presented earlier
Figure 5-30 illustrates the mapping of CoSs into these five queues.
Figure 5-30. Usage of the Five Queues in EMEA, AsiaPac, and South America
For the EF traffic, Globenet uses DS-TE (discussed in Chapter 2, "Technology Primer: Quality of Service, Traffic Engineering, and Network Recovery") to transport the EF traffic over separate tunnels that are "constraint-based routed" to keep the EF traffic load under 30 percent of link speed on any link. We call these tunnels the EF tunnels. This stringent 30 percent bandwidth constraint is deemed appropriate by Globenet to bound the delay, jitter, and loss through the core to the levels required by the traffic transported in the EF queue (such as the ATM pseudowires as well as VPN Voice).
Separate capacity planning for EF traffic, and validation through a network simulation tool that the network has enough capacity so that the EF load remains below 20 percent of link capacity in normal operation (and hence is routed by TE along its shortest path). Also, validation that TE can route the EF load within the 30 percent limit (on the shortest path or a non-shortest path) during single-failure conditions.
Conditional policing to 40 percent of the link rate. Although Globenet does not expect TE to route more than 30 percent of EF traffic, even in (single) failure situations, the conditional policer is configured at 40 percent (instead of 30 percent). This ensures that EF traffic is not dropped unnecessarily in transient periods where the actual load can temporarily exceed the engineered levels (such transients are discussed in the "MPLS Traffic Engineering Design" section). These periods can also occur when tunnels carrying EF traffic have been fast rerouted because of a failure but have not yet been rerouted by their headend and thus have not yet been subjected to proper TE admission control. Clearly, Globenet expects the actual EF load to always remain below 40 percent on core links, so it does not expect this conditional policer to come into action. It is configured as a safety precaution, in case of extraordinary unplanned situations, to prevent the EF traffic from hogging most, or all, of the bandwidth and thus deteriorating the QoS of the other classes (such as VPN Video, VPN Business Latency, or VPN Business Throughput) or even potentially affecting network stability if the routing and control traffic can no longer be transported appropriately.
For the rest of the traffic (which we call the "non-EF traffic"), Globenet combines the following mechanisms:
Use of a separate DiffServ queue for each CoS to ensure isolation and appropriate levels of QoS
Use of DS-TE to transport all the non-EF traffic together (but separately from the EF traffic) over tunnels that are "constraint-based routed." The sum of all tunnels admitted on a link is limited to 100 percent of link capacity. (This is adjusted by an overbooking factor on higher-speed links, as discussed in detail in the section "MPLS Traffic Engineering Design.") We call these tunnels the non-EF tunnels.
Aggregate capacity planning across EF and non-EF traffic and validation through a network simulation tool that the network has enough capacity so that the following are true:
- Both EF and non-EF traffic is routed on its shortest path by DS-TE and the total aggregate load remains below 80 percent of link capacity in the absence of failure. (Sometimes the aggregate load can reach 100 percent of the link even in the absence of failure, resulting in a small percentage of tunnels being routed on their non-shortest path even in the absence of failure.)
- Both EF and non-EF traffic can be routed (on its shortest path or on a non-shortest path) by DS-TE within the bandwidth limits just specified (30 percent for EF tunnels and 100 percent plus overbooking for non-EF tunnels) during single-failure conditions.
Each CoS is allocated individual scheduling parameters based on its respective QoS requirements and expected traffic load.
With this approach, you can observe the following:
Under normal conditions, Globenet operates below congestion, but possibly at fairly high utilization at peak time, such as 80 percent link utilization (or even nearing 100 percent for the few exceptions just mentioned). Operating at high utilization while maintaining the SLA commitments under failure requires efficient network recovery mechanisms; these are discussed in the "Network Recovery Design" section later in this chapter.
As detailed later, the default (DF) queue (carrying VPN Standard CoS and Internet traffic) is allocated only a small fraction of the bandwidth. This is to ensure that this CoS will suffer most during potential congestion periods, hence protecting all the other more-important CoSs.
You see that this approach, which handles the EF traffic differently from all the other CoSs, clearly calls for differentiated admission control of EF tunnels and non-EF tunnels, whereby
The EF tunnels are limited to some EF-specific engineered levels (30 percent in the case of Globenet).
All the tunnels (the non-EF tunnels and the EF tunnels) are collectively limited to some aggregate engineered levels (100 percent plus overbooking in the case of Globenet).
This matches perfectly with the Russian Dolls Model (RDM) of DS-TE (discussed in Chapter 2), which limits Class Type 1 (CT1) to Bandwidth Constraint BC1 and then limits Class Type 0 (CT0) and Class Type 1 together to Bandwidth Constraint BC0.
RDM also allows Globenet to achieve maximum sharing of bandwidth across EF tunnels and non-EF tunnels. If the EF tunnels currently are not reserving their full 30 percent, whatever is left over can effectively be reserved by the non-EF tunnels so that the link can be used up to 100 percent (plus overbooking). This avoids any capacity wastage.
Moreover, by using a higher TE preemption priority for the EF tunnels (CT1) than for the non-EF tunnels (CT0), the EF tunnels will always be able to reserve up to their full BC1 bandwidth should they need it, no matter how many non-EF tunnels have been established before (or will need to be established in the future). In other words, by using preemption priorities in conjunction with the RDM, Globenet can fully protect the EF tunnels from bandwidth starvation even if the EF tunnels (CT1) share a common bandwidth constraint BC0 with the non-EF tunnels (CT0). This is sometimes referred to as achieving "isolation" across Class Types.
Hence, Globenet elected to use the RDM with
EF tunnels belonging to Class Type 1 and using a higher preemption priority
Non-EF tunnels belonging to Class Type 0 and using a lower preemption priority
Another benefit of using the RDM with such a preemption policy is that the control plane bandwidth allocation of DS-TE matches very accurately DiffServ's data plane bandwidth allocation. DS-TE always allows the EF tunnels to reserve as much bandwidth as they need (up to their own bandwidth constraint, BC1=30 percent). The scheduler always grants the corresponding EF packets as much bandwidth as they need (up to the conditional policing rate) because they are scheduled in a strict priority queue. The non-EF tunnels always can reserve all the bandwidth from BC0 left unused by the EF tunnels. The scheduler effectively gives the queues carrying the non-EF packets (AF4, AF3, AF2, and DF queues) all the physical link bandwidth left unused by the EF queue. This ensures that no matter what proportion of EF tunnels and non-EF tunnels is currently established, the corresponding EF and non-EF traffic actually receives the corresponding proportion of scheduling resources. Thus, both types of traffic are protected from QoS degradation.
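The RDM bookkeeping described above can be sketched as follows. The constraints (BC1 = 30 percent for CT1, BC0 = 100 percent for CT0 plus CT1) are Globenet's; the class structure and the simplified preemption handling are our illustration:

```python
class RdmLink:
    """Russian Dolls Model bookkeeping on one link: CT1 (EF tunnels) is
    limited to BC1, CT0+CT1 together are limited to BC0, and CT1 uses a
    higher preemption priority than CT0."""

    def __init__(self, bc0=100.0, bc1=30.0):
        self.bc0, self.bc1 = bc0, bc1
        self.ct0 = 0.0  # bandwidth reserved by non-EF tunnels
        self.ct1 = 0.0  # bandwidth reserved by EF tunnels

    def admit_ct1(self, bw):
        if self.ct1 + bw > self.bc1:
            return False  # EF tunnels never exceed their own doll, BC1
        # CT1's higher priority lets it preempt CT0 reservations if the
        # aggregate BC0 constraint would otherwise be exceeded.
        overflow = self.ct0 + self.ct1 + bw - self.bc0
        if overflow > 0:
            self.ct0 -= overflow  # preempted CT0 tunnels reroute elsewhere
        self.ct1 += bw
        return True

    def admit_ct0(self, bw):
        if self.ct0 + self.ct1 + bw > self.bc0:
            return False  # aggregate constraint BC0
        self.ct0 += bw
        return True

link = RdmLink()
print(link.admit_ct0(85))  # True: non-EF tunnels use what EF leaves free
print(link.admit_ct1(25))  # True: EF preempts 10 units of CT0 (85 -> 75)
print(link.admit_ct1(10))  # False: would exceed BC1 = 30
```

The second call shows the "isolation" property: no matter how much CT0 reserved first, CT1 can always reach its full BC1.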
Figure 5-31 shows how the CoSs are mapped onto tunnels from Class Types CT0 and CT1. This figure also illustrates the strong separation of the control plane (path selection) and the data plane (scheduling) in DiffServ-aware MPLS TE. You can see that the path a packet follows is exclusively controlled by the TE tunnel into which it is encapsulated, while the scheduling of that packet is controlled by the packet's CoS marking (MPLS EXP bits). For example, while an Internet packet and a VPN Video packet going beyond P router P2 are encapsulated into the same non-EF tunnel (and hence follow the exact same path through the core), these packets are scheduled into different queues at every hop: the DF queue and the AF4 queue, respectively. Conversely, while a VPN Video packet going beyond P router P2 and a VPN Video packet going beyond P router P3 are encapsulated into different non-EF tunnels, on the considered hop (P router P1) these packets are scheduled in the same AF4 queue.
Figure 5-31. Usage of Five Queues and Two Class Types in EMEA, AsiaPac, and South America
An alternative would be to limit each Class Type by an independent bandwidth constraint, as in the Maximum Allocation Model (discussed in Chapter 2). But in Globenet's environment, this would lead to either some bandwidth wastage or some unacceptable congestion risks. For example, if the EF tunnels were limited to 30 percent of link capacity, and the non-EF tunnels were limited independently to 70 percent of link capacity, clearly the aggregate load would be kept below 100 percent so that congestion would be prevented. However, if the EF tunnels currently have only 10 percent of link capacity reserved, the non-EF tunnels would not be able to reserve more than their 70 percent, unnecessarily leaving 20 percent of link capacity wasted. Conversely, if the EF tunnels were limited to 30 percent of link capacity and the non-EF tunnels to 90 percent of link capacity, clearly the non-EF tunnels could reserve up to 90 percent so that no capacity is wasted when the EF tunnels actually use only 10 percent. However, the non-EF tunnels could also reserve up to 90 percent of link capacity even if the EF tunnels have indeed reserved their full 30 percent. This would result in an aggregate load of 120 percent on the link and a level of congestion that is unacceptable to Globenet. Hence, Globenet rejected a model with independent bandwidth constraints.
In the future, Globenet may investigate the use of a third Class Type with the RDM. For example, this may be used to create a third mesh of TE tunnels to separately carry the interactive traffic (VPN Video CoS and VPN Business Latency CoS), which also has delay constraints, albeit less stringent than those of the VPN Voice CoS. The tunnels in that third mesh are called the interactive tunnels. In that case, RDM would be used to
Limit the EF tunnels to 30 percent
Limit the EF tunnels and interactive tunnels together to a limit specifically engineered for the VPN Video and VPN Business Latency traffic (say 50 to 60 percent)
Limit the EF tunnels, interactive tunnels, and non-EF-noninteractive tunnels together to 100 percent (plus overbooking)
Globenet uses a number of satellite links in South America. They are attractive to Globenet because they provide additional bandwidth at lower cost and allow easier long-distance connections within the region, because their cost is distance-independent. However, these links involve a propagation delay on the order of 300 ms (the time the signal takes to travel to the satellite and back to Earth), which makes them unsuitable for transporting voice traffic. This is another important application of DS-TE for Globenet: in South America, it uses DS-TE to make sure that the VPN Voice CoS is never routed onto satellite links.
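Keeping voice off satellite links amounts to running the constraint-based path computation over a pruned topology. A minimal sketch (the topology, link costs, and the attribute set standing in for TE link affinities are all hypothetical):

```python
import heapq

def cspf(links, src, dst, exclude_attr=None):
    """Least-cost path over links, pruning any link carrying the excluded
    attribute (e.g. 'satellite' when computing paths for EF tunnels)."""
    graph = {}
    for a, b, cost, attrs in links:
        if exclude_attr in attrs:
            continue  # pruned: this link may not carry the tunnel
        graph.setdefault(a, []).append((b, cost))
        graph.setdefault(b, []).append((a, cost))
    dist, heap = {src: 0}, [(0, src, [src])]
    while heap:
        d, node, path = heapq.heappop(heap)
        if node == dst:
            return d, path
        for nxt, cost in graph.get(node, []):
            if d + cost < dist.get(nxt, float("inf")):
                dist[nxt] = d + cost
                heapq.heappush(heap, (d + cost, nxt, path + [nxt]))
    return None

# Hypothetical South American topology: the direct hop is a satellite link.
links = [("SAO", "BUE", 1, {"satellite"}),
         ("SAO", "SCL", 2, set()),
         ("SCL", "BUE", 2, set())]
print(cspf(links, "SAO", "BUE"))                            # satellite hop wins on cost
print(cspf(links, "SAO", "BUE", exclude_attr="satellite"))  # voice takes the terrestrial detour
```

Non-voice tunnels still see the cheap satellite hop; voice tunnels are computed on the pruned graph and take the longer terrestrial path.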
This is discussed further in the "MPLS Traffic Engineering Design" section.
Cisco''s Modular QoS CLI (MQC) supports independent control of three scheduling attributes for each queue:
Minimum bandwidth: This attribute defines the minimum bandwidth that the scheduler guarantees to the queue.
Excess bandwidth: This attribute defines how to allocate bandwidth to a queue beyond its minimum bandwidth (which may be 0 when a minimum bandwidth is not configured). The excess bandwidth is expressed as a percentage of the bandwidth not allocated to any queue (or allocated to a queue but currently left unused).
Priority: This attribute specifies that any offered load in this queue is serviced ahead of all other queues (up to the optionally configured policing bandwidth on the priority queue).
In a very similar way, Globenet felt that some of its CoSs required absolute allocation of bandwidth. (The amount of bandwidth allocated to the queue must reflect very closely the expected peak traffic based on the contracted rates.) This was necessary to meet the delay/jitter/bandwidth requirements associated with the CoSs:
EF traffic (VPN Voice as well as ATM pseudowires and virtual IP leased line in the future)
AF4 traffic (VPN Video)
AF3 traffic (VPN Business Latency)
The other CoSs needed only relative bandwidth allocation (the amount of bandwidth allocated to the queue must primarily reflect a relative level of service versus some other classes):
AF2 traffic (VPN Business Throughput)
DF traffic (VPN Standard as well as Internet)
Consequently, on the queues corresponding to the CoSs that need only relative bandwidth allocation, Globenet elected not to configure a minimum bandwidth. Instead, it configured an excess bandwidth. This way, Globenet would not need to modify the configuration for these queues if it decided to modify the minimum bandwidth of a CoS requiring absolute bandwidth allocation. This decision also would be advantageous if Globenet decided, in the future, to introduce an additional CoS with its own absolute bandwidth allocation requirement.
Example 5-10 illustrates the core QoS egress service policy for an OC-3 Packet over SONET (PoS) link. Globenet configured the following:
A conditional policing bandwidth of 40 percent of link bandwidth on the EF queue (which is configured with the priority attribute).
A minimum bandwidth on the AF4 queue of 20 percent of link bandwidth.
A minimum bandwidth on the AF3 queue of 5 percent of link bandwidth.
An excess bandwidth on the AF2 queue of 83 percent of the remaining bandwidth.
An excess bandwidth on the DF queue of 17 percent of the remaining bandwidth. Globenet selected the percentage values of 83 percent and 17 percent to allocate roughly five times more excess bandwidth to the AF2 queue than to the DF queue.
This bandwidth allocation is illustrated in Figure 5-32.
Figure 5-32. Bandwidth Allocation in the Core in EMEA, Asia Pacific, and South America

Example 5-10. Core QoS Egress Service Policy on an OC-3 Link in EMEA and AsiaPac
!
class-map match-any class-RealTime
match mpls exp 5
class-map match-any class-Video
match mpls exp 4
class-map match-any class-Latency
match mpls exp 3
class-map match-any class-Throughput
match mpls exp 2
match mpls exp 1
match dscp 48
match mpls exp 6
!
policy-map Core-QoS-OC3-policy
class class-RealTime
priority percent 40
queue-limit 3060 packets
class class-Video
bandwidth percent 20
queue-limit 3875 packets
class class-Latency
bandwidth percent 5
queue-limit 3875 packets
class class-Throughput
bandwidth remaining percent 83
random-detect precedence-based
random-detect exponential-weighting-constant 9
random-detect precedence 6 214 1425 1
random-detect precedence 2 214 1425 1
random-detect precedence 1 72 214 1
queue-limit 3875 packets
class class-default
bandwidth remaining percent 17
random-detect
random-detect exponential-weighting-constant 7
random-detect 45 298 1
queue-limit 3875 packets
!
int pos0/0
service-policy output Core-QoS-OC3-policy
Because the VPN Business Throughput CoS is expected to carry a majority of TCP traffic, Random Early Detection (RED) is applied to the AF2 queue for optimum interaction with TCP flow-control mechanisms.
Globenet decided to handle control traffic, as well as management traffic, in the same queue as the VPN Business Throughput CoS (AF2 queue) because it offers appropriate transport commitments. The control traffic is identified based on the following:
DSCP value 48 (which corresponds to precedence 6). Cisco routers automatically set the DSCP to this value when generating routing packets (OSPF, BGP) as well as other essential control traffic (LDP, RSVP-TE, Telnet, and so on).
EXP value 6 for routing packets that are MPLS encapsulated over the core, such as BGP packets.
The management traffic originated by the Management System toward any network element is marked with DSCP=16 on the network management system side. Hence, it is marked with the appropriate EXP=2 value when encapsulated in MPLS and can be classified based on this criterion. For the traffic originated by the P router or PE router, the device must be configured to set the DSCP to the same DSCP=16 value (which then gets mapped to EXP=2). Cisco IOS supports the concept of local policy. This allows classification and marking to be applied to locally generated traffic (while usual QoS service policies are applied to logical or physical interfaces). As shown in Example 5-11, Globenet uses such a local policy to identify traffic going to the management system and marks it with DSCP=16.
Example 5-11. Core QoS Local Policy Template for Marking Management Traffic in EMEA, AsiaPac, and South America
!
!local route map (applies on locally generated traffic)
ip local policy route-map LocalTraffic
!
!identifies Management Traffic
access-list 101 permit ip host loopback-address management-subnet mask
!
route-map LocalTraffic permit 10
match ip address 101
set ip dscp 16
!
Note
Globenet uses IS-IS as its IGP in the core. Because IS-IS is not encapsulated in IP, the DiffServ mechanisms cannot be directly applied for preferential treatment of IS-IS packets, as is done for BGP, for example (or OSPF when it is used). For example, IS-IS traffic obviously is not captured by the classification criteria (match on IP DSCP=48 and MPLS EXP=6) used to classify routing traffic such as OSPF and BGP. To protect the IS-IS traffic, Globenet takes advantage of mechanisms supported by its routers specifically for locally generated traffic. For example, locally generated traffic identified as essential (such as some IS-IS messages) can bypass any dropping mechanism on egress. In some cases it can be scheduled into a dedicated queue, which operates in parallel to the DiffServ queues, and with a minimum bandwidth allocated to it to provide appropriate protection to that traffic.
Globenet needs to ensure that the out-of-contract traffic (accepted in the VPN Business Throughput CoS beyond the contracted rate) cannot steal significant resources in case of congestion in the AF2 queue. Therefore, it uses WRED inside the AF2 queue. More precisely, Globenet elected to
Apply a regular RED random drop profile to the important traffic (VPN Business Throughput in-contract as well as control and management traffic)
Apply a much more aggressive drop profile to the VPN Business Throughput out-of-contract traffic
To configure the RED regular drop profile, Globenet used the same formulas as Telecom Kingland that were presented in the "QoS Design in the Core Network" section in Chapter 4. Hence, for the important traffic, Globenet computed the RED parameters in the following way:
The exponential weighting constant n is such that
2^-n = 10 / B, where B = queue bandwidth / (MTU * 8)
(with MTU = 1500 bytes)
The minimum and maximum thresholds are set to 15 percent and 100 percent of the pipe size, respectively, where
pipe size = RTT * queue bandwidth / (MTU * 8)
The maximum drop probability is set to 1.
The AF2 queue is allocated 83 percent of the remaining bandwidth. Consider the following:
The EF queue is expected to carry at most 30 percent of link capacity in a steady situation. DS-TE is configured to route up to only 30 percent worth of EF traffic on any link (despite the fact that the conditional policer is configured to 40 percent of link capacity to cope with transient situations).
The AF4 queue is allocated 20 percent of link capacity.
The AF3 queue is allocated 5 percent of link capacity.
Thus, the normal service rate of the AF2 queue taken into account by Globenet for WRED fine-tuning is 83 percent of (100 - 30 - 20 - 5) percent, which is about 37 percent of link bandwidth. On an OC-3 link, this means the queue bandwidth of the AF2 queue to be taken into account for RED fine-tuning is 57 Mbps. With this queue bandwidth and assuming a round-trip time (RTT) of 300 ms, these formulas yield the following values, which appear in Example 5-10:
An exponential weighting constant of 9
A minimum threshold of 214
A maximum threshold of 1425
For the VPN Business Throughput out-of-contract traffic, Globenet elected to apply a maximum threshold equal to the minimum threshold of the in-contract traffic. Therefore, out-of-contract traffic is discarded very aggressively (if needed, to the point where 100 percent of the out-of-contract packets get dropped) before the in-contract traffic has to enter its own random drop mode. The minimum threshold is set to a third of that maximum threshold, which gives 72. The maximum drop probability is also set to 1.
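The fine-tuning above can be reproduced with a short calculation. This is an illustrative sketch rather than Globenet configuration; the rounding conventions (nearest integer for the weighting constant, rounding up for the thresholds) are assumptions inferred from the configured values.

```python
import math

MTU_BITS = 1500 * 8  # 1500-byte MTU, as assumed by Globenet

def red_profile(queue_bw_bps, rtt_s=0.3):
    """Regular RED drop profile: weighting constant n such that 2^-n = 10/B,
    thresholds at 15% and 100% of the pipe size (RTT * B)."""
    b = queue_bw_bps / MTU_BITS            # queue bandwidth in packets/sec
    n = round(math.log2(b / 10))           # exponential weighting constant
    pipe = rtt_s * b                       # pipe size in packets
    return n, math.ceil(0.15 * pipe), math.ceil(pipe)

# AF2 queue on OC-3 (~57 Mbps service rate): yields (9, 214, 1425)
af2 = red_profile(57e6)

# Out-of-contract profile: maximum threshold = in-contract minimum threshold,
# minimum threshold = a third of that, maximum drop probability = 1
out_max = af2[1]                           # 214
out_min = math.ceil(out_max / 3)           # 72
```

With the DF queue bandwidth of about 11.9 Mbps, the same function returns (7, 45, 298), matching the values in Example 5-10.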
Figure 5-33 illustrates these WRED drop profiles for the AF2 queue.
Figure 5-33. WRED Drop Profiles in the AF2 Queue in the Core
[View full size image]

Note
The WRED profile in the class-Throughput class is configured in Example 5-10 with the keyword precedence-based. This syntax indicates that WRED applies to the EXP field of MPLS packets (in addition, of course, to the Precedence field of IP packets).
Globenet also activated RED in the DF queue to smooth the adjustment of TCP flows (which are expected to be dominant in that queue because it carries Internet traffic as well as VPN Standard) to the available capacity in that queue. For fine-tuning of RED in the DF queue, Globenet also used the formulas detailed previously for the regular RED drop profile. Because the DF queue is allocated 17 percent of the remaining bandwidth, the normal service rate of the DF queue taken into account by Globenet for RED fine-tuning is 17 percent of (100 - 30 - 20 - 5) percent, which is 7.65 percent of link bandwidth. On an OC-3 link, this means a queue bandwidth of about 11.9 Mbps. This results in
An exponential weighting constant of 7
A minimum threshold of 45
A maximum threshold of 298
A maximum drop probability of 1
Globenet did not activate RED in the VPN Voice, VPN Video, or VPN Business Latency CoSs because those are not expected to carry a dominant proportion of TCP, or TCP-like, elastic traffic.
Finally, like Telecom Kingland, Globenet decided to place a limit on the instantaneous size of each queue. This avoids unexpected hogging of buffers by one queue and places a hard bound on the absolute worst delay and jitter through that hop.
For the EF queue, the queue limit is configured so that it corresponds to an absolute worst queuing delay through that hop of 30 ms for the real-time traffic. On an OC-3 link, where the EF queue is guaranteed a service rate of at least 40 percent of the link bandwidth, this corresponds to the queue limit of 3060 packets configured in Example 5-10 (assuming 76-byte voice packets). As explained in the "Layer 3 MPLS VPN Service Design" section, the Internet traffic normally is label-switched through the Globenet core. However, it could be forwarded natively as IP traffic in some exceptional situations where MPLS connectivity between PE routers is temporarily lost. In this case, the Internet traffic is still scheduled in the DF queue, even without any explicit classification on the DSCP of 0 in the QoS egress policy, because this traffic is naturally captured by the class default.
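The 30-ms worst-case bound on the EF queue can be checked against the 3060-packet queue limit configured in Example 5-10. This is an illustrative sketch: the 76-byte packet size is an assumption (the G.729-30 ms voice packet size used in the CE design later in this chapter), and the EF queue is assumed to drain at its 40 percent priority rate in the worst case.

```python
OC3_BPS = 155e6          # approximate OC-3 rate used for rough sizing
EF_SHARE = 0.40          # EF priority (conditional policing) bandwidth
PKT_BYTES = 76           # assumed G.729-30 ms voice packet size
QUEUE_LIMIT = 3060       # EF queue limit from Example 5-10

# Worst-case queuing delay: a full queue drained at the EF service rate
delay_s = QUEUE_LIMIT * PKT_BYTES * 8 / (OC3_BPS * EF_SHARE)  # ~0.030 s
```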
Globenet applies a core QoS egress service policy such as the one described in Example 5-10 on all core-facing interfaces of PE routers as well as on all P router interfaces. Because a few parameters, such as WRED fine-tuning and maximum instantaneous queue size, depend on the interface bandwidth, a different service policy is created for every interface type (OC-3, OC-48, and so on). The rest of the service policy (including definition of classes and scheduling configuration) is the same, independent of the interface bandwidth.
QoS Design in the Core Network on ATM PVCs
In the Asia-Pacific region, Globenet uses ATM PVCs supported on its own ATM infrastructure to interconnect some of its P routers and P/PE routers. Several models are conceivable for QoS interworking between the IP/MPLS layer and the ATM layer. Globenet selected a simple QoS interworking model whereby each ATM PVC is seen and used by the IP/MPLS layer exactly as if it were a point-to-point link, except that it can be of arbitrary speed:
At the IP/MPLS layer, the DiffServ mechanisms are applied independently over each ATM PVC exactly as if it were a point-to-point link.
At the ATM layer, to make sure that the ATM VC is closely emulating a point-to-point link, Globenet uses a number of techniques. First, on the ATM switches, Globenet provisions the ATM PVC with an ATM traffic class of VBR-rt and with a Sustainable Cell Rate (SCR) equal to the targeted IP bandwidth (taking into account the ATM cell header and cell packing overhead). The VBR-rt ATM traffic class ensures that ATM cells are switched through the ATM infrastructure with very low delay and jitter. The provisioned SCR ensures that, as long as the router transmits at the SCR rate or below, enough resources will be reserved in the ATM infrastructure to carry the offered traffic on the PVC with negligible cell loss. Finally, on the routers, Globenet activated ATM-level traffic shaping so that traffic is shaped in accordance with the provisioned SCR. Three configurable parameters control the detailed behavior of each per-VC ATM shaper on Globenet routers: SCR, Peak Cell Rate (PCR), and a committed burst (which defines the size of the burst that the shaper can transmit in excess of SCR but at, or below, PCR). Globenet configured the ATM shapers with a PCR and an SCR, both equal to the SCR provisioned on the ATM PVC and with a very small burst. This ensures that routers shape traffic very smoothly against the ATM PVC SCR and that absolutely all the cells transmitted are within the ATM PVC traffic contract. Hence, all the transported IP/MPLS traffic is guaranteed to experience very low delay/jitter and negligible loss.
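The cell and encapsulation overhead that Globenet factors into the SCR can be estimated with a quick calculation. This is a sketch; it assumes AAL5 with LLC/SNAP encapsulation (8 bytes) plus the 8-byte AAL5 trailer, consistent with the `encapsulation aal5snap` that appears in Example 5-12.

```python
import math

CELL_PAYLOAD = 48   # bytes of payload per ATM cell
CELL_SIZE = 53      # bytes per cell on the wire
LLC_SNAP = 8        # LLC/SNAP header (aal5snap encapsulation)
AAL5_TRAILER = 8    # AAL5 trailer

def atm_wire_bytes(packet_bytes):
    """Bytes actually sent on the ATM wire for one IP/MPLS packet."""
    pdu = packet_bytes + LLC_SNAP + AAL5_TRAILER
    cells = math.ceil(pdu / CELL_PAYLOAD)   # AAL5 pads to a whole cell
    return cells * CELL_SIZE

# A 1500-byte packet becomes 32 cells = 1696 bytes, roughly 13% overhead;
# this is why a 20-Mbps IP rate maps to a 24000-kbps SCR with some margin
overhead = atm_wire_bytes(1500) / 1500
```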
With this model, the congestion problem is effectively completely pushed out of the ATM layer (which provides a fixed-rate pipe). It is dealt with entirely at the IP layer, which is responsible for adapting the aggregate IP rate to the fixed rate supported by the ATM PVC. This is very attractive to Globenet because it provides a very simple operational demarcation point between the IP/MPLS layer and the ATM layer. Also, congestion can be dealt with very selectively in the IP/MPLS layer because it has full awareness of the CoSs. Finally, this model allows Globenet to apply virtually the same IP/MPLS QoS service policies over ATM PVCs and hence have a consistent QoS approach regardless of the underlying transport layer.
Globenet elected the VBR-rt traffic class instead of CBR because it offers delay/jitter and loss levels that are perfectly satisfactory for the IP/MPLS traffic (including the demanding VPN Voice CoS) while monopolizing fewer resources on the ATM network.
Globenet also considered a more complex QoS interworking model involving a variation of the VBR traffic class in which the ATM switches accept ATM traffic with some level of burstiness. For example, Globenet considered allowing the router to send some traffic in excess of the SCR and marking the less-important IP traffic (VPN Business Throughput out-of-contract, VPN Standard, Internet) with CLP=1 (the Cell Loss Priority bit set). This had the potential benefit of letting the IP/MPLS traffic take more advantage of statistical gains inside the ATM network and use excess capacity on a best-effort basis, while ensuring that the less-important traffic is dropped first by the ATM switches (through the CLP bit) in case of congestion inside the ATM network. However, Globenet saw risks of QoS degradation for the important CoSs. For example, if only a very small rate of less-important IP/MPLS traffic were currently present, the important traffic would effectively be allowed to burst beyond the SCR. It would then potentially be subject to remarking of the CLP bit, and eventually to discard, in case of temporary congestion within the ATM network. Also, this model blurs the operational demarcation point between the IP/MPLS layer and the ATM layer, which would make troubleshooting unexpected QoS degradation more difficult. Finally, Globenet generally expects discard at the IP/MPLS layer to interact more smoothly with elastic IP traffic than discard at the ATM layer. The main reason is that random discard mechanisms such as RED/WRED applied at the IP/MPLS layer are specifically designed and fine-tuned to optimize their interaction with transport protocols' congestion control mechanisms.
To implement the simple QoS interworking model selected by Globenet, the egress QoS policy applied on ATM interfaces involves the following:
Defining a QoS service policy that is the same as the one used on other types of links, only with fine-tuning of the rate-dependent parameters (RED/WRED profiles and queue limits) according to the range of PVC rates.
For each ATM PVC, activation of per-VC ATM traffic shaping at that VC's SCR.
For each ATM PVC, application of the QoS service policy at the VC level. This relies on the router's support of per-VC queuing, whereby a logically separate scheduler runs independently for each ATM VC, schedules packets according to the service policy, and supplies them to the per-VC ATM shaper. Operation of such per-VC queuing and per-VC shaping is illustrated in Figure 5-34.
Figure 5-34. Per-VC Queuing on ATM PVCs in the Globenet Core
[View full size image]

A short first-in, first-out (FIFO) buffer (called the Tx-ring on a Cisco router) is used on Globenet routers to hand over to the transmission logic the packets selected by the scheduler. As discussed in the "CE Router Egress Policy" section in Chapter 4, this Tx-ring may introduce a small additional delay/jitter component that applies indiscriminately to any traffic (including the VPN Voice traffic in that case). Although this is always negligible on high-speed links, it can be noticeable on lower-speed links if the Tx-ring size is too large. Hence, Globenet felt that fine-tuning of the Tx-ring size was justified on ATM PVCs considering that optimum delay/jitter is sought for the VPN Voice CoS. In the context of ATM, there is a separate logical Tx-ring buffer for each PVC that controls how packets from the multiple VCs are handed over to the ATM Segmentation and Reassembly (SAR) logic and that enforces isolation across VCs. This per-VC logical Tx-ring is illustrated in Figure 5-34. Its fine-tuning obeys the same trade-offs as over the point-to-point link. The smaller the Tx-ring size, the smaller the introduced delay/jitter. However, the Tx-ring size must not be too small, because this could result in under-run of the Tx-ring buffer and an inability to achieve the targeted ATM rate. For example, on an ATM PVC with a rate of 20 Mbps, assuming a 1500-byte MTU, a Tx-ring size of four packets as selected by Globenet results in a worst-case delay/jitter contribution of 2.4 ms.
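The 2.4-ms figure follows directly from the Tx-ring size, the MTU, and the PVC rate. A sketch of the worst-case calculation:

```python
TX_RING_PACKETS = 4      # tx-ring-limit selected by Globenet
MTU_BYTES = 1500
PVC_RATE_BPS = 20e6      # 20-Mbps ATM PVC

# Worst case: the Tx-ring is full of MTU-sized packets ahead of a voice packet
tx_ring_delay_s = TX_RING_PACKETS * MTU_BYTES * 8 / PVC_RATE_BPS  # 2.4 ms
```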
A QoS egress policy applied on an ATM PVC in the Globenet core is provided in Example 5-12.
Example 5-12. Core Egress QoS Service Policy Template on ATM PVC in EMEA and AsiaPac
!
class-map match-any class-RealTime
match mpls exp 5
class-map match-any class-Video
match mpls exp 4
class-map match-any class-Latency
match mpls exp 3
class-map match-any class-Throughput
match mpls exp 2
match mpls exp 1
match dscp 48
match mpls exp 6
!
policy-map Core-QoS-ATM-policy
class class-RealTime
priority percent 40
queue-limit 395 packets
class class-Video
bandwidth percent 20
queue-limit 500 packets
class class-Latency
bandwidth percent 5
queue-limit 500 packets
class class-Throughput
bandwidth remaining percent 83
random-detect precedence-based
random-detect exponential-weighting-constant 9
random-detect precedence 6 28 184 1
random-detect precedence 2 28 184 1
random-detect precedence 1 10 28 1
queue-limit 500 packets
class class-default
bandwidth remaining percent 17
random-detect
random-detect exponential-weighting-constant 7
random-detect 6 39 1
queue-limit 500 packets
!
vc-class atm Core-20Mb
vbr-rt 24000 24000 10
oam-pvc manage
encapsulation aal5snap
!
interface ATM8/0/0.1 point-to-point
ip address interface-prefix mask
pvc Singapore-to-NewDelhi 0/112
class-vc Core-20Mb
tx-ring-limit 4
service-policy out Core-QoS-ATM-policy
!
As explained in the "Setting the Maximum Reservable Bandwidth on Each Link" section, Globenet systematically took into account the lower layers' overhead when configuring the reservable bandwidth on a link, whether it is a PoS link with PPP, an ATM PVC, and so on.
QoS Design in the Core Network in North America
Because high-speed links are more readily available and have a much lower cost in the North America region, Globenet deployed a simpler and coarser-grain QoS design in North America than in other regions.
First, Globenet decided to aggregate several CoSs in the core and hence to manage only three queues. Figure 5-35 illustrates the mapping of CoSs into these three queues in North America.
Figure 5-35. Usage of the Three Core Queues in North America
[View full size image]

Second, Globenet elected to simply rely on capacity planning with some level of overengineering to ensure that adequate service rate is granted by each queue to its transported traffic to meet its respective QoS requirements. Hence, neither MPLS DiffServ-aware TE nor regular MPLS TE is used in North America to perform constraint-based routing or admission control of traffic.
As in the other regions, Globenet uses a strict priority queue as the EF queue to offer optimum delay and jitter to the EF traffic. It also applies a conditional policer to 40 percent of link bandwidth as a safety measure to protect the rest of the traffic. In North America, the AF2 queue is used for traffic scheduled in the AF2, AF3, and AF4 queues in other regions. Therefore, it needs to be allocated a higher proportion of the remaining bandwidth than in other regions. Conversely, the DF queue, which carries the same CoSs as in other regions, needs to be allocated a smaller share of the remaining bandwidth. Thus, Globenet allocated 89 percent of the remaining bandwidth to the AF2 queue and 11 percent of the remaining bandwidth to the DF queue. Globenet selected these relative allocations to ensure the same relative share of the bandwidth to the DF queue as in other regions:
In North America, assuming a maximum sustained load in the EF queue of 30 percent, the remaining bandwidth is 70 percent. 11 percent of this 70 percent represents 7.7 percent of the link bandwidth.
In other regions, also assuming a maximum sustained load in the EF queue of 30 percent, and because the AF4 and AF3 queues are allocated 20 percent and 5 percent of the link bandwidth, respectively, the remaining bandwidth is 45 percent. The 17 percent of this 45 percent remaining bandwidth represents 7.65 percent of the link bandwidth.
Similarly, the AF2 queue receives the same relative share of bandwidth in North America as the AF2, AF3, and AF4 queues collectively receive in other regions.
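The 89/11 split can be verified against the other regions' allocations. A quick check, using the 30 percent EF load assumption and the per-queue percentages given in the text:

```python
# Other regions: EF 30%, AF4 20%, AF3 5%, remainder split 83/17 (AF2/DF)
remaining_other = 1.0 - 0.30 - 0.20 - 0.05          # 45% of link
df_other = 0.17 * remaining_other                    # 7.65% of link
af_other = 0.20 + 0.05 + 0.83 * remaining_other      # AF2+AF3+AF4 combined

# North America: EF 30%, remainder split 89/11 (AF2/DF)
remaining_na = 1.0 - 0.30                            # 70% of link
df_na = 0.11 * remaining_na                          # 7.7% of link
af_na = 0.89 * remaining_na                          # 62.3% of link
```

The DF queue ends up with nearly the same share of link bandwidth in both designs (7.7 versus 7.65 percent), and the North American AF2 queue matches the combined AF2+AF3+AF4 share elsewhere to within a fraction of a percent.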
In addition to granting a different share of the link capacity to each queue, Globenet enforces an aggregate capacity planning policy to trigger provisioning of additional link capacity whenever
The aggregate load across all traffic reaches 55 percent of the link capacity, in the absence of failure, as determined by the monitoring of interface counters.
or
The aggregate load across all traffic would reach 90 percent of the link capacity, should one of the links or nodes fail, as determined by a centralized simulation tool collecting current network topology, estimating traffic matrix, and assessing the theoretical load on all links resulting from any single-failure situations.
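The two triggers can be summarized as a simple predicate. This is an illustrative sketch: the function and argument names are mine; only the thresholds come from the text.

```python
def needs_more_capacity(load_no_failure, worst_load_single_failure):
    """True if either capacity-planning trigger fires: 55% aggregate load
    with no failure, or 90% projected load under any single failure."""
    return load_no_failure >= 0.55 or worst_load_single_failure >= 0.90
```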
QoS Design in the Core Network Across Regions
As explained in the "Layer 3 MPLS VPN Service Design" section, inter-AS option B is used for Layer 3 MPLS VPN operations across regions of the Globenet network. Hence, the VPN packets are label-switched by the ASBRs and are encapsulated with an MPLS header on the links between ASBRs. By default, the label-switching behavior in the IOS implementation is to copy the received EXP value in the transmitted MPLS header. This means that all the VPN packets transmitted on the link between ASBRs naturally have their EXP field set according to Globenet policy.
Thus, Globenet simply applies similar QoS egress policies on the links between ASBRs across regions, as it does on core links in a region. On the OC-48 link between North America and Europe, a QoS egress policy with three queues (as used within North America) is applied at both ends of the link. On all other links, which are OC-3 or E3, QoS egress policies with five queues (as used within EMEA, AsiaPac, and South America) are applied.
As explained in the "Layer 3 MPLS VPN Service Design" section, traffic going from an Internet CE router to a destination that is in another region of the world and that is not also attached to Globenet's network exits the Globenet network in the ingress region because each region has its own local Internet peering point(s). This means that traffic going to the rest of the Internet never travels over the interregion links. The only Internet traffic Globenet carries over the interregion links is the traffic directly exchanged between two Internet CE routers attached to Globenet in different regions. The proportion of Internet traffic on the interregion links is then somewhat smaller than within each region. For this reason, Globenet allocates a smaller proportion of the remaining bandwidth to the DF queue on these links than within the regions.
More importantly, it allocates a larger proportion of the remaining bandwidth to the AF2 queue to maximize the QoS of the corresponding traffic on these constrained links.
The Internet traffic is carried in native IP packets (non-MPLS encapsulated) on the interregion links. However, no additional classification configuration is required in the egress QoS policies applied on the interregion links. The traffic going into the DF queue is classified using the concept of class default, which captures not only MPLS packets marked with EXP=0 but also IP packets marked with DSCP=0 because those are not explicitly classified into the other queues.
QoS Design on the Network Edge for Layer 3 MPLS VPN and Internet
The QoS design on the edge of the Globenet network is made up of QoS mechanisms on the CE routers (when managed by Globenet) and on the user-facing interfaces on PE routers.
CE Router Egress Policy
The key elements of the CE router egress policy are the same as those discussed for Telecom Kingland in the "CE Router Egress Policy" section in Chapter 4. In particular, assuming again Frame Relay access, the same hierarchy applies across
The physical interface bandwidth
The Committed Information Rate (CIR) enforced via Frame Relay traffic shaping
The bandwidth allocated to each queue by the scheduler operating over the Frame Relay traffic shaping
Globenet also uses fragmentation and interleaving (FRF.12 in the case of Frame Relay) as well as fine-tuning of the Cisco Tx-ring to optimize delay and jitter for real-time traffic on low-speed accesses. Globenet also configures its egress QoS policy so that the policing actions enforced for each CoS do not apply to the Service Assurance Agent (SAA) sample traffic that is used to measure performance for that CoS.
Of course, one difference with the design of Telecom Kingland is that Globenet supports five CoSs instead of three. Another difference is that Globenet handles routing traffic and management traffic on the access link in the same queue as the Layer 3 MPLS VPN Business Throughput CoS instead of handling it in a dedicated user-hidden queue. To that end, as in the core, Globenet uses a local policy to set the DSCP of locally generated traffic destined for the Network Management System to the DSCP=16 value of the Layer 3 MPLS VPN Throughput CoS.
To further protect routing and management traffic, Globenet excludes this traffic from the scope of the policing applied to the rest of the VPN Business Throughput in the same manner as it excludes SAA traffic. This is achieved by applying policing through a child policy whose class explicitly excludes the routing and management traffic in addition to the SAA traffic. Otherwise, in case of high load in the Layer 3 MPLS VPN Business CoS, some routing and management traffic could be remarked as out-of-contract and then subject to aggressive WRED discard by the CE router or further downstream in the network.
Example 5-13 details a QoS service policy applied on a CE router. The customer contracted a 512-kbps CIR on a Frame Relay access and elected to allocate 25 percent of the CIR to the VPN Voice CoS, 10 percent to the VPN Latency CoS, and 50 percent to the VPN Throughput CoS. This customer does not use the VPN Video CoS.
Example 5-13. CE Egress QoS Service Policy Template for a VPN Site with Four CoSs
!identifies Routing Traffic
access-list 100 permit tcp any eq bgp any
access-list 100 permit tcp any any eq bgp
!
!identifies Management Traffic
access-list 101 permit ip host CE-loopback-address Management-subnet mask
!
!identifies VPN Voice traffic
access-list 102 permit classification-criteria-provided-by-customer-for-Voice
!
!identifies VPN Business Latency traffic
access-list 104
permit classification-criteria-provided-by-customer-for-Business-Latency
!
!identifies VPN Business Throughput traffic
access-list 105
permit classification-criteria-provided-by-customer-for-Business-Throughput
!
!identifies SAA Traffic
access-list 106 permit ip host CE-loopback-address
host SAA-shadow-router-address
access-list 106 permit ip host CE-loopback-address
host remote-CE-SAA-responder-router-address
!
!local route map (applies on locally generated traffic to
!mark management traffic)
ip local policy route-map LocalTraffic
!
route-map LocalTraffic permit 10
match ip address 101
set ip dscp 16
!
!class-map used below to exclude SAA traffic (from traffic to be policed)
class-map match-all class-NotSAA
match not ip access-group 106
!
!class-map used below to exclude SAA, Management, and Routing traffic
!(from traffic to be policed)
class-map match-all class-NotSAAManagementRouting
match not ip access-group 106
match not ip access-group 101
match not ip access-group 100
!
class-map match-any class-VPNVoice
match dscp 40
match dscp 46
match ip access-group 102
!
class-map match-any class-VPNLatency
match dscp 26
match ip access-group 104
!
class-map match-any class-VPNThroughput
match dscp 16
match ip access-group 105
match ip access-group 100
match ip access-group 101
!
policy-map police-VPNVoiceNotSAA
class class-NotSAA
police cir percent 25 bc 30 ms conform-action set-dscp-transmit 46
exceed-action drop
!
policy-map police-VPNLatencyNotSAA
class class-NotSAA
police cir percent 10 conform-action set-dscp-transmit 26
exceed-action drop
!
policy-map police-VPNThroughputNotSAAManagementRouting
class class-NotSAAManagementRouting
police cir percent 50 bc 400 ms conform-action set-dscp-transmit 16
exceed-action set-dscp-transmit 8
!
policy-map CE-to-PE-QoS-policy
class class-VPNVoice
priority
service-policy police-VPNVoiceNotSAA
class class-VPNLatency
bandwidth percent 10
service-policy police-VPNLatencyNotSAA
class class-VPNThroughput
bandwidth percent 50
random-detect dscp-based
random-detect exponential-weighting-constant 3
random-detect dscp 16 66 198 1
random-detect dscp 8 22 66 1
service-policy police-VPNThroughputNotSAAManagementRouting
class class-default
bandwidth remaining percent 100
set ip dscp 0
random-detect
random-detect exponential-weighting-constant 3
random-detect 22 66 1
!
map-class frame-relay map-class-CE-to-PE
frame-relay cir 512000
frame-relay mincir 512000
frame-relay bc 5120
frame-relay fragment 320
service-policy output CE-to-PE-QoS-policy
!
int serial0/0
tx-ring-limit 2
frame-relay traffic-shaping
!
int serial0/0.1
ip address CE-interface-prefix mask
frame-relay interface-dlci 100
frame-relay class map-class-CE-to-PE
!
rtr responder
!
Assuming a G.729-30 ms codec, each VoIP call represents about 20 kbps of traffic at the IP layer. This means that the VPN Voice CoS contracted rate (25 percent of the 512-kbps CIR) can accommodate six simultaneous VoIP calls. The burst tolerance configured in the VPN Voice policer is set to 30 ms so that it can accommodate the simultaneous burst of one packet from each of the six simultaneous calls. The packet size with G.729-30 ms calls is 76 bytes so that the maximum burst could be 6 * 76 = 456 bytes, which fits within 30 ms at a rate of 25 percent of 512 kbps.
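The call count and burst tolerance above can be checked numerically. A sketch using the figures from the text:

```python
CIR_BPS = 512_000
VOICE_SHARE = 0.25                     # VPN Voice share of the CIR
PKT_BYTES = 76                         # G.729-30 ms packet size
PKT_INTERVAL_S = 0.030                 # one packet every 30 ms per call

voice_bps = VOICE_SHARE * CIR_BPS                  # 128 kbps
call_bps = PKT_BYTES * 8 / PKT_INTERVAL_S          # ~20.3 kbps per call
max_calls = int(voice_bps // call_bps)             # 6 simultaneous calls

burst_bytes = max_calls * PKT_BYTES                # 456 bytes
burst_s = burst_bytes * 8 / voice_bps              # ~28.5 ms
```

Six calls at about 20.3 kbps each fit within the 128-kbps voice share, and a simultaneous one-packet burst from all six calls (456 bytes) drains in about 28.5 ms, within the configured 30-ms burst tolerance.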
The burst tolerance in the policer for the VPN Business Throughput CoS is configured so that it can accommodate one RTT's worth of traffic. Globenet tests demonstrated that this setting generally allows effective interaction with TCP flows, so that they can collectively achieve a global transfer rate that is close to the contracted rate at all times (and more when spare capacity is available to accommodate out-of-contract traffic).
For fine-tuning of the regular RED profile on low-speed links, Globenet uses the same formulas as the ones used by Telecom Kingland and presented in the "CE Router Egress Policy" section in Chapter 4:
The exponential weighting constant n is such that
2^-n = 1 / B, where B = bandwidth / (MTU * 8)
(with MTU = 1500 bytes)
The minimum and maximum thresholds are equal to 100 percent and 300 percent of B, respectively. The maximum drop probability is set to 1.
However, unlike Telecom Kingland, Globenet offers a VPN Business Latency CoS that specifically addresses mission-critical traffic with a low delay requirement. Therefore, Globenet is more interested in optimizing achievable TCP throughput in the VPN Business Throughput CoS than minimizing its delay. Consequently, Globenet took a different WRED fine-tuning approach than Telecom Kingland. Rather than using the regular RED minimum and maximum thresholds for the in-contract traffic and using smaller (and hence very aggressive) thresholds for the out-of-contract traffic, Globenet used the regular RED minimum and maximum thresholds for the out-of-contract traffic and larger (and hence more lenient) thresholds for the in-contract traffic. Specifically, it used a minimum and a maximum threshold for the in-contract traffic. These are equal to 100 percent and 300 percent, respectively, of the maximum threshold used for out-of-contract traffic. (In other words, Globenet used a minimum and maximum threshold set to 300 percent and 900 percent of the pipe size, respectively.) This means that the average and maximum delay experienced by the in-contract traffic may be somewhat increased. However, this allows end users to significantly increase the effective TCP throughput they can obtain from the VPN Business Throughput CoS, even beyond their contracted rate, when the data path has spare capacity. The subset of mission-critical traffic with tight delay requirements can still be handled optimally through the VPN Business Latency CoS. These WRED drop profiles for the AF2 queue are illustrated in Figure 5-36.
Figure 5-36. WRED Drop Profiles in the AF2 Queue on the Edge

PE Router Ingress Policy
On interfaces attaching unmanaged Internet CE routers, it is essential that incoming traffic be remarked with the DSCP value of the Internet CoS so that it gets the appropriate treatment throughout the Globenet network. Internet traffic also must not be able to steal any resources destined for other CoSs, no matter what DSCP marking the Internet customer may be intentionally (or unintentionally) setting. To that end, Globenet applies a very simple QoS input policy on interfaces attaching unmanaged Internet CE routers: it systematically remarks the packet DSCP to 0 on all received traffic, as illustrated in Example 5-14.
Example 5-14. PE Router Ingress QoS Policy for Unmanaged Internet CE Routers
policy-map EdgeInInternet-QoS-policy
class class-default
set dscp 0
!
map-class frame-relay map-class-CE-to-PE
service-policy input EdgeInInternet-QoS-policy
!
int serial0/0.1
frame-relay interface-dlci 100
class map-class-CE-to-PE
Similarly, on interfaces attaching unmanaged VPN CE routers, Globenet needs to police the traffic sent in each CoS against its contracted rate. In the case of the unmanaged Layer 3 MPLS VPN service, it is the customer's responsibility to ensure that traffic sent by the unmanaged CE router toward the PE router has been marked according to Globenet's DSCP values for the five Layer 3 MPLS VPN CoSs. So, on the PE router, Globenet only needs to perform classification based on the DSCP field, according to Globenet's DSCP scheme. Example 5-15 shows an input QoS policy applied on an interface attaching an unmanaged Layer 3 MPLS VPN CE router.
Example 5-15. PE Router Ingress QoS Policy for Unmanaged MPLS VPN CE Routers
!
class-map match-any class-VPNVoice
match dscp 46
!
class-map match-any class-VPNVideo
match dscp 34
!
class-map match-any class-VPNLatency
match dscp 26
!
class-map match-any class-VPNThroughput
match dscp 16
!
policy-map EdgeInVPN-QoS-policy
class class-VPNVoice
police cir percent 25 bc 30 ms conform-action transmit exceed-action drop
class class-VPNVideo
police cir percent 25 conform-action transmit exceed-action drop
class class-VPNLatency
police cir percent 10 conform-action transmit exceed-action drop
class class-VPNThroughput
police cir percent 20 bc 400 ms conform-action transmit exceed-action set-dscp-transmit 8
class class-default
set ip dscp 0
!
map-class frame-relay map-class-CE-to-PE
service-policy input EdgeInVPN-QoS-policy
!
int serial0/0.1
frame-relay interface-dlci 100
class map-class-CE-to-PE
As a security measure against potential replacement of or tampering with the Globenet-managed CE router located on the customer premises, Globenet also applies per-CoS policing on interfaces attaching managed CE routers. The same type of input policy as for unmanaged CE routers (shown in Example 5-15) is used for that purpose.
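The percent-based policers in Example 5-15 resolve to absolute values at attach time: the CIR is the given percentage of the underlying link rate, and a burst expressed in milliseconds is converted to bytes at that CIR. This arithmetic can be sketched in Python (the 2-Mbps access rate is an assumed example, not part of the configuration):

```python
def resolve_policer(link_bps, cir_percent, bc_ms):
    """Resolve a percent-based policer to an absolute CIR (bps)
    and a committed burst Bc (bytes) at that CIR."""
    cir_bps = link_bps * cir_percent / 100
    bc_bytes = cir_bps * (bc_ms / 1000) / 8  # time-based burst at the CIR
    return cir_bps, bc_bytes

# VPN Voice on an assumed 2-Mbps link: police cir percent 25 bc 30 ms
voice_cir, voice_bc = resolve_policer(2_000_000, 25, 30)
# VPN Business Throughput: police cir percent 20 bc 400 ms
tput_cir, tput_bc = resolve_policer(2_000_000, 20, 400)
```

The much larger 400-ms burst on the VPN Business Throughput class accommodates TCP burstiness, whereas the tight 30-ms burst on VPN Voice reflects the smooth arrival pattern expected of voice traffic.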
PE Router Egress Policy
The egress policy on the PE router to manage the link toward the CE router is very similar to the managed CE router egress policy (detailed in the "CE Router Egress Policy" section). The main difference is that classification can be performed directly on the DSCP values, because all the traffic has already been classified and marked by the ingress CE router.
QoS Design for the Interprovider VPN with Telecom Kingland
One objective of the partnership with Telecom Kingland (discussed in Chapter 4) is to offer the same Layer 3 MPLS VPN service features to all sites of a customer, whether these sites are attached directly to Globenet or to Telecom Kingland. To that end, Telecom Kingland supports the Globenet QoS offering (including the five Globenet CoSs) on the access links attaching sites that belong to a Globenet VPN (instead of Telecom Kingland's regular QoS offering with three CoSs: VPN Real-Time, VPN Premium, and VPN Standard). Using this method, an end customer has to deal with only a single QoS offering for its VPN, even when the VPN contains some sites attached to Globenet and other sites attached to Telecom Kingland.
Figure 5-37 illustrates how consistent QoS is achieved end to end between two VPN sites, one attached to Telecom Kingland and the other attached to Globenet. It also provides the QoS markings and policies at every step of the path.
Figure 5-37. QoS Markings and Policies in the Interprovider VPN with Telecom Kingland

To facilitate end-to-end QoS operation, Telecom Kingland uses Globenet's DSCP values for marking by the CE routers. This allows support of five CoSs end to end and avoids the need to map, at the boundary between the two networks, from one DSCP value scheme to another. Thus, Telecom Kingland applies on the CE router an egress QoS policy based on both Globenet's DSCP marking scheme and its five-CoS offering.
For transport over its backbone, Telecom Kingland wants to schedule all the VPN traffic, regardless of its CoS, in its DF queue, which is the one designed to carry all VPN traffic, including VPN voice traffic. So it needs to make sure that the PE router sets the EXP field of the label stack entries pushed by the ingress PE router to the value corresponding to the DF queue in Telecom Kingland's marking scheme, which is EXP=0. However, the PE router must not overwrite the DSCP value that was set by the CE router according to Globenet's marking scheme. It is preserved and can be used by Globenet downstream of Telecom Kingland to apply the corresponding QoS treatment. To that end, on the ingress PE router, Telecom Kingland applies a specific input QoS policy on interfaces attaching CE routers belonging to a Globenet VPN. It contains a sophisticated marking configuration that leaves the DSCP field untouched but that sets the EXP field of all the pushed label stack entries to 0. This input policy is illustrated in Example 5-16. Note that if Telecom Kingland didn't apply this policy, the traffic marked as VPN Business Latency (for which the Globenet DSCP value is 26, and which hence maps by default into EXP=3) would be scheduled in Telecom Kingland's AF3 queue. This queue is dedicated to transport of essential control and signaling traffic, because EXP=3 is the value allocated by Telecom Kingland to its telephony transit signaling traffic. Of course, this would be unacceptable to Telecom Kingland.
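The default mapping at work here simply copies the three most significant bits of the DSCP (the IP precedence) into the imposed EXP field, which is why DSCP 26 would land in EXP 3 unless overridden. A quick Python sketch of this mapping rule (an illustration of the arithmetic, not an IOS feature):

```python
def default_dscp_to_exp(dscp):
    """Default imposition behavior: the EXP field inherits the top
    3 bits of the 6-bit DSCP (that is, the IP precedence)."""
    return dscp >> 3

# VPN Business Latency (DSCP 26) maps by default into EXP 3,
# the value Telecom Kingland reserves for signaling traffic...
assert default_dscp_to_exp(26) == 3
# ...whereas VPN Voice (DSCP 46, EF) would map into EXP 5.
assert default_dscp_to_exp(46) == 5
```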
Observe that Telecom Kingland effectively manages to apply its own QoS scheme through its MPLS backbone while preserving the Globenet QoS marking in the DSCP of the transported IP packets. This optional capability of an MPLS DiffServ backbone is called QoS transparency. It can be very useful where multiple DiffServ administrative domains are interconnected, such as in the interprovider VPN scenario considered here with Globenet and Telecom Kingland. The specific method that Telecom Kingland uses to achieve QoS transparency is called the short pipe DiffServ tunneling model. It is characterized by the following:
On MPLS imposition: Mark the imposed EXP value without modifying the QoS marking of the encapsulated traffic (the DSCP in the considered scenario).
On MPLS disposition: Ignore the EXP value as received before the MPLS label pop, leave the QoS marking of the exposed header unchanged (the DSCP in the considered scenario), and use this QoS marking for egress scheduling.
A second DiffServ tunneling model, called the pipe model, also supports QoS transparency. The only difference from the short pipe model is that, on MPLS disposition, egress scheduling is based on the EXP value as received before the MPLS label pop. This is useful in environments where the MPLS network operator does not want to know the DiffServ policy of the transported traffic, even on the disposition router. Because Telecom Kingland is aware of the Globenet DiffServ policy and the Globenet policy is the one to be applied at the egress of the Telecom Kingland core, Telecom Kingland deployed the short pipe model.
Example 5-16. Telecom Kingland's PE Router Ingress QoS Policy Template for Globenet's VPN
policy-map EdgeInGlobenet-QoS-policy
class class-default
set mpls exp imposition 0
!
map-class frame-relay map-class-CE-to-PE-Globenet
service-policy input EdgeInGlobenet-QoS-policy
!
int serial0/0.1
frame-relay interface-dlci 100
class map-class-CE-to-PE-Globenet
Because Layer 3 MPLS VPN inter-AS option A is used between Telecom Kingland and Globenet, the Telecom Kingland egress PE router pops the MPLS label stack and transmits the VPN packets as native IP packets over a link between back-to-back VRFs. As per the default operation of PE routers in the Cisco IOS implementation, the egress PE router leaves the exposed DSCP field untouched. Therefore, it contains DSCP markings as set by the ingress CE router according to Globenet's scheme.
As explained in the "Layer 3 MPLS VPN Service Design" section, the back-to-back VRF links are instantiated as VLANs over a Gigabit Ethernet link between the Globenet PE router and the Telecom Kingland PE router. For proper treatment of the Layer 3 MPLS VPN packets on that Gigabit Ethernet link, Telecom Kingland again applies a QoS egress policy based on Globenet's marking. This policy is similar to the three-queue policy applied by Globenet in the North America region. Note that there is no need to apply hierarchical QoS policies with some per-VLAN QoS policy on that interface. (An example is a first level of policy at the VLAN level enforcing per-VLAN shaping, with a second level of policy applying the three queues separately for each VLAN over its shaped rate.) Globenet is marketing a seamless QoS service through Telecom Kingland. Therefore, the user remains unaware of the interprovider boundary and does not subscribe to any contracted bandwidth specific to the interprovider boundary to be enforced by Globenet. Rather, scheduling is applied purely based on the three CoSs, irrespective of the VPN/VRF to which packets belong.
Finally, Globenet receives packets and treats them in the exact same way as the rest of the Globenet Layer 3 MPLS VPN traffic, because they are received with their marking.
End-to-end QoS operations in the reverse direction (that is, for traffic from a site connected to Globenet toward a site connected to Telecom Kingland) are very similar. Globenet handles packets in the exact same way as it does for the rest of the Layer 3 MPLS VPN traffic. In this direction, Telecom Kingland needs to apply the following:
An ingress QoS policy to set the imposed EXP to 0 on the PE router receiving the traffic from Globenet
A three-queue egress QoS policy on the egress PE router that is based on Globenet's DSCP marking scheme, on the interfaces attaching Layer 3 MPLS VPN sites belonging to a Globenet VPN
QoS Design for Multicast Traffic
Globenet's mVPN service definition includes the following treatment of multicast traffic:
All the multicast traffic is handled in the VPN Standard CoS. This ensures that multicast traffic cannot affect any of the other Layer 3 MPLS VPN CoSs.
The aggregate multicast traffic generated by each site is limited to an agreed-upon rate that is smaller than the access link speed. This provides some basic protection against one site's sending multicast traffic at full access link speed. In turn, this provides some basic protection to the unicast traffic sharing the VPN Standard CoS with the mVPN traffic.
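The per-site limit on aggregate multicast traffic behaves like a single-rate policer. As an illustration only (a generic token-bucket sketch, not Globenet's actual implementation), its behavior can be modeled as:

```python
class TokenBucket:
    """Single-rate policer sketch: tokens (in bits) accrue at rate_bps
    up to burst_bits; a packet is forwarded only if enough tokens remain."""

    def __init__(self, rate_bps, burst_bits):
        self.rate = rate_bps
        self.burst = burst_bits
        self.tokens = burst_bits
        self.last = 0.0

    def allow(self, now, packet_bits):
        # Replenish tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_bits <= self.tokens:
            self.tokens -= packet_bits
            return True   # conforming multicast packet: forward
        return False      # exceeds the agreed aggregate rate: drop
```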
On its managed CE routers, Globenet classifies the multicast traffic as belonging to the VPN Standard CoS based on the multicast address prefix range. Then, as part of the existing VPN Standard CoS policy, the multicast packets have their DSCP field set to DSCP=0, just like unicast packets do. At the ingress PE router, it is worth remembering that multicast packets do not get encapsulated in MPLS, but rather in a multicast Generic Routing Encapsulation (GRE) tunnel. As per the default behavior of PE routers in the Cisco IOS implementation, the Globenet ingress PE routers copy the DSCP of the received multicast packet to the DSCP of the imposed multicast GRE header. Because the DSCP of the multicast packet was set to DSCP=0 on the CE router, all mVPN packets are sent to the core with the same DSCP=0 value in their multicast GRE header. As mentioned, Globenet's current QoS policies in the core ensure that any non-MPLS encapsulated IP packets with DSCP=0 are captured in the class default and thus are handled as part of the VPN Standard CoS in the DF queue. Finally, on the egress PE router, for scheduling over the PE-CE link, multicast packets are automatically classified by the existing egress QoS policy as belonging to the VPN Standard CoS. Hence, they are scheduled in the DF queue because of their DSCP=0 field (which is captured by the class default).
Globenet enforces rate limiting of the aggregate multicast traffic via the ip multicast rate-limit command.
QoS Design for the IPv6 VPN
Globenet currently supports a single CoS for the IPv6 VPN traffic, which is the VPN Standard CoS. In other words, all IPv6 traffic within a VPN is handled in the VPN Standard CoS on the CE-PE link, in the core, and on the PE-CE links.
With respect to the CE-PE link, the IPv6 traffic is automatically classified in the class default of a managed CE router because it does not match any of the explicit classification criteria defined for IPv4 by Globenet. Thus, it is naturally handled in the DF queue of the VPN Standard CoS.
For handling through the core, Globenet uses the default 6VPE behavior in IOS on the ingress PE router. This sets the EXP field to 0 for all label stack entries pushed on IPv6 VPN packets (regardless of the DSCP value in the IPv6 packets). Then, because of this EXP=0 marking, all MPLS packets carrying IPv6 VPN packets get automatically classified in the DF queue by the existing core QoS policy. (Classification, which operates on the outermost EXP field, is entirely unaware of what is actually carried inside the MPLS packet.) This applies to both managed and unmanaged CE routers.
Finally, on the egress PE router, for scheduling over the PE-CE link, IPv6 packets are again captured by the class default and scheduled in the DF queue.
In the future, after the corresponding operation has been validated, the exact same QoS offering as for IPv4 VPNs will be offered for IPv6 VPNs. Note that this does not mean that five CoSs will be supported for IPv6 in addition to the five existing CoSs for IPv4. Rather, it means that the existing five CoSs will then be usable indiscriminately by any subset of IPv4 and IPv6 traffic. Just as customers today can specify arbitrary classification criteria for which subset of IPv4 traffic belongs to which CoS, customers in the future will be able to add arbitrary criteria for which subset of IPv6 traffic also belongs to which of the five CoSs. The definition of the five CoSs, as well as their associated SLA commitments, will remain unchanged. The QoS design will build on the existing IPv4 VPN design, adding the ability for CE routers to perform detailed classification of IPv6 traffic and to mark the DSCP field of IPv6 packets (in accordance with Globenet's DSCP scheme already defined for the IPv4 VPN). Also, the ingress PE routers will be configured to apply to IPv6 VPN traffic the same default DSCP-to-EXP mapping as for the IPv4 traffic. The egress PE routers will be configured to classify IPv6 traffic based on the DSCP field (as they do today for IPv4 traffic). QoS operations in the core will remain unchanged.
Pseudowire QoS Design for ATM Trunking
As discussed earlier, the ATM pseudowire traffic is scheduled in the EF queue and steered onto the dedicated MPLS TE tunnels presented in the "TE Design for ATM Pseudowires" section.
Example 5-17 shows the configuration details of marking the EXP field of the pseudowire traffic and steering this traffic into the dedicated MPLS TE tunnels (using the preferred path feature).
Example 5-17. PE Router Template for ATM Pseudowires
!
policy-map EdgeInATM-QoS-policy
class class-default
set mpls exp imposition 5
!
pseudowire-class ATM-Trunk1
encapsulation mpls
protocol ldp
preferred-path interface tunnel tunnel-Id1 disable-fallback
! where tunnel-Id1 is the tunnel Id of the "Pseudowire Tunnel" starting
! on this PE-router and terminating on the remote PE-router PE1
!
pseudowire-class ATM-Trunk2
encapsulation mpls
protocol ldp
preferred-path interface tunnel tunnel-Id2 disable-fallback
! where tunnel-Id2 is the tunnel Id of the "Pseudowire Tunnel" starting
! on this PE-router and terminating on the remote PE-router PE2
!
interface ATM1/0/0
encapsulation aal0
atm mcpt-timers 10 20 60
xconnect remote-pe-ip-address vc-id pw-class ATM-Trunk1
cell-packing 5 mcpt-timer 2
service-policy input EdgeInATM-QoS-policy
!
interface ATM1/1/1
encapsulation aal0
atm mcpt-timers 10 20 60
xconnect remote-pe-ip-address vc-id pw-class ATM-Trunk2
cell-packing 5 mcpt-timer 2
service-policy input EdgeInATM-QoS-policy
As explained in the "Last Resort Unconstrained Option" section, all TE tunnels are configured with a last resort unconstrained option. This ensures that the TE tunnel is always routable and established, regardless of the failure situation, as long as there is IP connectivity between the headend and the tail end. Thus, the only cases in which the TE tunnel carrying ATM pseudowire traffic goes down are a complete loss of connectivity between the corresponding two PE routers or a failure of the MPLS TE control plane because of a bug or operational misconfiguration. Although these situations are expected to occur very rarely, Globenet preferred to ensure a fast and predictable reaction, maximizing the chances of recovery in the ATM network.
Note
The PE router configuration includes disabling auto-route announce on the TE tunnels used to transport ATM pseudowire traffic to make sure that no other traffic is routed onto these tunnels.
SLA Monitoring and Reporting
Globenet performs ongoing active measurement using the IOS Service Assurance Agent (SAA) in a manner very similar to Telecom Kingland's. It too uses dedicated SAA routers in every POP and activates the SAA responder on all its CE routers. It generates different sample streams (with different packet sizes and different DSCP markings) for each of the five CoSs and for POP-to-POP measurement in EMEA, AsiaPac, and South America. Within North America, only three different sample streams are generated, because only three queues are maintained in the core in that region.
Globenet also uses SAA to perform site-to-site measurement for the five CoSs when its customers request this service. For example, in large VPNs, these site-to-site measurements may be deployed among regional sites and headquarters sites, or among remote sites and the hub site(s) of the VPN.
Globenet collects and compiles this measurement data to provide end users with the monthly QoS metrics for each CoS (POP-to-POP and, where applicable, site-to-site within its VPN). These are used to validate the contractual SLA commitments. The details of this measurement method, including sample frequency, sample size, and aggregation formulas across samples, are clearly spelled out in the SLA because they condition the computed values.
For network engineering purposes, Globenet also polls, at 15-minute intervals, the MIB counters providing per-queue statistics on every core interface. In particular, it tracks the following for every queue:
The number of bytes and packets that were scheduled in the queue
The number of bytes and packets that were dropped
The current queue depth
Where RED/WRED is used, the number of bytes and packets that were dropped (for each RED/WRED profile)
For the EF queue, how many packets were policed by the conditional policer
Globenet uses this information to confirm that the relative allocation of bandwidth across the CoSs is appropriate and possibly, over time, to refine these relative allocations. Globenet also uses this information to validate current capacity planning (or trigger capacity upgrades where needed).
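From two successive polls of these counters, simple derived metrics such as per-queue drop ratio and offered load can be computed. The following Python sketch uses illustrative counter names (they are not the actual MIB object names) and the 15-minute polling interval mentioned above:

```python
INTERVAL_SECONDS = 15 * 60  # 15-minute polling interval

def queue_report(prev, curr):
    """Compute drop ratio and offered load for one queue from two
    successive polls of its byte counters (illustrative field names)."""
    scheduled = curr["scheduled_bytes"] - prev["scheduled_bytes"]
    dropped = curr["dropped_bytes"] - prev["dropped_bytes"]
    total = scheduled + dropped  # bytes offered to the queue
    return {
        "offered_bps": total * 8 / INTERVAL_SECONDS,
        "drop_ratio": dropped / total if total else 0.0,
    }

# Assumed example: one interval on a heavily loaded queue
report = queue_report(
    {"scheduled_bytes": 0, "dropped_bytes": 0},
    {"scheduled_bytes": 90_000_000, "dropped_bytes": 10_000_000},
)
```

A sustained nonzero drop ratio in a given queue would be one trigger for revisiting the relative bandwidth allocations or for a capacity upgrade.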
On customer request, Globenet also polls similar counter information on interfaces supporting PE-CE links and CE-PE links and gives the customer reports summarizing that information. This helps the customer validate, and adjust where needed, its access rate, the percentage of bandwidth allocated to each CoS, and the distribution of customer traffic over these CoSs.
