mVPN Service Application
TK initially supported Multicast for a small number of customers by building generic routing encapsulation (GRE) tunnels between customer CE routers. From a long-term perspective, TK realized that this approach was clearly not scalable, primarily because of the packet replication burden placed on the CE routers, the lack of packet replication in the core, and the management overhead of the large number of IP tunnels required between customer sites. (Replication in the core would be far more efficient, especially for high-rate Multicast sources.) Nevertheless, this approach provided a solution for the few customers that needed Multicast support.

Over time, TK's VPN customers requested more Multicast services. Therefore, TK chose to deploy Multicast services within a customer VPN on the MPC network, but not on the international network. The solution chosen was based on the mVPN model discussed in Chapter 1.
Multicast Address Allocation
Multicast support within a VPN does not imply that TK should advertise the Multicast addresses used to the IP Multicast community at large. For this reason, TK chose to use the organization local-scope Multicast address block (see [local-scope]), 239.192.0.0/14, for its mVPN service offering. This provides a usable address range of 239.192.0.0 through 239.195.255.255. These addresses are used for the Multicast domains (default MDT) and any associated data MDTs.

Within the initial Multicast design, the address range 239.192.0.1 through 239.192.15.254 is reserved for default MDTs. Because this corresponds to the /20 block 239.192.0.0/20 (2^12 = 4096 addresses), it yields a maximum of 4096 Multicast VPNs, which was considered adequate for the short to medium term. If further addresses are needed in the future, a new range of 239.192.16.0/20 will be used; it therefore is reserved.

Address blocks 239.192.32.0/20, 239.192.48.0/20, and 239.192.64.0/20 are allocated for data MDTs. Table 4-3 summarizes the Multicast address allocation.
Table 4-3. Multicast Address Allocation

| Address Block | Service Allocation | Current Usage |
|---|---|---|
| 239.192.0.0/20 | mVPN default MDT | Currently in use |
| 239.192.16.0/20 | mVPN default MDT | Reserved |
| 239.192.32.0/20 | mVPN data MDT | Currently in use |
| 239.192.48.0/20 | mVPN data MDT | Reserved |
| 239.192.64.0/20 | mVPN data MDT | Reserved |
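As an illustration of how a default MDT group is drawn from this range, the first Multicast VPN provisioned could be assigned the first usable address, 239.192.0.1. The following minimal sketch assumes a hypothetical VRF named BLUE; the name is illustrative and is not part of TK's design:

ip vrf BLUE
 mdt default 239.192.0.1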
Multicast Routing Protocol Support
The backbone Multicast routing protocol chosen for the design is PIM-SM. PIM-SM is used on all P router to P router links. If an mPE router provides Multicast services (and not all mPE routers within the MPC do), its PE-P or PE-PE link(s) are also enabled for PIM-SM. PIM-SM is not enabled on any edge device that does not provide Multicast services, which means Multicast services cannot be offered from those devices without further configuration.

PIM-SM is used for the default MDT of any Multicast VPN. This default MDT carries any Multicast control traffic generated for a given Multicast VPN. However, PIM-SSM is used for any data MDTs that are created for these VPNs. This provides a more optimal path for the traffic, and it does not require registration with, or a join via, the rendezvous point (RP); instead, each mPE router joins the source directly.

To reduce the amount of state carried at the edge of the network, the SPT threshold is set to infinity on all mPE routers. The SPT threshold specifies when a leaf router should join the shortest-path source tree for the specified group. Note that the default setting in Cisco IOS is 0, meaning that an immediate join is sent toward a given source as soon as the first Multicast packet from that source is received. Because PIM-SM is used only for the default MDTs, setting the SPT threshold to infinity ensures that all Multicast control traffic flows via the rendezvous point. This has the advantage of reducing the amount of {S, G} state at the mPE routers. The disadvantages are that a nonoptimal routing path is used and that the RP must be very robust in terms of switching/replication capability.

Group-to-RP mappings for the default MDTs are distributed using the bootstrap router (BSR) capability. This provides an automatic mechanism for advertising which Multicast groups map to which RP. Candidate BSRs (C-BSRs) originate bootstrap messages that contain a priority field, which is used to elect one of the C-BSRs as the BSR.

All the previously described capabilities are illustrated in Figure 4-19. Example 4-3 provides the basic configuration template for this design.
Figure 4-19. Backbone Multicast Design

Example 4-3. mVPN Configuration Template for PE Routers
ip multicast-routing vrf vrfname
!
ip vrf vrfname
 mdt default default-MDT-for-this-VPN
!
ip pim spt-threshold infinity group-list MDT-range
!
ip access-list standard MDT-range
 permit 239.192.0.0 0.0.15.255
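Note that Example 4-3 shows only the global commands; PIM-SM must also be enabled on each core-facing link (P-P, PE-P, or PE-PE). The following minimal sketch assumes a Gigabit Ethernet core interface, with the interface name serving only as a placeholder:

interface GigabitEthernet0/0
 ip pim sparse-mode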
Rendezvous Point and BSR Design for PIM-SM
Because PIM-SM is used in the backbone network, rendezvous points are needed. This is because PIM-SM operates by default over a unidirectional shared tree whose root is the rendezvous point. Last-hop routers join this tree when they have receivers that are interested in traffic for a given Multicast group. Therefore, all mPE routers that have Multicast-enabled VPNs join the shared tree by sending a join toward the RP for each Multicast domain.

Placement of the RPs generally depends on the location of Multicast receivers, the amount of traffic that will flow via the RP, and the location of the Multicast senders. Because TK uses PIM-SM only for the default MDTs in its Multicast VPN service, the location of senders and receivers is of less importance; PIM-SSM is used to join the source of any data MDT directly. Therefore, TK decided to deploy an RP in four of the six Level 1 POPs and to allow each of these to act as a C-BSR as well.

Because each of the Level 1 POPs has both P routers and mPE routers, consideration was given as to which of these devices might perform the RP and BSR functionality. The mPE routers were considered, but because they already provide edge functionality for various services, TK thought it inappropriate to burden them with additional control-plane work. The P routers were also rejected, because their main purpose is switching packets rather than control-plane activity (hence the Internet-free core design you saw earlier). Therefore, the final design decision was that the RPs/BSRs would be standalone routers that attach directly to the core P routers, as shown in Figure 4-20.
Figure 4-20. Rendezvous Point POP Design

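The configuration of the standalone RP/BSR routers is not reproduced here, but a minimal sketch of what one of the four RP/BSR routers could carry follows. The use of Loopback0, the hash-mask length of 0, and the C-BSR priority of 10 are illustrative assumptions; the group-list reuses the MDT-range access list from Example 4-3:

ip pim bsr-candidate Loopback0 0 10
ip pim rp-candidate Loopback0 group-list MDT-range
!
ip access-list standard MDT-range
 permit 239.192.0.0 0.0.15.255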
Use of Data-MDTs in the mVPN Design
The default behavior for a PE router receiving a Multicast packet on an mVRF interface is to forward the packet using the default MDT. This means that all PE routers that have joined the default tree receive the Multicast traffic, regardless of whether they have interested receivers. In such situations a PE router simply drops the Multicast traffic, but this is clearly suboptimal. Therefore, the mVPN architecture allows for the creation of data MDTs on a per-customer basis. Data MDTs are created based on predetermined bandwidth limits and receipt of traffic exceeding those limits from locally attached customer sources. Only PE routers that have interested receivers for the group join the data MDT.

TK chose to use the default MDT for customer and backbone control traffic only (such as customer-specific joins and so on). Therefore, data MDTs are used within the design for any Multicast traffic that exceeds a predefined threshold. (This threshold is configured as 1 kbps, which essentially causes a data MDT to be created for each active source in a customer mVPN.) Example 4-4 shows the additional commands that are added to the template from Example 4-3.
Example 4-4. mVPN Configuration Template for Data MDTs

ip vrf vrfname
 mdt data data-MDT-from-range-allocated-to-mVPN threshold 1
 mdt log-reuse
!
ip pim ssm range Data-MDT
!
ip access-list standard Data-MDT
 permit 239.192.32.0 0.0.15.255

The number of data MDTs given to a particular mVPN is determined on a customer-by-customer basis. TK uses the mdt log-reuse command in each mVRF configuration (shown in the template) so that it receives a syslog message whenever a data MDT is reused. Over time, this helps determine how many data MDTs a particular customer needs, balancing core Multicast state against the cost of sending traffic to mPE routers that have no receivers for a given Multicast group. PIM-SSM rather than PIM-SM is used to signal data MDTs, which allows each mPE router to join the source of a given customer Multicast group directly rather than via an RP.
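Whether data MDTs are actually being created and joined can be verified at the mPE routers. For example, on IOS releases that support the per-VRF MDT show commands, the following can be used (the first, run on the mPE router attached to the source, lists the data MDTs being originated; the second, run on a receiving mPE router, lists the data MDTs joined):

show ip pim vrf vrfname mdt send
show ip pim vrf vrfname mdt receive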
Restricting Multicast Routing State at mPE Routers
Because mPE router memory is a finite resource, TK decided to restrict the number of Multicast routes allowed in a given mVRF. Multicast routes, or mroutes, are used in the forwarding path and are specific to a given customer Multicast domain.

The mroutes are restricted through the use of the ip multicast vrf vrfname route-limit command. TK generates a warning when 60 percent of the configured mroute limit is reached. The maximum number of mroutes differs depending on customer requirements; however, for ease of management, TK chose a default value of 300. The company might change this value as it gains more experience with its customers' requirements.
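A minimal sketch of the corresponding command follows. The warning threshold is expressed as an absolute mroute count rather than a percentage, so 60 percent of the 300-mroute limit corresponds to a threshold of 180 (vrfname remains a placeholder):

ip multicast vrf vrfname route-limit 300 180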