Definitive MPLS Network Designs [Electronic resources]

Jim Guichard; François Le Faucheur; Jean-Philippe Vasseur


Multicast VPNs


This section quickly reviews the basic components of IP Multicast and then looks at how multicast can be provided to customers using a Layer 3 MPLS VPN service.

IP Multicast provides a mechanism for transporting data from a single source to many recipients (called receivers). This is in contrast to IP Unicast, in which a packet is sent from a single source to a single recipient. The destination address of a multicast packet is taken from the Internet Assigned Numbers Authority (IANA) 224.0.0.0 through 239.255.255.255 address block, and each address refers to a multicast group. In other words, multiple endpoints may belong to the group and receive traffic addressed to the group address. This is similar in concept to broadcast, except that delivery is restricted to the members of a group rather than to all hosts on a given subnet.
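
As a quick illustration, the multicast block can be checked with a minimal Python sketch using the standard ipaddress module (the two addresses are the ones used in the examples later in this section):

import ipaddress

# 224.0.0.0/4 covers 224.0.0.0 through 239.255.255.255, the IANA multicast block.
MULTICAST_BLOCK = ipaddress.ip_network("224.0.0.0/4")

for addr in ("239.192.0.7", "172.27.69.52"):
    ip = ipaddress.ip_address(addr)
    # is_multicast is equivalent to testing membership in 224.0.0.0/4.
    print(addr, ip.is_multicast, ip in MULTICAST_BLOCK)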

A multicast source, such as a server delivering a multicast-capable application, transmits multicast packets using the multicast group address, and receivers listen for traffic that is addressed to the group of which they are members. These packets are forwarded across the network using a distribution tree. Each network element in the path from source to receiver(s) is responsible for replicating the original packet at each branch of the tree. Only a single copy of the original packet is forwarded across any particular link in the network, thus creating an efficient distribution tree for many receivers. There are two different types of distribution trees: source trees and shared trees.


Source Distribution Multicast Trees


A source tree allows a source host of a particular multicast group to be located at the "root" of the tree while the receivers are found at the ends of the branches. Multicast packets travel from the source host down the tree toward the receivers. Multicast forwarding state is maintained for the source tree using the notation {S, G}, where S is the source IP address and G is the group.

Figure 1-18 shows a source tree in which host 172.27.69.52 sends multicast packets down the tree to destination group 239.192.0.7. The {S, G} state for this multicast stream is therefore {172.27.69.52, 239.192.0.7}.


Figure 1-18. IP Multicast Source Tree

Source trees are also sometimes called Shortest Path Trees (SPTs) because the path between source and receivers is the shortest available path. This means that a separate source tree is present for every source that transmits multicast packets to a given group, and therefore {S, G} state exists in the network for each {Source, Group} pair. So even though source trees provide optimal routing, it comes at a price: additional multicast state within the network.
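
To make the {S, G} notation concrete, a router's multicast forwarding state can be pictured as a table keyed on {S, G}, with an incoming interface and an outgoing interface list per entry. The following minimal Python sketch uses the source and group from Figure 1-18; the interface names and the second source address are purely illustrative:

# One {S, G} entry per active source sending to the group (source trees).
mroute_table = {
    ("172.27.69.52", "239.192.0.7"): {"iif": "Ethernet0",
                                      "oif_list": ["Ethernet1", "Ethernet2"]},
}

# A second source sending to the same group needs its own {S, G} entry,
# which is why source trees add multicast state to the network.
mroute_table[("172.27.69.99", "239.192.0.7")] = {"iif": "Serial0",
                                                 "oif_list": ["Ethernet1"]}

print(len(mroute_table))  # 2 entries for 2 sources of the same group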

A receiver on a source tree must know the source of the tree, that is, its IP address. To join the source tree, a receiver must send an explicit {S, G} join toward the source (the local router actually sends this on behalf of the receiver).


IP Multicast Shared Trees


Shared trees have the root of the tree at a common point somewhere in the network. This common point is called the rendezvous point (RP) and is where receivers join so as to learn about active sources. Multicast sources transmit their traffic to the RP. When receivers join a group on the shared tree, the RP forwards packets from the source toward the receivers. In this way the RP is effectively a go-between where source and receiver come together.

Multicast forwarding entries for shared trees use a different notation, {*, G}, where * represents any source. Figure 1-19 shows a shared tree for group 239.192.0.7.


Figure 1-19. IP Multicast Shared Tree

[View full size image]

Shared trees are not as optimal as source trees because they do not follow the shortest path. Instead, all traffic from the source travels via the RP toward the receivers. However, the amount of state held in the network is less because the {*, G} notation removes the requirement for specific {S, G} entries. A further difference is that shared trees do not require the receivers to know the IP address of a particular multicast source. The only address needed by the receivers is that of the RP.
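
The saving in state shows up in how a forwarding lookup can be modeled: a router matches a specific {S, G} entry if one exists and otherwise falls back to the shared-tree {*, G} entry. A minimal sketch, reusing the dictionary style of the earlier example (interface names are illustrative):

def lookup(mroute_table, source, group):
    # Prefer a source-specific entry; otherwise use the shared-tree entry.
    return mroute_table.get((source, group)) or mroute_table.get(("*", group))

# With a shared tree, a single {*, G} entry covers every source for the group.
shared_tree_table = {
    ("*", "239.192.0.7"): {"iif": "toward_RP", "oif_list": ["Ethernet1"]},
}

print(lookup(shared_tree_table, "172.27.69.52", "239.192.0.7"))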

The shared tree shown in Figure 1-19 is a unidirectional tree. However, IP multicast also supports bidirectional trees, in which traffic may travel up and down the tree. This type of tree is useful when the tree has a large number of sources. Not all traffic needs to pass through the RP, because it can travel up and down the tree.


Protocol-Independent Multicast (PIM)


Clearly, IP Multicast needs a mechanism that can build multicast forwarding state in the network. The most common protocol in use today is Protocol-Independent Multicast (PIM). PIM uses the unicast routing table to check whether a multicast packet has arrived on the correct inbound interface, a process called Reverse Path Forwarding (RPF). PIM is independent of any particular unicast routing protocol because it bases its decisions solely on the contents of the unicast routing table, however that table was built.
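
The RPF check itself can be sketched in a few lines: the router looks up the packet's source in the unicast routing table and accepts the packet only if it arrived on the interface that the unicast table would use to reach that source. This sketch uses Python's ipaddress module for the longest-prefix match; the table contents and interface names are illustrative:

import ipaddress

# Illustrative unicast routing table: prefix -> interface used to reach that prefix.
unicast_table = {
    ipaddress.ip_network("172.27.69.0/24"): "Serial0",
    ipaddress.ip_network("0.0.0.0/0"): "Ethernet0",
}

def rpf_check(source, arrival_interface):
    src = ipaddress.ip_address(source)
    # Longest-prefix match against the unicast routing table.
    matches = [p for p in unicast_table if src in p]
    best = max(matches, key=lambda p: p.prefixlen)
    # The packet passes RPF only if it arrived on the interface toward the source.
    return unicast_table[best] == arrival_interface

print(rpf_check("172.27.69.52", "Serial0"))    # True: correct inbound interface
print(rpf_check("172.27.69.52", "Ethernet0"))  # False: fails RPF, packet is dropped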

PIM comes in two flavors: Dense Mode (PIM-DM) and Sparse Mode (PIM-SM).

PIM Dense Mode (PIM-DM)


PIM-DM has proven to be a fairly inefficient mode of operation, because it is based on the assumption that every subnet in the network has at least one receiver for a given {S, G} pair. This is clearly not the normal case.

Because of this assumption, PIM-DM floods all multicast packets to every part of the network. Each router that does not want to receive the multicast traffic is required to send a prune message back up the tree to prevent the traffic from being sent to it. Branches to which no receivers are attached are thus pruned from the tree.
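
The flood-and-prune behavior can be summarized in a short sketch: traffic is initially sent out of every interface except the one it arrived on, and branches are removed as prunes are received. The interface names below are illustrative:

def dense_mode_forward(arrival_interface, interfaces, pruned):
    # Flood out of every interface except the arrival interface and pruned branches.
    return [ifc for ifc in interfaces
            if ifc != arrival_interface and ifc not in pruned]

interfaces = ["Ethernet0", "Ethernet1", "Ethernet2"]
pruned = set()
print(dense_mode_forward("Ethernet0", interfaces, pruned))  # flood to Ethernet1, Ethernet2

pruned.add("Ethernet2")  # a downstream router with no receivers sends a prune
print(dense_mode_forward("Ethernet0", interfaces, pruned))  # only Ethernet1 remains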

PIM Sparse Mode (PIM-SM)


PIM-SM is far more efficient than PIM-DM because it does not rely on flooding to distribute multicast traffic. PIM-SM uses a pull model in which traffic is forwarded toward a given receiver only if specifically asked for, which requires an explicit join for the given multicast group. Initially, all receivers join the shared tree at the RP but can later join a source tree based on bandwidth thresholds that are defined in the network. This has the advantage of moving the receiver onto the optimal path to the source of the multicast traffic.
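
The switchover from the shared tree to the shortest-path source tree can be sketched as a simple threshold decision; the threshold value and traffic rates below are purely illustrative:

SPT_THRESHOLD_KBPS = 64  # illustrative bandwidth threshold for switching trees

def choose_tree(measured_rate_kbps):
    # Receivers first join the shared tree ({*, G}) at the RP; once traffic from a
    # source exceeds the configured threshold, the last-hop router joins the
    # source tree ({S, G}) to take the shortest path.
    if measured_rate_kbps > SPT_THRESHOLD_KBPS:
        return "source tree {S, G}"
    return "shared tree {*, G}"

print(choose_tree(10))    # a low-rate stream stays on the shared tree
print(choose_tree(2000))  # a high-rate stream triggers the switch to the source tree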


Source-Specific Multicast (SSM)


Source-Specific Multicast (SSM) builds a source tree directly, without first discovering the source via a shared tree. With SSM, the receiver (or, more precisely, its local router) learns the source address by some out-of-band means and requests the specific {S, G} channel directly (using a mechanism such as IGMP version 3 [IGMPv3], IGMPv3lite, or URL Rendezvous Directory [URD].) Therefore, a source tree is always built when using SSM, so shared trees and rendezvous points are not required when running SSM. This has the advantage of providing optimal routing between source and receiver without having to first discover the source from the RP.


Multicast Support Within a Layer 3 MPLS VPN


Reviewing the basic functionality of IP Multicast highlights an issue for an operator that wants to extend this service to its Layer 3 MPLS VPN customers: scale. Given the amount of multicast state that might be generated by each VPN customer, the service provider backbone (P network) would need to be engineered so that it could distribute and store all the IP multicast information for each customer. A further issue could be IP address conflicts between different customers.

IP tunneling (such as GRE) is one method of eliminating the customer multicast state from the P network, because the IP tunnels are overlaid across the MPLS/IP network. This also saves the service provider from having to run any IP Multicast protocols in the P network, because all packets are sent as unicast. However, this approach has several disadvantages, including the need for a full mesh of IP tunnels between CE routers. Multicast packet forwarding is also nonoptimal, and bandwidth is wasted because packets are replicated across all the IP tunnels. Furthermore, the number of tunnels introduces an operational and management overhead that is very difficult to control.

Another more scalable approach is documented in [mVPN], which introduces the concept of multicast domains, in which CE routers maintain PIM adjacencies with their local PE router instead of with all remote CE routers. This is the same concept as deployed with the Layer 3 MPLS VPN service, where only a local routing protocol adjacency is required rather than multiple ones with remote CE routers. Within a multicast domain, an end customer can maintain his or her existing multicast topology, configurations, and so on and transition to a multicast service provided by the Layer 3 MPLS VPN operator. In this model P routers do not hold any customer-specific multicast trees but instead hold a single group for that VPN, regardless of the number of multicast groups deployed by the end customer. Regardless of which multicast mode the service provider is using (PIM-SM, PIM-DM, SSM, and so on), the amount of state in the P network can be determined and is not dependent on the specifics of a given customer multicast deployment.


Multicast Domains


The concept of a multicast domain is realized by a set of multicast-enabled VRFs that can send multicast traffic to one another. This means that if a given CE router sends IP multicast packets toward its local PE router, that PE router can deliver the multicast traffic to all interested parties at remote CE routers. The multicast-enabled VRFs are called mVRFs.

A multicast domain essentially maps all customer multicast groups within a given VPN into a single unique global multicast group within the P network. The service provider administers this global multicast group. The mapping is achieved by encapsulating the original multicast packet in a GRE packet whose destination address is a multicast group that is known globally within the service provider P network and is associated with the given multicast domain. The source of the GRE packet is the ingress PE router through which the multicast packet first entered the Layer 3 MPLS VPN network. Therefore, a set of VPN customer multicast groups can be mapped into a single {S, G} or {*, G} entry in the service provider P network.
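
A minimal sketch of this many-to-one mapping follows. The VPN names, the second VPN's group, and the second customer group are illustrative; 239.192.10.1 and 239.194.0.1 are the addresses used later in Figure 1-22:

# Every customer multicast group inside a VPN maps to the single MDT group
# that the provider assigned to that VPN's multicast domain.
mdt_group_for_vpn = {"VPN_A": "239.192.10.1", "VPN_B": "239.192.10.2"}

def p_network_group(vpn, customer_group):
    # The customer group is irrelevant to the P network; only the VPN matters.
    return mdt_group_for_vpn[vpn]

print(p_network_group("VPN_A", "239.194.0.1"))  # -> 239.192.10.1
print(p_network_group("VPN_A", "239.194.0.2"))  # -> 239.192.10.1 (same MDT group)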

The P network is responsible for building a default multicast distribution tree (called the default MDT) between PE routers for each multicast domain it supports. The unique multicast group in the P network is called an MDT group. Every mVRF belongs to a default MDT. Figure 1-20 illustrates the multicast group concept and shows two separate VPNs, each with its own multicast group within the service provider P network.


Figure 1-20. Multicast Domains for VPN Multicast

[View full size image]


mVPN PIM Adjacencies


Various PIM adjacencies are formed within an mVPN environment. The first adjacency is held within each VRF that has multicast routing enabled. This adjacency runs from PE router to CE router. The customer multicast routing information, which is created by each PIM instance, is specific to the corresponding mVRF.

In addition to the PE-CE adjacency, the PE router forms a PIM adjacency with any remote PE routers that hold mVRFs belonging to the same multicast domain. This PIM adjacency is accessed via a Multicast Tunnel Interface (MTI), which uses GRE encapsulation and serves as the transport mechanism between mVRFs. This PIM adjacency is necessary to exchange the multicast routing information that is contained in the mVRFs and is specific to the customers attached to them.

The last PIM adjacency to be created is within the global PIM instance, that is, the PIM instance that runs in the service provider P network. The PE router maintains global PIM adjacencies with each of its IGP neighbors (P routers in most cases). The global PIM instance is used to create the MDTs that connect mVRFs.
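
A simple way to picture these three adjacency scopes on a single PE router is as separate adjacency sets, one per PIM instance. The neighbor and VPN names in this sketch are purely illustrative:

# Illustrative view of the PIM adjacencies held by one PE router.
pim_adjacencies = {
    # Per-mVRF PIM instance: adjacency with the locally attached CE router(s).
    ("mVRF", "VPN_A"): ["CE1"],
    # Adjacency over the Multicast Tunnel Interface (MTI) with every remote PE
    # whose mVRF belongs to the same multicast domain.
    ("MDT", "VPN_A"): ["PE2", "PE3"],
    # Global PIM instance: adjacencies with IGP neighbors in the P network.
    ("global", None): ["P1", "P2"],
}

for (scope, vpn), neighbors in pim_adjacencies.items():
    print(scope, vpn, neighbors)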

All these adjacencies are shown in Figure 1-21.


Figure 1-21. mVPN PIM Adjacencies

[View full size image]


Multicast Forwarding with mVPN


After all the necessary state for a given multicast domain has been established, the forwarding of multicast traffic can be divided into two categories: packets from the C network (C packets), received at the PE router interface associated with a given mVRF, and packets from the P network (P packets), received from other PE routers via a global multicast interface.

When C packets are received at a PE router, the following events take place:

A C packet arrives via a VRF interface that is associated with a given mVRF.

The C packet is replicated based on the contents of the outgoing interface list (olist) for the {S, G} or {*, G} entry in the mVRF. The olist may contain interfaces that are multicast-enabled within the same mVRF, in which case the packets are forwarded using normal multicast procedures. The olist also may contain a tunnel interface (MTI) that connects it to the multicast domain for the customer VPN.

If the olist contains a tunnel interface, the multicast packet is encapsulated using GRE. The packet's source is set to the BGP peering address of the PE router. The destination address is set to the MDT group address associated with the customer VPN.

The IP precedence value of the C packet is copied to the P packet.

The encapsulated packet is now considered a P packet in the global multicast routing instance.

The P packet is forwarded through the P network using standard multicast procedures. P routers are thus unaware of any multicast VPN activity and treat the P packets as native multicast packets.
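
This ingress behavior can be condensed into a short sketch. The field and interface names are illustrative; the addresses are the ones used in Figure 1-22 later in this section:

def ingress_pe_forward(c_packet, mvrf_olist, pe_bgp_address, mdt_group):
    # Replicate toward any local multicast-enabled interfaces in the olist.
    local_copies = [ifc for ifc in mvrf_olist if ifc != "MTI"]
    p_packet = None
    if "MTI" in mvrf_olist:
        # GRE-encapsulate the C packet into a P packet: outer source is the PE's
        # BGP peering address, outer destination is the VPN's MDT group, and the
        # IP precedence of the C packet is copied to the P packet.
        p_packet = {"outer_src": pe_bgp_address,
                    "outer_dst": mdt_group,
                    "precedence": c_packet["precedence"],
                    "payload": c_packet}
    return local_copies, p_packet

c_packet = {"src": "194.27.62.1", "dst": "239.194.0.1", "precedence": 5}
print(ingress_pe_forward(c_packet, ["Ethernet1", "MTI"], "10.1.1.11", "239.192.10.1"))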

After the P packet is sent to the P network at the ingress PE router, the following events take place:

The P packet arrives at the egress PE router interface in the global multicast domain.

The P packet {S, G} or {*, G} entry is determined within the global multicast routing table.

The P packet is replicated out of any P network interfaces that appear in the olist of the {S, G} or {*, G} entry.

If the {S, G} or {*, G} entry has the Z flag set (signifying that multicast packets are received or transmitted on an MTI), this indicates to the receiving PE router that this is an MDT group packet that therefore must be de-encapsulated to reveal the original C packet.

The destination mVRF of the C packet is determined from the MDT group address in the P packet. The incoming MTI interface is resolved from the MDT group address.

The C packet is presented to the target mVRF, with the appropriate MTI set as the incoming interface. The RPF check treats the tunnel interface as valid; in other words, packets from this source are expected to arrive via the MTI.

The C packet is treated as a native multicast packet in the VPN network. The C packet is replicated to all multicast-enabled interfaces in the mVRF that appear in the olist for the {S, G} or {*, G} entry.
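
The corresponding egress behavior can be sketched in the same style as the ingress sketch. The mVRF, tunnel, and interface names are illustrative:

# Illustrative mapping from MDT group to the local mVRF and its tunnel interface (MTI).
mvrf_for_mdt_group = {"239.192.10.1": ("VPN_A", "Tunnel0")}

def egress_pe_forward(p_packet, mvrf_olists):
    # The MDT group in the P packet identifies the target mVRF and the incoming MTI.
    vpn, mti = mvrf_for_mdt_group[p_packet["outer_dst"]]
    c_packet = p_packet["payload"]  # de-encapsulation reveals the original C packet
    # Replicate the C packet to the mVRF interfaces listed for its {S, G} entry.
    oif_list = mvrf_olists[vpn].get((c_packet["src"], c_packet["dst"]), [])
    return vpn, mti, oif_list

mvrf_olists = {"VPN_A": {("194.27.62.1", "239.194.0.1"): ["Ethernet2"]}}
p_packet = {"outer_src": "10.1.1.11", "outer_dst": "239.192.10.1",
            "payload": {"src": "194.27.62.1", "dst": "239.194.0.1"}}
print(egress_pe_forward(p_packet, mvrf_olists))  # ('VPN_A', 'Tunnel0', ['Ethernet2'])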

All this activity is shown in Figure 1-22.


Figure 1-22. mVPN Multicast Packet Forwarding

[View full size image]

Figure 1-22 shows a source with IP address 194.27.62.1 transmitting to multicast group 239.194.0.1 within the customer VPN. When the multicast C packets from this source arrive at the ingress PE router, a P packet is created by encapsulating the C packet using GRE. The source IP address of the P packet is set to 10.1.1.11 (the BGP peering address of this PE router), and the destination address is set to 239.192.10.1 (the MDT group for this VPN in the P network).

