Multicast over IPSec VPNs

Recall from Chapter 2, "IPSec Overview," that IPSec protection of GRE between the VPN sites encapsulates the traffic in GRE, which results in a unicast frame. The multicast traffic between sites is merely payload for the GRE tunnel that is protected by IPSec. The multicast processes are associated with the tunnel interfaces and hidden from the underlying IPSec processes.

Multicast over IPSec-Protected GRE

The most common reason for deploying IPSec VPNs over GRE tunnels is to support dynamic routing protocols that use IP multicast, such as EIGRP and OSPF, between the sites of the VPN. Most multicast applications are essentially point-to-multipoint: there is a single source and many receivers. Clearly, the hub-and-spoke network architecture shown in Figure 8-12 will serve this application well, assuming that the source of the multicast traffic is co-located with the hub.
Figure 8-12. Multicast over IPSec-encrypted GRE tunnels
The configuration of multicast on the hub-and-spoke topology is shown in Example 8-2. The configuration of a basic multicast capability on IOS is rather simple. It is important to understand that most multicast protocols rely upon the router's existing unicast forwarding information base (FIB), derived from routing protocols such as an IGP or BGP. Multicast protocols such as Protocol Independent Multicast (PIM) use the FIB to determine where to send multicast Join messages based on Reverse Path Forwarding (RPF), which selects the shortest path back to the source. First, you enable multicast globally on the router; then you enable multicast on each of the eligible interfaces (that is, the GRE tunnel interfaces). In this case, simply enable multicast on the tunnel interfaces at the hub and spoke VPN gateways. PIM sparse mode (PIM-SM) has been configured on the GRE tunnel interfaces in our example. PIM dense mode (PIM-DM) could also be used as the multicast adjacency protocol, but it is not recommended because dense mode sends multicast traffic to a site irrespective of whether the site has receivers.

Example 8-2. Multicast Configuration on the GRE/IPSec Hub

vpn-gw1-west#show run interface Tunnel 1
interface Tunnel1
 description Tunnel to spoke-1-west
 ip address 10.2.2.1 255.255.255.252
 ip pim sparse-mode
 tunnel source 9.1.1.10
 tunnel destination 9.1.1.22
 tunnel protection ipsec profile gre

vpn-gw1-west#show run interface Tunnel 2
interface Tunnel2
 description Tunnel to spoke-2-west
 ip address 10.2.2.5 255.255.255.252
 ip pim sparse-mode
 tunnel source 9.1.1.10
 tunnel destination 9.1.1.138
 tunnel protection ipsec profile gre

vpn-gw1-west#show run interface FastEthernet0/1
interface FastEthernet0/1
 description VPN RP interface
 ip address 10.1.1.1 255.255.255.0
 ip pim sparse-dense-mode

In this configuration, the hub's Ethernet interface (10.1.1.1) has been designated as the rendezvous point (RP) for the multicast VPN. The topology assumes that the hub router performs the multicast replication. This places a significant burden on the hub router, as it must perform IPSec protection, GRE encapsulation and decapsulation, manage the routing protocol on each GRE tunnel interface, and replicate multicast frames onto each tunnel serving a downstream multicast receiver. The combination of these functions typically limits the scalability of the network because of processing constraints at the hub.
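Example 8-2 shows only the interface-level commands; the global multicast and RP-related commands are implied rather than shown. A minimal sketch of what they might look like follows, assuming a static RP assignment (the hub could equally advertise the RP via Auto-RP, which the sparse-dense-mode interface would support). Treat these lines as an illustration rather than part of the chapter's configuration:

! On the hub and on every spoke: enable multicast routing globally
ip multicast-routing
!
! On the spokes: point PIM-SM at the hub's Ethernet interface as a static RP
ip pim rp-address 10.1.1.1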
To alleviate the burden of replicating and forwarding multicast streams on the VPN hub, some VPN architectures leverage the spoke-to-spoke topology of the GRE/IPSec tunnels to conserve packet-processing resources at the hub site. Assume that a full-mesh IPSec VPN is justified between the potential multicast application participants. In that case, we would build a full-mesh IP tunneled network in which each IP tunnel is encrypted with IPSec.

Multicast on Full-Mesh Point-to-Point GRE/IPSec Tunnels

The set of full-mesh IP tunnels may be established in one of two ways. The first approach is to statically build an IP/GRE tunnel between each pair of VPN gateways serving multicast endpoints. A statically configured IPSec proxy builds an SA that encrypts the associated GRE/IPSec tunnel. The GRE/IPSec tunnels establish an IPSec connection between the spokes only if there is data that must pass over the tunnel. Some designers may be tempted to build GRE/IPSec tunnels with static routing and no keepalives between the spokes in order to minimize the number of active GRE/IPSec connections on each spoke.

However, once you configure multicast on the GRE tunnel interfaces, the multicast processes attempt to find peers with which to form multicast adjacencies. Adjacencies are built by multicasting Hello messages on each multicast-enabled interface to identify potential peers, as described in RFC 2362. Once the peers are discovered, each adjacency is sustained by periodic Hello messages. Figure 8-13 provides an example in which the network architecture leverages spoke-to-spoke GRE/IPSec tunnels to mitigate the transit traffic at the hub site.
Figure 8-13. Multicast Implications for Temporal GRE/IPSec Full Mesh
The multicast adjacency process sustains every GRE/IPSec tunnel in order to validate each link as a viable path. If resource conservation was a primary concern at any of the GRE/IPSec nodes, the multicast Hello protocol has just violated that assumption, because every possible GRE/IPSec path is established. PIM-SM avoids sending multicast streams until explicit Joins are received, whereas the PIM-DM multicast processes flood and then prune the multicast flows back to the minimal distribution tree required. Nevertheless, both dense-mode and sparse-mode multicast use multicast Hello packets to sustain neighbors.

The requirement to build a static GRE/IPSec tunnel for each potential multicast peer obviously limits the scalability of the architecture. Every spoke must participate in PIM Hello exchanges with every other spoke; therefore, every GRE/IPSec tunnel will be active in order to maintain the PIM adjacencies. Scalability is further constrained by the fact that each spoke must establish its multicast peers simultaneously upon booting. Obviously, we need to find more efficient topologies for multicast.

In this scenario, the configuration of multicast on the GRE tunnel interfaces forces the establishment of all the GRE/IPSec tunnels. Each spoke has assumed the role of a "hub" in this persistent full mesh. Perhaps the spoke has sufficient resources to manage the persistent full mesh; however, that is rarely the case once the VPN becomes sufficiently large. If you look at the state of a spoke before and after the application of multicast routing, you can see that all of the GRE/IPSec tunnels transition to an active state. Example 8-3 shows the configuration of the spoke used in the GRE/IPSec full mesh with the addition of multicast routing.

Example 8-3. Spoke GRE/IPSec Temporal Full Mesh

spoke-1-west#show run interface tunnel 1
interface Tunnel1
 description Tunnel to vpn-gw1-west
 ip address 10.2.2.2 255.255.255.252
 ip pim sparse-mode
 tunnel source 9.1.1.22
 tunnel destination 9.1.1.10
 tunnel protection ipsec profile dmvpn

spoke-1-west#show run interface tunnel 2
interface Tunnel2
 ip address 10.2.2.9 255.255.255.252
 ip pim sparse-mode
 tunnel source 9.1.1.22
 tunnel destination 9.1.1.138
 tunnel protection ipsec profile dmvpn

spoke-1-west#show run | include ip route
! Default route to the backbone
ip route 0.0.0.0 0.0.0.0 9.1.1.21
! Generic route for VPN via Hub
ip route 10.0.0.0 255.0.0.0 10.2.2.1
! Explicit route for VPN Subnet at spoke-2-west
ip route 10.0.66.0 255.255.255.0 10.2.2.10

Example 8-4 shows the state of the GRE/IPSec tunnels once multicast is applied.

Example 8-4. Spoke GRE/IPSec and Multicast State on Temporal Full Mesh

spoke-1-west#show ip pim neighbor
PIM Neighbor Table
Neighbor Address   Interface   Uptime/Expires      Ver   DR Prio/Mode
10.2.2.1           Tunnel1     07:35:34/00:01:43   v2    1 / S
10.2.2.10          Tunnel2     07:27:24/00:01:30   v2    1 / S

spoke-1-west#show crypto isakmp sa
dst             src             state      conn-id  slot
9.1.1.22        9.1.1.138       QM_IDLE         27     0
9.1.1.10        9.1.1.22        QM_IDLE         19     0

You can see that the entire set of GRE tunnels will be active in order to pass multicast Hello packets. The example shows that Tunnel2, between SPOKE-1-WEST and SPOKE-2-WEST, remains active because the PIM process refreshes the neighbor adjacency every 30 seconds. The multicast process assesses every possible path to determine that path's relevance to each potential multicast source. This places a tremendous burden on the CPE, especially in large full-mesh networks.
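The 30-second refresh referenced above corresponds to the default PIM Hello (query) interval on IOS; the advertised neighbor holdtime is 3.5 times that interval, which matches the Expires timers counting down from roughly 105 seconds in Example 8-4. The interval is configurable per interface. A minimal sketch follows; the value shown is simply the default, and lengthening it only slows, rather than prevents, the adjacency maintenance that keeps every tunnel active:

interface Tunnel2
 ip pim sparse-mode
 ! 30 seconds is the IOS default; the neighbor holdtime is 3.5 times this value
 ip pim query-interval 30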
DMVPN and Multicast

Recall from Chapter 7, "Auto-Configuration Architectures for Site-to-Site IPSec VPNs," that the DMVPN architecture was designed to accommodate resource-constrained spokes in large temporal full-mesh networks. Next, you'll consider the implications of applying the multicast process on the mGRE interface used in the DMVPN architecture. At this point, you know that the GRE tunnels are capable of carrying multicast traffic such as OSPF and EIGRP routing protocol packets. When OSPF or EIGRP processes are assigned to the mGRE interface in DMVPN, you need to prevent their multicast traffic from forcing the establishment of tunnels to all of the spokes. Note that the configuration of the tunnel interface maps multicast traffic to the hub. Example 8-5 highlights the fact that the Next Hop Resolution Protocol (NHRP) maps any multicast traffic toward the NHRP server.

Example 8-5. Multicast Mapping on DMVPN Multipoint Interfaces

!
interface Tunnel0
 ip address 10.2.0.2 255.255.255.0
 no ip redirects
 ip mtu 1400
 ip pim sparse-mode
 ip nhrp authentication cisco
 ip nhrp map multicast 9.1.1.10
 ip nhrp map 10.2.0.1 9.1.1.10
 ip nhrp network-id 100
 ip nhrp holdtime 300
 ip nhrp nhs 9.1.1.10
 ip nhrp nhs 10.2.0.1
 ip ospf network broadcast
 ip ospf priority 0
 delay 1000
 tunnel source Serial1/0:0
 tunnel mode gre multipoint
 tunnel key 100
 tunnel protection ipsec profile dmvpn
end

The PIM process assigned to the multipoint interface uses multicast Hello packets to build the PIM adjacency. The multicast Hellos are directed only to the hub; therefore, the spoke-to-spoke PIM adjacency is not established. The only time a spoke-to-spoke GRE tunnel is initiated is when unicast packets are sent. The architecture appears to be a temporal full mesh for unicast flows and a hub-and-spoke architecture for multicast flows. Figure 8-14 shows the flow topology for both multicast and unicast traffic in the DMVPN network.
Figure 8-14. Multicast and Unicast Flow over a DMVPN Topology
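Example 8-5 shows only the spoke side of the multicast mapping. For completeness, a minimal sketch of the corresponding hub-side mGRE interface follows; the ip nhrp map multicast dynamic command is what causes the hub to replicate multicast (PIM Hellos and routing-protocol packets) to every spoke that registers dynamically. The interface numbering and hub tunnel source shown here are assumptions rather than part of the chapter's examples:

! Hub mGRE interface (assumed configuration; not shown in Example 8-5)
interface Tunnel0
 ip address 10.2.0.1 255.255.255.0
 ip pim sparse-mode
 ip nhrp authentication cisco
 ! Replicate multicast to every dynamically registered spoke
 ip nhrp map multicast dynamic
 ip nhrp network-id 100
 ip nhrp holdtime 300
 tunnel source Serial1/0:0
 tunnel mode gre multipoint
 tunnel key 100
 tunnel protection ipsec profile dmvpn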
Unfortunately, the dynamic establishment of spoke-to-spoke GRE/IPSec tunnels cannot be leveraged for multicast traffic. Nevertheless, the architecture scales reasonably well for large networks unless the hub is heavily burdened with multicast replication. Typically, multicast sources (for example, content servers) reside at the hub site anyway; therefore, multicast replication at the hub site is unavoidable. Of course, the hub is burdened with routing adjacencies and IPSec peers in addition to the multicast replication. Fortunately, the number of GRE/IPSec connections is minimized at the spoke.

We do find at least one exception to this paradigm. When a spoke serves a multicast source, the receivers at the other spokes force their spoke gateways to join the multicast tree using a unicast PIM Join message. This message is sent directly between the spokes, forcing the establishment of a spoke-to-spoke GRE/IPSec connection. The spoke receiving the PIM Join for the multicast source is able to forward multicast frames only into the multipoint tunnel, which subsequently directs the multicast frames to the hub. From a scalability perspective, the spoke servicing a multicast source must be prepared for incoming GRE/IPSec connections from any spoke hosting a receiver of the multicast group. If the receivers are waiting for the multicast source, the spoke hosting the source is likely to receive simultaneous PIM Joins from many spokes hosting receivers. Effectively, the spoke becomes a GRE/IPSec hub for the multicast source and must be prepared to handle the simultaneous initialization of many incoming GRE/IPSec connections. Filtering PIM-SM Joins from all sites except the hub prevents the simultaneous initiation of GRE/IPSec connections to a spoke hosting a multicast source. Because the multicast packets are forwarded only to the VPN hub site, the spoke is not burdened with multicast packet replication.

Multicast Group Security

The previous sections addressed methods of "hiding" the multicast from the native IPSec processing through tunnels and virtual IPSec interfaces. The IETF has issued RFC 3740, "The Multicast Group Security Architecture," as the reference for establishing native multicast security. The new architecture establishes the notion of a Group Security Association (GSA) that is valid among all the peers that belong to the same group. The GSA eliminates the necessity of establishing a full mesh of peer-to-peer relationships (tunnels, IKE, and IPSec SAs) between the potential multicast source and destinations. The development of native multicast encryption methods will alleviate the requirement to "hide" the multicast frames from the encryption processes.

Note: The introduction of a GSA does not necessarily preclude the use of an IPSec SA at the same time. In fact, a GSA is a concept that includes all of the SAs for a group, which may include IPSec SAs.

The group security model is based on the premise that a source cannot know the intended recipients a priori. The potential sources and receivers must identify themselves as members of a group. Members of the same group are afforded a common level of trust such that they may exchange data among themselves. Next, you'll examine how the members of a group are identified.

Group Security Key Management

Each member of a security group is provided a set of credentials that allow the member to authenticate its right to join the group.
To enable this process, a common reference point is needed where all the members may convene. The Group Domain of Interpretation (GDOI) protocol (RFC 3547) defines the means by which a group member authenticates with a Group Controller/Key Server (GCKS). Once authenticated and authorized by the GCKS, the group member establishes a secure communication channel over which it exchanges policy and key material with the GCKS. The GCKS may provide a common key to the group member so that the member can encrypt and decrypt data from any of the other group members. Likewise, the GCKS may rekey the group or revoke keys from members in order to control the validity of group members. In GDOI, the secure communication channel established between group members and the GCKS reuses IKE Phase 1. Recall from Chapter 2 that IKE Phase 2 is used to establish the point-to-point IPSec SAs. The GDOI protocol replaces the IKE Phase 2 process in order to accommodate the secure distribution of group keys. Figure 8-15 highlights the network architecture associated with the GCKS and the group members.
Figure 8-15. Group Key Management Architecture
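On Cisco IOS, the GDOI group-member role described here is what later shipped as GET VPN. A minimal group-member sketch follows, purely as an illustration of the registration flow; the group name, identity number, key-server address, and interface are assumptions rather than part of this chapter's examples:

! Register with the GCKS (key server) and download the group policy and keys
crypto gdoi group mcast-group
 identity number 1234
 server address ipv4 10.1.1.1
!
! Apply the downloaded group SA through a GDOI crypto map
crypto map gdoi-map 10 gdoi
 set group mcast-group
!
interface FastEthernet0/0
 crypto map gdoi-map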
We now have the infrastructure in place to identify group members and distribute key material to the appropriate group members.

Group Security Association

The key management infrastructure allows members to synchronously receive and process traffic flows with a common key. All the members receive the same key for traffic associated with the group identifier; therefore, any member may encrypt data using the key (and decrypt the traffic using the same key). You must now determine the appropriate key to use to encrypt traffic. The encrypting router must associate a multicast group (or range of multicast groups) with a group key. The multicast traffic is encrypted using a group key distributed as part of the Group Security Association (GSA). The Encapsulating Security Payload (ESP) provides confidentiality for the original IP packet and payload, while the IP source address and multicast group address are preserved in the outer IP header. Figure 8-16 shows the packet structure of the multicast security encrypted packet.
Figure 8-16. Multicast Security Payload
As the packet traverses the multicast-enabled IP core network, it may be replicated according to the multicast distribution tree (MDT) built using traditional multicast protocols such as PIM. The encrypted packet arrives at the decrypting router, which recognizes the GSA. The decrypting router may use a set of criteria to associate the appropriate group key, using the most specific match, as follows:
- Security Parameter Index, Destination, Source
- Security Parameter Index, Destination
- Security Parameter Index
At this point, decryption and decapsulation occur, and the multicast packet continues on the MDT in the clear. Figure 8-17 shows the topological association of two GSAs among various group members.
Figure 8-17. Multicast GSA Data Plane Association
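To make the association between a GSA and the traffic it protects more concrete, the following is a minimal key-server-side sketch, again using the IOS GET VPN implementation of GDOI. The group name, ACL name, IPSec profile, RSA key label, and multicast range are illustrative assumptions only; the point is that the policy mapping a multicast range to the group key is defined centrally on the key server and pushed to every group member:

! Traffic to be protected by the group SA: an illustrative multicast range
ip access-list extended mcast-traffic
 permit ip any 239.192.0.0 0.0.255.255
!
crypto gdoi group mcast-group
 identity number 1234
 server local
  rekey authentication mypubkey rsa gdoi-rekey
  sa ipsec 1
   profile gdoi-profile
   match address ipv4 mcast-traffic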
Multicast Group Security Summary

The multicast security model enables a much more efficient method for distributing encrypted multicast traffic by leveraging the multicast replication of the core IP network. The encrypting gateway is responsible for encrypting the multicast traffic and forwarding it to the core; it is no longer responsible for replicating the multicast packet to every receiving VPN gateway. The group security association allows any valid member of the group to encrypt or decrypt traffic, so the number of security associations on the VPN gateways is minimized. Keep in mind that the group security association does not remove the need for IPSec SAs to accommodate unicast traffic flows. The primary motivation for using multicast security is to provide an efficient means of encrypting multicast traffic while leaving replication of the encrypted packets to the core network.

Multicast Encryption Summary

Our analysis of multicast encryption has shown that the overlay tunnel topologies have a significant impact on the creation of the multicast distribution trees. The peer-to-peer nature of IPSec fundamentally conflicts with the communication paradigm induced by multicast. The IETF's effort to improve the relationship between multicast and encryption methods has led to a group security model that is fundamentally different from the peer-to-peer model used by IPSec. Research continues on how to improve the relationship between unicast and multicast security using a common security infrastructure.