draft-ietf-mboned-dc-deploy-04.txt | draft-ietf-mboned-dc-deploy-05.txt

MBONED                                                        M. McBride
Internet-Draft                                                    Huawei
Intended status: Informational                              O. Komolafe
Expires: September 12, 2019                              Arista Networks
                                                          March 11, 2019

              Multicast in the Data Center Overview
                 draft-ietf-mboned-dc-deploy-05

Abstract
The volume and importance of one-to-many traffic patterns in data
centers is likely to increase significantly in the future.  Reasons
for this increase are discussed and then attention is paid to the
manner in which this traffic pattern may be judiciously handled in
data centers.  The intuitive solution of deploying conventional IP
multicast within data centers is explored and evaluated.  Thereafter,
a number of emerging innovative approaches are described before a
skipping to change at page 1, line 38
Internet-Drafts are working documents of the Internet Engineering
Task Force (IETF).  Note that other groups may also distribute
working documents as Internet-Drafts.  The list of current Internet-
Drafts is at https://datatracker.ietf.org/drafts/current/.

Internet-Drafts are draft documents valid for a maximum of six months
and may be updated, replaced, or obsoleted by other documents at any
time.  It is inappropriate to use Internet-Drafts as reference
material or to cite them other than as "work in progress."
This Internet-Draft will expire on September 12, 2019.
Copyright Notice

Copyright (c) 2019 IETF Trust and the persons identified as the
document authors.  All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal
Provisions Relating to IETF Documents
(https://trustee.ietf.org/license-info) in effect on the date of
publication of this document.  Please review these documents
skipping to change at page 2, line 23
2.  Reasons for increasing one-to-many traffic patterns . . . . .   3
  2.1.  Applications  . . . . . . . . . . . . . . . . . . . . . .   3
  2.2.  Overlays  . . . . . . . . . . . . . . . . . . . . . . . .   5
  2.3.  Protocols . . . . . . . . . . . . . . . . . . . . . . . .   5
3.  Handling one-to-many traffic using conventional multicast . .   6
  3.1.  Layer 3 multicast . . . . . . . . . . . . . . . . . . . .   6
  3.2.  Layer 2 multicast . . . . . . . . . . . . . . . . . . . .   6
  3.3.  Example use cases . . . . . . . . . . . . . . . . . . . .   8
  3.4.  Advantages and disadvantages  . . . . . . . . . . . . . .   9
4.  Alternative options for handling one-to-many traffic  . . . .   9
  4.1.  Minimizing traffic volumes  . . . . . . . . . . . . . . .   9
  4.2.  Head end replication  . . . . . . . . . . . . . . . . . .  10
  4.3.  BIER  . . . . . . . . . . . . . . . . . . . . . . . . . .  11
  4.4.  Segment Routing . . . . . . . . . . . . . . . . . . . . .  12
5.  Conclusions . . . . . . . . . . . . . . . . . . . . . . . . .  12
6.  IANA Considerations . . . . . . . . . . . . . . . . . . . . .  12
7.  Security Considerations . . . . . . . . . . . . . . . . . . .  13
8.  Acknowledgements  . . . . . . . . . . . . . . . . . . . . . .  13
9.  References  . . . . . . . . . . . . . . . . . . . . . . . . .  13
  9.1.  Normative References  . . . . . . . . . . . . . . . . . .  13
  9.2.  Informative References  . . . . . . . . . . . . . . . . .  13
Authors' Addresses  . . . . . . . . . . . . . . . . . . . . . . .  15
1.  Introduction

The volume and importance of one-to-many traffic patterns in data
skipping to change at page 3, line 48
disseminated to workers distributed throughout the data center which
may be subsequently polled for status updates.  The emergence of such
applications means there is likely to be an increase in one-to-many
traffic flows with the increasing dominance of East-West traffic.
The TV broadcast industry is another potential future source of
applications with one-to-many traffic patterns in data centers.  The
requirement for robustness, stability and predictability has meant
the TV broadcast industry has traditionally used TV-specific
protocols, infrastructure and technologies for transmitting video
signals between end points such as cameras, monitors, mixers,
graphics devices and video servers.  However, the growing cost and
complexity of supporting this approach, especially as the bit rates
of the video signals increase due to demand for formats such as
4K-UHD and 8K-UHD, means there is a consensus that the TV broadcast
industry will transition from industry-specific transmission formats
(e.g. SDI, HD-SDI) over TV-specific infrastructure to using IP-based
infrastructure.  The development of pertinent standards by the SMPTE,
along with the increasing performance of IP routers, means this
transition is gathering pace.  A possible outcome of this transition
will be the building of IP data centers in broadcast plants.  Traffic
flows in the broadcast industry are frequently one-to-many and so if
IP data centers are deployed in broadcast plants, it is imperative
that this traffic pattern is supported efficiently in that
infrastructure.  In fact, a pivotal consideration for broadcasters
considering transitioning to IP is the manner in which these
one-to-many traffic flows will be managed and monitored in a data
center with an IP fabric.
One of the few success stories in using conventional IP multicast has
been for disseminating market trading data.  For example, IP
multicast is commonly used today to deliver stock quotes from the
stock exchange to financial services providers and then to the stock
analysts or brokerages.  The network must be designed with no single
point of failure and in such a way that the network can respond in a
deterministic manner to any failure.  Typically, redundant servers
(in a primary/backup or live-live mode) send multicast streams into
the network, with diverse paths being used across the network.
Another critical requirement is reliability and traceability;
regulatory and legal requirements mean that the producer of the
market data may need to know exactly where the flow was sent and be
able to prove conclusively that the data was received within agreed
SLAs.  The stock exchange generating the one-to-many traffic and the
stock analysts/brokerages that receive the traffic will typically
have their own data centers.  Therefore, the manner in which
one-to-many traffic patterns are handled in these data centers is
extremely important, especially given the requirements and
constraints mentioned.
Many data center cloud providers provide publish and subscribe
applications.  There can be numerous publishers and subscribers and
many message channels within a data center.  With publish and
subscribe servers, a separate message is sent to each subscriber of a
publication.  With multicast publish/subscribe, only one message is
sent, regardless of the number of subscribers.  In a publish/
subscribe system, client applications, some of which are publishers
and some of which are subscribers, are connected to a network of
message brokers that receive publications on a number of topics, and
skipping to change at page 6, line 13 skipping to change at page 6, line 13
supports multicast.  RIFT (Routing in Fat Trees) is a new protocol
being developed to work efficiently in DC CLOS environments and also
is being specified to support multicast addressing and forwarding.
3.  Handling one-to-many traffic using conventional multicast

3.1.  Layer 3 multicast
PIM is the most widely deployed multicast routing protocol and so,
unsurprisingly, is the primary multicast routing protocol considered
for use in the data center.  There are three popular modes of PIM
that may be used: PIM-SM [RFC4601], PIM-SSM [RFC4607] or PIM-BIDIR
[RFC5015].  It may be said that these different modes of PIM trade
off the optimality of the multicast forwarding tree against the
amount of multicast forwarding state that must be maintained at
routers.  SSM provides the most efficient forwarding between sources
and receivers and thus is most suitable for applications with
one-to-many traffic patterns.  State is built and maintained for each
(S,G) flow.  Thus, the amount of multicast forwarding state held by
routers in the data center is proportional to the number of sources
and groups.  At the other end of the spectrum, BIDIR is the most
efficient shared tree solution as one tree is built for all flows,
therefore minimizing the amount of state.  This state reduction is at
the expense of the optimal forwarding path between sources and
receivers.  This use of a shared tree makes BIDIR particularly
well-suited for applications with many-to-many traffic patterns,
given that the amount of state is uncorrelated with the number of
sources.  SSM and BIDIR are optimizations of PIM-SM.  PIM-SM is the
most widely deployed multicast routing protocol.  PIM-SM can also be
the most complex.  PIM-SM relies upon an RP (Rendezvous Point) to set
up the multicast tree and subsequently there is the option of
switching to the SPT (shortest path tree), similar to SSM, or staying
on the shared tree, similar to BIDIR.
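As a rough, hypothetical illustration of this trade-off (ignoring
real PIM state details such as outgoing interface lists and timers),
the forwarding state a router holds under each mode can be sketched
as:

```python
def ssm_state_entries(num_sources: int, num_groups: int) -> int:
    """PIM-SSM keeps one (S,G) entry per source per group: the most
    efficient trees, at the cost of the most state."""
    return num_sources * num_groups


def bidir_state_entries(num_groups: int) -> int:
    """PIM-BIDIR keeps one shared tree per group, so state is
    independent of the number of sources."""
    return num_groups


# e.g. 100 sources each sending to the same 50 groups:
# SSM routers may hold 100 * 50 = 5000 entries, BIDIR routers only 50.
```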
3.2.  Layer 2 multicast
With IPv4 unicast address resolution, the translation of an IP
address to a MAC address is done dynamically by ARP.  With multicast
address resolution, the mapping from a multicast IPv4 address to a
multicast MAC address is done by assigning the low-order 23 bits of
the multicast IPv4 address to fill the low-order 23 bits of the
multicast MAC address.  Each IPv4 multicast address has 28 unique
bits (the multicast address range is 224.0.0.0/4), therefore mapping
a multicast IP address to a MAC address ignores 5 bits of the IP
address.  Hence, groups of 32 multicast IP addresses are mapped to
the same MAC address, and so a multicast MAC address cannot be
uniquely mapped to a multicast IPv4 address.  Therefore, planning is
required within an organization to choose IPv4 multicast addresses
judiciously in order to avoid address aliasing.  When sending IPv6
multicast packets on an Ethernet link, the corresponding destination
MAC address is a direct mapping of the last 32 bits of the 128 bit
IPv6 multicast address into the 48 bit MAC address.  It is possible
for more than one IPv6 multicast address to map to the same 48 bit
MAC address.
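The mapping described above can be sketched in a few lines of Python
(the function names are illustrative, not from any standard library):

```python
def ipv4_multicast_mac(addr: str) -> str:
    """Map an IPv4 multicast address to its Ethernet MAC address.

    The low-order 23 bits of the IP address fill the low-order 23
    bits of the fixed 01:00:5e prefix; the remaining 5 variable bits
    of the IP address are discarded, so 32 IP addresses alias to the
    same MAC address.
    """
    octets = [int(o) for o in addr.split(".")]
    assert 224 <= octets[0] <= 239, "not an IPv4 multicast address"
    low23 = ((octets[1] & 0x7F) << 16) | (octets[2] << 8) | octets[3]
    return "01:00:5e:%02x:%02x:%02x" % (
        (low23 >> 16) & 0xFF, (low23 >> 8) & 0xFF, low23 & 0xFF)


def ipv6_multicast_mac(last32: int) -> str:
    """Map the last 32 bits of an IPv6 multicast address into the
    33:33:xx:xx:xx:xx multicast MAC range."""
    return "33:33:%02x:%02x:%02x:%02x" % (
        (last32 >> 24) & 0xFF, (last32 >> 16) & 0xFF,
        (last32 >> 8) & 0xFF, last32 & 0xFF)
```

For example, 224.1.1.1 and 225.1.1.1 (which differ only in the
discarded high-order bits) both yield 01:00:5e:01:01:01, exhibiting
the address aliasing the paragraph above warns about.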
The default behaviour of many hosts (and, in fact, routers) is to
skipping to change at page 7, line 28
the data link layer which would add the layer 2 encapsulation, using
the MAC address derived in the manner previously discussed.
When this Ethernet frame with a multicast MAC address is received by
a switch configured to forward multicast traffic, the default
behaviour is to flood it to all the ports in the layer 2 segment.
Clearly there may not be a receiver for this multicast group present
on each port and IGMP snooping is used to avoid sending the frame out
of ports without receivers.
A switch running IGMP snooping listens to the IGMP messages exchanged
between hosts and the router in order to identify which ports have
active receivers for a specific multicast group, allowing the
forwarding of multicast frames to be suitably constrained.  Normally,
the multicast router will generate IGMP queries to which the hosts
send IGMP reports in response.  However, a number of optimizations in
which a switch generates IGMP queries (and so appears to be the
router from the hosts' perspective) and/or generates IGMP reports
(and so appears to be a host from the router's perspective) are
commonly used to improve performance by reducing the amount of state
maintained at the router, suppressing superfluous IGMP messages and
improving responsiveness when hosts join/leave the group.
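As a toy sketch (not a faithful model of any real switch
implementation), the per-group membership state an IGMP snooping
switch maintains, and how it constrains forwarding, might look like:

```python
from collections import defaultdict


class IgmpSnoopingTable:
    """Minimal illustrative model of IGMP snooping state: which ports
    have receivers for each group, plus ports leading to routers."""

    def __init__(self):
        self.members = defaultdict(set)   # group -> set of member ports
        self.router_ports = set()         # ports where queries were seen

    def on_query(self, port):
        """An IGMP query arrived: a multicast router lies behind this port."""
        self.router_ports.add(port)

    def on_report(self, group, port):
        """An IGMP report arrived: a receiver for `group` is on this port."""
        self.members[group].add(port)

    def on_leave(self, group, port):
        """A leave arrived (membership confirmed gone after group-specific
        queries, elided here): stop forwarding the group to this port."""
        self.members[group].discard(port)

    def egress_ports(self, group, ingress_port):
        """Forward only to ports with receivers (plus router ports),
        never back out of the ingress port -- instead of flooding."""
        out = self.members.get(group, set()) | self.router_ports
        return out - {ingress_port}
```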
Multicast Listener Discovery (MLD) [RFC 2710] [RFC 3810] is used by
IPv6 routers for discovering multicast listeners on a directly
attached link, performing a similar function to IGMP in IPv4
networks.  MLDv1 [RFC 2710] is similar to IGMPv2 and MLDv2 [RFC 3810]
[RFC 4604] is similar to IGMPv3.  However, in contrast to IGMP, MLD
does not send its own distinct protocol messages.  Rather, MLD is a
subprotocol of ICMPv6 [RFC 4443] and so MLD messages are a subset of
ICMPv6 messages.  MLD snooping works similarly to IGMP snooping,
described earlier.
skipping to change at page 10, line 13
volume of such traffic.
It was previously mentioned in Section 2 that the three main causes
of one-to-many traffic in data centers are applications, overlays and
protocols.  While, relatively speaking, little can be done about the
volume of one-to-many traffic generated by applications, there is
more scope for attempting to reduce the volume of such traffic
generated by overlays and protocols.  (And often by protocols within
overlays.)  This reduction is possible by exploiting certain
characteristics of data center networks: fixed and regular topology,
single administrative control, consistent hardware and software,
well-known overlay encapsulation endpoints and so on.
A way of minimizing the amount of one-to-many traffic that traverses
the data center fabric is to use a centralized controller.  For
example, whenever a new VM is instantiated, the hypervisor or
encapsulation endpoint can notify a centralized controller of this
new MAC address, the associated virtual network, IP address etc.  The
controller could subsequently distribute this information to every
encapsulation endpoint.  Consequently, when any endpoint receives an
ARP request from a locally attached VM, it could simply consult its
local copy of the information distributed by the controller and
skipping to change at page 12, line 41
increases, conventional IP multicast is likely to become increasingly
unattractive for deployment in data centers for a number of reasons,
mostly pertaining to its inherent relatively poor scalability and
inability to exploit characteristics of data center network
architectures.  Hence, even though IGMP/MLD is likely to remain the
most popular manner in which end hosts signal interest in joining a
multicast group, it is unlikely that this multicast traffic will be
transported over the data center IP fabric using a multicast
distribution tree built by PIM.  Rather, approaches which exploit
characteristics of data center network architectures (e.g. fixed and
regular topology, single administrative control, consistent hardware
and software, well-known overlay encapsulation endpoints etc.) are
better placed to deliver one-to-many traffic in data centers,
especially when judiciously combined with a centralized controller
and/or a distributed control plane (particularly one based on
BGP-EVPN).
6.  IANA Considerations

This memo includes no request to IANA.

7.  Security Considerations
skipping to change at page 13, line 31
[I-D.ietf-bier-use-cases]
Kumar, N., Asati, R., Chen, M., Xu, X., Dolganow, A.,
Przygienda, T., Gulko, A., Robinson, D., Arya, V., and C.
Bestler, "BIER Use Cases", draft-ietf-bier-use-cases-06
(work in progress), January 2018.
[I-D.ietf-nvo3-geneve]
Gross, J., Ganga, I., and T. Sridhar, "Geneve: Generic
Network Virtualization Encapsulation",
draft-ietf-nvo3-geneve-11 (work in progress), March 2019.
[I-D.ietf-nvo3-vxlan-gpe]
Maino, F., Kreeger, L., and U. Elzur, "Generic Protocol
Extension for VXLAN", draft-ietf-nvo3-vxlan-gpe-06 (work
in progress), April 2018.
[I-D.ietf-spring-segment-routing]
Filsfils, C., Previdi, S., Ginsberg, L., Decraene, B.,
Litkowski, S., and R. Shakir, "Segment Routing
Architecture", draft-ietf-spring-segment-routing-15 (work
 End of changes. 15 change blocks. 
66 lines changed or deleted 56 lines changed or added

This html diff was produced by rfcdiff 1.47. The latest version is available from http://tools.ietf.org/tools/rfcdiff/