MBONED                                                        M. McBride
Internet-Draft                                                    Huawei
Intended status: Informational                              O. Komolafe
Expires: December 31, 2018                               Arista Networks
                                                           June 29, 2018

                 Multicast in the Data Center Overview
                     draft-ietf-mboned-dc-deploy-03

Abstract

   The volume and importance of one-to-many traffic patterns in data
   centers is likely to increase significantly in the future.  Reasons
   for this increase are discussed and then attention is paid to the
   manner in which this traffic pattern may be judiciously handled in
   data centers.  The intuitive solution of deploying conventional IP
   multicast within data centers is explored and evaluated.
   Thereafter, a number of emerging innovative approaches are described
   before a number of recommendations are made.

Status of This Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at https://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six months
   and may be updated, replaced, or obsoleted by other documents at any
   time.  It is inappropriate to use Internet-Drafts as reference
   material or to cite them other than as "work in progress."

   This Internet-Draft will expire on December 31, 2018.

Copyright Notice

   Copyright (c) 2018 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (https://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with respect
   to this document.  Code Components extracted from this document must
   include Simplified BSD License text as described in Section 4.e of
   the Trust Legal Provisions and are provided without warranty as
   described in the Simplified BSD License.

Table of Contents

   1.  Introduction
     1.1.  Requirements Language
   2.  Reasons for increasing one-to-many traffic patterns
     2.1.  Applications
     2.2.  Overlays
     2.3.  Protocols
   3.  Handling one-to-many traffic using conventional multicast
     3.1.  Layer 3 multicast
     3.2.  Layer 2 multicast
     3.3.  Example use cases
     3.4.  Advantages and disadvantages
   4.  Alternative options for handling one-to-many traffic
     4.1.  Minimizing traffic volumes
     4.2.  Head end replication
     4.3.  BIER
     4.4.  Segment Routing
   5.  Conclusions
   6.  IANA Considerations
   7.  Security Considerations
   8.  Acknowledgements
   9.  References
     9.1.  Normative References
     9.2.  Informative References
   Authors' Addresses

1.  Introduction

   The volume and importance of one-to-many traffic patterns in data
   centers is likely to increase significantly in the future.  Reasons
   for this increase include the nature of the traffic generated by
   applications hosted in the data center, the need to handle
   broadcast, unknown unicast and multicast (BUM) traffic within the
   overlay technologies used to support multi-tenancy at scale, and the
   use of certain protocols that traditionally require one-to-many
   control message exchanges.  These trends, allied with the
   expectation that future highly virtualized data centers must support
   communication between potentially thousands of participants, may
   lead to the natural assumption that IP multicast will be widely used
   in data centers, specifically given the bandwidth savings it
   potentially offers.  However, such an assumption would be wrong.  In
   fact, there is widespread reluctance to enable IP multicast in data
   centers for a number of reasons, mostly pertaining to concerns about
   its scalability and reliability.

   This draft discusses the main drivers for the increasing volume and
   importance of one-to-many traffic patterns in data centers.
   Thereafter, the manner in which conventional IP multicast may be
   used to handle this traffic pattern is discussed and some of the
   associated challenges highlighted.  Following this discussion, a
   number of alternative emerging approaches are introduced, before
   concluding by discussing key trends and making a number of
   recommendations.

1.1.  Requirements Language

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in RFC 2119.

2.  Reasons for increasing one-to-many traffic patterns

2.1.  Applications

   Key trends suggest that the nature of the applications likely to
   dominate future highly-virtualized multi-tenant data centers will
   produce large volumes of one-to-many traffic.  For example, it is
   well-known that traffic flows in data centers have evolved from
   being predominantly North-South (e.g. client-server) to
   predominantly East-West (e.g. distributed computation).  This change
   has led to the consensus that topologies such as the Leaf/Spine,
   that are easier to scale in the East-West direction, are better
   suited to the data center of the future.  This increase in East-West
   traffic flows results from VMs often having to exchange numerous
   messages between themselves as part of executing a specific
   workload.  For example, a computational workload could require data,
   or an executable, to be disseminated to workers distributed
   throughout the data center, which may be subsequently polled for
   status updates.  The emergence of such applications means there is
   likely to be an increase in one-to-many traffic flows with the
   increasing dominance of East-West traffic.

   The TV broadcast industry is another potential future source of
   applications with one-to-many traffic patterns in data centers.  The
   requirement for robustness, stability and predictability has meant
   the TV broadcast industry has traditionally used TV-specific
   protocols, infrastructure and technologies for transmitting video
   signals between cameras, studios, mixers, encoders, servers etc.
   However, the growing cost and complexity of supporting this
   approach, especially as the bit rates of the video signals increase
   due to the demand for formats such as 4K-UHD and 8K-UHD, means there
   is a consensus that the TV broadcast industry will transition from
   industry-specific transmission formats (e.g.  SDI, HD-SDI) over TV-
   specific infrastructure to using IP-based infrastructure.  The
   development of pertinent standards by the SMPTE, along with the
   increasing performance of IP routers, means this transition is
   gathering pace.  A possible outcome of this transition will be the
   building of IP data centers in broadcast plants.  Traffic flows in
   the broadcast industry are frequently one-to-many and so if IP data
   centers are deployed in broadcast plants, it is imperative that this
   traffic pattern is supported efficiently in that infrastructure.  In
   fact, a pivotal consideration for broadcasters considering
   transitioning to IP is the manner in which these one-to-many traffic
   flows will be managed and monitored in a data center with an IP
   fabric.

   Arguably one of the few success stories in using conventional IP
   multicast has been for disseminating market trading data.  For
   example, IP multicast is commonly used today to deliver stock quotes
   from the stock exchange to a financial services provider and then to
   the stock analysts or brokerages.  The network must be designed with
   no single point of failure and in such a way that the network can
   respond in a deterministic manner to any failure.  Typically,
   redundant servers (in a primary/backup or live-live mode) send
   multicast streams into the network, with diverse paths being used
   across the network.  Another critical requirement is reliability and
   traceability; regulatory and legal requirements mean that the
   producer of the market data must know exactly where the flow was
   sent and be able to prove conclusively that the data was received
   within agreed SLAs.  The stock exchange generating the one-to-many
   traffic and the stock analysts or brokerages that receive the
   traffic will typically have their own data centers.  Therefore, the
   manner in which one-to-many traffic patterns are handled in these
   data centers is extremely important, especially given the
   requirements and constraints mentioned.

   Many data center cloud providers provide publish and subscribe
   applications.  There can be numerous publishers and subscribers and
   many message channels within a data center.  With publish and
   subscribe servers, a separate message is sent to each subscriber of
   a publication.  With multicast publish/subscribe, only one message
   is sent, regardless of the number of subscribers.  In a publish/
   subscribe system, client applications, some of which are publishers
   and some of which are subscribers, are connected to a network of
   message brokers that receive publications on a number of topics, and
   send the publications on to the subscribers for those topics.  The
   more subscribers there are in the publish/subscribe system, the
   greater the improvement to network utilization there might be with
   multicast.
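
   As a simple illustration of the bandwidth argument above, the short
   Python sketch below (illustrative only; the subscriber count and
   message size are arbitrary assumptions) compares the number of
   messages a broker must transmit when fanning out one publication
   using unicast versus multicast.

   # Hypothetical numbers purely for illustration.
   SUBSCRIBERS = 500          # subscribers to one topic
   MSG_SIZE = 1500            # bytes per published message

   # Unicast publish/subscribe: one copy per subscriber.
   unicast_msgs = SUBSCRIBERS
   unicast_bytes = SUBSCRIBERS * MSG_SIZE

   # Multicast publish/subscribe: one copy regardless of subscribers.
   multicast_msgs = 1
   multicast_bytes = MSG_SIZE

   print(f"unicast:   {unicast_msgs} messages, {unicast_bytes} bytes")
   print(f"multicast: {multicast_msgs} message, {multicast_bytes} bytes")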

2.2.  Overlays

   The proposed architecture for supporting large-scale multi-tenancy in
   highly virtualized data centers [RFC8014] consists of a tenant's VMs
   distributed across the data center connected by a virtual network
   known as the overlay network.  A number of different technologies
   have been proposed for realizing the overlay network, including VXLAN
   [RFC7348], VXLAN-GPE [I-D.ietf-nvo3-vxlan-gpe], NVGRE [RFC7637] and
   GENEVE [I-D.ietf-nvo3-geneve].  The often fervent and arguably
   partisan debate about the relative merits of these overlay
   technologies belies the fact that, conceptually, these overlays
   simply provide a means to encapsulate and tunnel Ethernet frames
   from the VMs over the data center IP fabric, thus emulating a layer
   2 segment between the VMs.  Consequently, the
   VMs believe and behave as if they are connected to the tenant's other
   VMs by a conventional layer 2 segment, regardless of their physical
   location within the data center.  Naturally, in a layer 2 segment,
   point to multi-point traffic can result from handling BUM (broadcast,
   unknown unicast and multicast) traffic.  And, compounding this issue
   within data centers, since the tenant's VMs attached to the emulated
   segment may be dispersed throughout the data center, the BUM traffic
   may need to traverse the data center fabric.  Hence, regardless of
   the overlay technology used, due consideration must be given to
   handling BUM traffic, forcing the data center operator to consider
   the manner in which one-to-many communication is handled within the
   IP fabric.

2.3.  Protocols

   Conventionally, some key networking protocols used in data centers
   require one-to-many communication.  For example, ARP and ND use
   broadcast and multicast messages within IPv4 and IPv6 networks
   respectively to discover MAC address to IP address mappings.
   Furthermore, when these protocols are running within an overlay
   network, it is essential to ensure the messages are delivered to all
   the hosts on the emulated layer 2 segment, regardless of physical
   location within the data center.  The challenges associated with
   optimally delivering ARP and ND messages in data centers have
   attracted lots of attention [RFC6820].  Popular approaches in use
   mostly seek to exploit characteristics of data center networks to
   avoid having to broadcast/multicast these messages, as discussed in
   Section 4.1.

3.  Handling one-to-many traffic using conventional multicast

3.1.  Layer 3 multicast

   PIM is the most widely deployed multicast routing protocol and so,
   unsurprisingly, is the primary multicast routing protocol considered
   for use in the data center.  There are three potential popular
   flavours of PIM that may be used: PIM-SM [RFC4601], PIM-SSM
   [RFC4607] or PIM-BIDIR [RFC5015].  It may be said that these
   different modes of PIM tradeoff the optimality of the multicast
   forwarding tree for the amount of multicast forwarding state that
   must be maintained at routers.  SSM provides the most efficient
   forwarding between sources and receivers and thus is most suitable
   for applications with one-to-many traffic patterns.  State is built
   and maintained for each (S,G) flow.  Thus, the amount of multicast
   forwarding state held by routers in the data center is proportional
   to the number of sources and groups.  At the other end of the
   spectrum, BIDIR is the most efficient shared tree solution as one
   tree is built for all (S,G)s, therefore minimizing the amount of
   state.  This state reduction is at the expense of an optimal
   forwarding path between sources and receivers.  This use of a shared
   tree makes BIDIR particularly well-suited for applications with
   many-to-many traffic patterns, given that the amount of state is
   uncorrelated to the number of sources.  SSM and BIDIR are
   optimizations of PIM-SM.  PIM-SM is still the most widely deployed
   multicast routing protocol.  PIM-SM can also be the most complex.
   PIM-SM relies upon a RP (Rendezvous Point) to set up the multicast
   tree and subsequently there is the option of switching to the SPT
   (shortest path tree), similar to SSM, or staying on the shared tree,
   similar to BIDIR.
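
   To make the state tradeoff concrete, the short Python sketch below
   (an illustration only; the counts of sources and groups are
   arbitrary assumptions) compares the approximate number of forwarding
   entries a router would hold for SSM-style (S,G) state against
   BIDIR-style (*,G) state.

   # Hypothetical deployment, purely for illustration.
   GROUPS = 200                 # active multicast groups
   SOURCES_PER_GROUP = 50       # senders per group

   # PIM-SSM: one (S,G) entry per source and group pair.
   ssm_entries = GROUPS * SOURCES_PER_GROUP

   # PIM-BIDIR: one (*,G) entry per group, independent of sources.
   bidir_entries = GROUPS

   print(f"SSM   (S,G) entries: {ssm_entries}")    # 10000
   print(f"BIDIR (*,G) entries: {bidir_entries}")  # 200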

3.2.  Layer 2 multicast

   With IPv4 unicast address resolution, the translation of an IP
   address to a MAC address is done dynamically by ARP.  With multicast
   address resolution, the mapping from a multicast IPv4 address to a
   multicast MAC address is done by assigning the low-order 23 bits of
   the multicast IPv4 address to fill the low-order 23 bits of the
   multicast MAC address.  Each IPv4 multicast address has 28 unique
   bits (the multicast address range is 224.0.0.0/4) therefore mapping
   a multicast IP address to a MAC address ignores 5 bits of the IP
   address.  Hence, groups of 32 multicast IP addresses are mapped to
   the same MAC address, meaning a multicast MAC address cannot be
   uniquely mapped to a multicast IPv4 address.  Therefore, planning is
   required within an organization to choose IPv4 multicast addresses
   judiciously in order to avoid address aliasing.  When sending IPv6
   multicast packets on an Ethernet link, the corresponding destination
   MAC address is a direct mapping of the last 32 bits of the 128 bit
   IPv6 multicast address into the 48 bit MAC address.  It is possible
   for more than one IPv6 multicast address to map to the same 48 bit
   MAC address.
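
   As an illustration, the following Python sketch (illustrative only;
   the 01:00:5e and 33:33 prefixes are the standard IPv4 and IPv6
   multicast MAC prefixes) computes the mappings described above and
   shows the resulting address aliasing.

   import ipaddress

   def ipv4_multicast_mac(addr: str) -> str:
       ip = int(ipaddress.IPv4Address(addr))
       low23 = ip & 0x7FFFFF            # low-order 23 bits survive
       mac = 0x01005E000000 | low23     # 01:00:5e prefix
       return ":".join(f"{(mac >> s) & 0xFF:02x}"
                       for s in range(40, -8, -8))

   def ipv6_multicast_mac(addr: str) -> str:
       ip = int(ipaddress.IPv6Address(addr))
       low32 = ip & 0xFFFFFFFF          # last 32 bits of the group
       mac = 0x333300000000 | low32     # 33:33 prefix for IPv6
       return ":".join(f"{(mac >> s) & 0xFF:02x}"
                       for s in range(40, -8, -8))

   # 224.1.1.1 and 239.129.1.1 alias to the same 01:00:5e:01:01:01 MAC.
   print(ipv4_multicast_mac("224.1.1.1"), ipv4_multicast_mac("239.129.1.1"))
   print(ipv6_multicast_mac("ff02::1:ff00:1"))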

   The default behaviour of many hosts (and, in fact, routers) is to
   block multicast traffic.  Consequently, when a host wishes to join
   an IPv4 multicast group, it sends an IGMP [RFC2236], [RFC3376]
   report to the router attached to the layer 2 segment and also
   instructs its data link layer to receive Ethernet frames that match
   the corresponding MAC address.  The data link layer filters the
   frames, passing those with matching destination addresses to the IP
   module.  Similarly, hosts simply hand the multicast packet for
   transmission to the data link layer which would add the layer 2
   encapsulation, using the MAC address derived in the manner
   previously discussed.
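
   The host-side behaviour can be sketched as follows (a hedged
   illustration, not from the draft; the group address and port are
   arbitrary assumptions).  The setsockopt() call causes the host's IP
   stack to send an IGMP membership report and to program the NIC to
   accept the corresponding multicast MAC address.

   import socket
   import struct

   GROUP = "239.1.1.1"   # example group address, chosen arbitrarily
   PORT = 5000

   sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
   sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
   sock.bind(("", PORT))

   # IP_ADD_MEMBERSHIP takes the group address and the local interface
   # (0.0.0.0 lets the stack pick one).
   mreq = struct.pack("4s4s", socket.inet_aton(GROUP),
                      socket.inet_aton("0.0.0.0"))
   sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

   data, sender = sock.recvfrom(1500)   # blocks until a datagram arrives
   print(f"received {len(data)} bytes from {sender}")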

   When this Ethernet frame with a multicast MAC address is received by
   a switch configured to forward multicast traffic, the default
   behaviour is to flood it to all the ports in the layer 2 segment.
   Clearly there may not be a receiver for this multicast group present
   on each port and IGMP snooping is used to avoid sending the frame
   out of ports without receivers.

   IGMP snooping, with proxy reporting or report suppression, actively
   filters IGMP packets in order to reduce load on the multicast router
   by ensuring only the minimal quantity of information is sent.  The
   switch ensures the router has only a single entry for the group,
   regardless of the number of active listeners.  If there are two
   active listeners in a group and the first one leaves, then the
   switch determines that the router does not need this information
   since it does not affect the status of the group from the router's
   point of view.  However, the next time there is a routine query from
   the router the switch will forward the reply from the remaining host,
   to prevent the router from believing there are no active listeners.
   It follows that in active IGMP snooping, the router will generally
   only know about the most recently joined member of the group.
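
   The behaviour described above can be approximated with a minimal
   model of a snooping switch (a simplified sketch, not a complete IGMP
   implementation; the port identifiers and router port are assumptions
   for illustration).

   from collections import defaultdict

   class IgmpSnoopingSwitch:
       """Minimal model: tracks which ports have listeners per group."""

       def __init__(self, ports, router_port):
           self.ports = set(ports)
           self.router_port = router_port
           self.members = defaultdict(set)   # group -> set of ports

       def on_report(self, group, port):
           # An IGMP membership report was snooped on this port.
           self.members[group].add(port)

       def on_leave(self, group, port):
           # A leave was snooped; forget the port for this group.
           self.members[group].discard(port)

       def forward_ports(self, group):
           # Data frames for the group go to member ports and to the
           # multicast router port, never flooded to every port.
           return self.members[group] | {self.router_port}

   sw = IgmpSnoopingSwitch(ports=range(1, 9), router_port=8)
   sw.on_report("239.1.1.1", 3)
   sw.on_report("239.1.1.1", 5)
   print(sw.forward_ports("239.1.1.1"))   # {3, 5, 8}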

   In order for IGMP, and thus IGMP snooping, to function, a multicast
   router must exist on the network and generate IGMP queries.  The
   tables (holding the member ports for each multicast group) created
   for snooping are associated with the querier.  Without a querier the
   tables are not created and snooping will not work.  Furthermore, IGMP
   general queries must be unconditionally forwarded by all switches
   involved in IGMP snooping.  Some IGMP snooping implementations
   include full querier capability.  Others are able to proxy and
   retransmit queries from the multicast router.

   Multicast Listener Discovery (MLD) [RFC2710] [RFC3810] is used by
   IPv6 routers for discovering multicast listeners on a directly
   attached link, performing a similar function to IGMP in IPv4
   networks.  MLDv1 [RFC2710] is similar to IGMPv2 and MLDv2 [RFC3810]
   [RFC4604] similar to IGMPv3.  However, in contrast to IGMP, MLD does
   not send its own distinct protocol messages.  Rather, MLD is a
   subprotocol of ICMPv6 [RFC4443] and so MLD messages are a subset of
   ICMPv6 messages.  MLD snooping works similarly to IGMP snooping,
   described earlier.
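
   For completeness, joining an IPv6 group is analogous; the sketch
   below (illustrative only, with an arbitrarily chosen group address)
   relies on the host's IP stack to emit the corresponding MLD listener
   report.

   import socket
   import struct

   GROUP = "ff15::1234"        # example site-scoped IPv6 group
   PORT = 5000

   sock = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
   sock.bind(("::", PORT))

   # IPV6_JOIN_GROUP takes the packed group address plus the interface
   # index (0 lets the stack choose); the kernel then sends the MLD
   # listener report on our behalf.
   mreq = socket.inet_pton(socket.AF_INET6, GROUP) + struct.pack("I", 0)
   sock.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_JOIN_GROUP, mreq)

   data, sender = sock.recvfrom(1500)
   print(f"received {len(data)} bytes from {sender[0]}")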

3.3.  Example use cases

   A use case where PIM and IGMP are currently used in data centers is
   in VXLAN deployments.  In the original VXLAN specification
   [RFC7348], a data-driven flood and learn control plane was proposed,
   requiring the data center IP fabric to support multicast routing.  A
   multicast group is associated with each virtual network, each
   uniquely identified by its VXLAN network identifier (VNI).  VXLAN
   tunnel endpoints (VTEPs), typically located in the hypervisor or ToR
   switch, with local VMs that belong to this VNI would join the
   multicast group and use it for the exchange of BUM traffic with the
   other VTEPs.  Essentially, the VTEP would encapsulate any BUM
   traffic from attached VMs in an IP multicast packet, whose
   destination address is the associated multicast group address, and
   transmit the packet to the data center fabric.  Thus, PIM must be
   running in the fabric to maintain a multicast distribution tree per
   VNI.
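
   The flood-and-learn behaviour can be sketched as follows (a
   simplified illustration, not a production VTEP: the VNI, group
   address and inner frame are assumptions, while the 8-byte VXLAN
   header layout follows [RFC7348]).

   import socket
   import struct

   VNI = 10010                   # example virtual network identifier
   GROUP = "239.1.1.10"          # multicast group assigned to the VNI
   VXLAN_PORT = 4789             # IANA-assigned VXLAN UDP port

   def vxlan_encap(vni, inner_frame):
       # 8-byte VXLAN header: flags (I bit set), reserved, VNI, reserved.
       header = struct.pack("!BBHI", 0x08, 0, 0, vni << 8)
       return header + inner_frame

   # A dummy inner Ethernet frame (e.g. an ARP broadcast from a local VM).
   inner = bytes(64)

   sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
   sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 16)

   # BUM traffic is sent once to the group; the PIM-built tree in the
   # fabric replicates it towards all VTEPs that joined for this VNI.
   sock.sendto(vxlan_encap(VNI, inner), (GROUP, VXLAN_PORT))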

   Alternatively, rather than setting up a multicast distribution tree
   per VNI, a tree can be set up whenever hosts within the VNI wish to
   exchange multicast traffic.  For example, whenever a VTEP receives
   an IGMP report from a locally connected host, it would translate
   this into a PIM join message which will be propagated into the IP
   fabric.  In order to ensure this join message is sent to the IP
   fabric rather than over the VXLAN interface (since the VTEP will
   have a route back to the source of the multicast packet over the
   VXLAN interface and so would naturally attempt to send the join over
   this interface) a specific route back to the source over the IP
   fabric must be configured.  In this approach PIM must be configured
   on the SVIs associated with the VXLAN interface.

   Another use case of PIM and IGMP in data centers is when IPTV
   servers use multicast to deliver content from the data center to end
   users.  IPTV is typically a one to many application where the hosts
   are configured for IGMPv3, the switches are configured with IGMP
   snooping, and the routers are running PIM-SSM mode.  Often redundant
   servers send multicast streams into the network and the network
   forwards the data across diverse paths.

   Windows Media servers send multicast streams to clients.  Windows
   Media Services streams to an IP multicast address and all clients
   subscribe to the IP address to receive the same stream.  This allows
   a single stream to be played simultaneously by multiple clients and
   thus reduces bandwidth utilization.
   a single stream to be played simultaneously by multiple clients and
   thus reducing bandwidth utilization.

3.4.  Advantages and disadvantages

   Arguably the huge scaling numbers
   required biggest advantage of using PIM and IGMP to support one-
   to-many communication in a data center.

5.  Challenges of centers is that these protocols are
   relatively mature.  Consequently, PIM is available in most routers
   and IGMP is supported by most hosts and routers.  As such, no
   specialized hardware or relatively immature software is involved in
   using multicast them in data centers.  Furthermore, the Data Center

   Data Center environments may create unique challenges for IP
   Multicast.  Data Center maturity of these
   protocols means their behaviour and performance in operational
   networks required a high amount is well-understood, with widely available best-practices and
   deployment guides for optimizing their performance.

   However, somewhat ironically, the relative disadvantages of VM traffic PIM and mobility within
   IGMP usage in data centers also stem mostly from their maturity.
   Specifically, these protocols were standardized and between DC networks.  DC networks have large
   numbers implemented long
   before the highly-virtualized multi-tenant data centers of servers.  DC networks today
   existed.  Consequently, PIM and IGMP are often used neither optimally placed to
   deal with cloud
   orchestration software.  DC networks often use IP Multicast in their
   unique environments.  This section looks at the challenges requirements of using one-to-many communication in modern
   data centers nor to exploit characteristics and idiosyncrasies of
   data centers.  For example, there may be thousands of VMs
   participating in a multicast session, with some of these VMs
   migrating to servers within the challenging data center environment.

   When IGMP/MLD Snooping is not implemented, ethernet switches will
   flood multicast frames out of center, new VMs being
   continually spun up and wishing to join the sessions while all switch-ports, which turns the
   traffic into something more like
   time other VMs are leaving.  In such a broadcast.

   VRRP uses multicast heartbeat to communicate between routers.  The
   communication between scenario, the host churn in the PIM
   and IGMP state machines, the default gateway is unicast.
   The multicast heartbeat can be very chatty when there are thousands volume of VRRP pairs with sub-second heartbeat calls back control messages they would
   generate and forth.

   Link-local multicast should scale well within one IP subnet
   particularly with a large layer3 domain extending down to the access
   or aggregation switches.  But amount of state they would necessitate within
   routers, especially if multicast traverses beyond one IP
   subnet, which is necessary they were deployed naively, would be
   untenable.

4.  Alternative options for an overlay like VXLAN, you could
   potentially have scaling concerns.  If using a VXLAN overlay, it handling one-to-many traffic

   Section 2 has shown that there is
   necessary likely to map the L2 multicast be an increasing amount
   one-to-many communications in the overlay to L3 data centers.  And Section 3 has
   discussed how conventional multicast may be used to handle this
   traffic.  Having said that, there are a number of alternative options
   of handling this traffic pattern in data centers, as discussed in the underlay
   subsequent section.  It should be noted that many of these techniques
   are not mutually-exclusive; in fact many deployments involve a
   combination of more than one of these techniques.  Furthermore, as
   will be shown, introducing a centralized controller or do head end replication a distributed
   control plane, makes these techniques more potent.

4.1.  Minimizing traffic volumes

   If handling one-to-many traffic in data centers can be challenging
   then arguably the overlay and receive
   duplicate frames on the first link from the router to the core
   switch.  The most intuitive solution could be is to run potentially thousands of PIM
   messages aim to generate/maintain minimize the required multicast state
   volume of such traffic.

   It was previously mentioned in Section 2 that the IP
   underlay.  The behavior three main causes
   of one-to-many traffic in data centers are applications, overlays and
   protocols.  While, relatively speaking, little can be done about the upper layer, with respect
   volume of one-to-many traffic generated by applications, there is
   more scope for attempting to
   broadcast/multicast, affects reduce the choice volume of head end (*,G) or (S,G)
   replication in the underlay, which affects the opex such traffic
   generated by overlays and protocols.  (And often by protocols within
   overlays.)  This reduction is possible by exploiting certain
   characteristics of data center networks: fixed and capex regular topology,
   owned and exclusively controlled by single organization, well-known
   overlay encapsulation endpoints etc.

   A way of minimizing the
   entire solution.  A VXLAN, with thousands amount of logical groups, maps one-to-many traffic that traverses
   the data center fabric is to
   head end replication in use a centralized controller.  For
   example, whenever a new VM is instantiated, the hypervisor or
   encapsulation endpoint can notify a centralized controller of this
   new MAC address, the associated virtual network, IP address etc.  The
   controller could subsequently distribute this information to IGMP every
   encapsulation endpoint.  Consequently, when any endpoint receives an
   ARP request from a locally attached VM, it could simply consult its
   local copy of the hypervisor
   and then PIM between information distributed by the TOR and CORE 'switches' controller and
   reply.  Thus, the gateway
   router.

   Requiring IP multicast (especially PIM BIDIR) from ARP request is suppressed and does not result in
   one-to-many traffic traversing the network can
   prove challenging for data center operators especially at the kind of
   scale that IP fabric.

   Alternatively, the VXLAN/NVGRE proposals require.  This is also true when functionality supported by the L2 topological domain controller can
   realized by a distributed control plane.  BGP-EVPN [RFC7432, RFC8365]
   is large and extended all the way to the L3
   core.  In most popular control plane used in data centers centers.  Typically,
   the encapsulation endpoints will exchange pertinent information with highly virtualized servers, even small L2
   domains may spread across many server racks (i.e. multiple switches
   and router ports).

   It's not uncommon for there
   each other by all peering with a BGP route reflector (RR).  Thus,
   information about local MAC addresses, MAC to IP address mapping,
   virtual networks identifiers etc can be 10-20 disseminated.  Consequently,
   ARP requests from local VMs per server in a
   virtualized environment.  One vendor reported a customer requesting a
   scale to 400VM's per server.  For multicast to can be a viable solution suppressed by the encapsulation
   endpoint.
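
   The ARP suppression behaviour described above can be sketched as
   follows (a minimal illustration; the table contents and the notion
   of a pre-parsed ARP request are assumptions, and in practice the
   mappings would be learned via the controller or BGP-EVPN).

   # IP to MAC mappings pushed to this encapsulation endpoint by the
   # centralized controller (or learned via BGP-EVPN routes).
   arp_table = {
       "10.0.0.11": "52:54:00:aa:bb:01",
       "10.0.0.12": "52:54:00:aa:bb:02",
   }

   def handle_arp_request(target_ip):
       """Return a locally generated reply, or None to flood."""
       mac = arp_table.get(target_ip)
       if mac is not None:
           # Reply directly to the local VM; nothing enters the fabric.
           return {"op": "reply", "ip": target_ip, "mac": mac}
       # Unknown binding: fall back to flooding over the overlay.
       return None

   print(handle_arp_request("10.0.0.12"))
   print(handle_arp_request("10.0.0.99"))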

4.2.  Head end replication

   A popular option for handling one-to-many traffic patterns in data
   centers is head end replication (HER).  HER means the traffic is
   duplicated and sent to each end point individually using
   conventional unicast.  Obvious disadvantages of HER include traffic
   duplication and the additional processing burden on the head end.
   Nevertheless, HER is especially attractive when overlays are in use
   as the replication can be carried out by the hypervisor or
   encapsulation end point.  Consequently, the VMs and IP fabric are
   unmodified and unaware of how the traffic is delivered to the
   multiple end points.  Additionally, it is possible to use a number
   of approaches for constructing and disseminating the list of which
   endpoints should receive what traffic and so on.

   For example, the reluctance of data center operators to enable PIM
   and IGMP within the data center fabric means VXLAN is used with
   HER.  Thus, BUM traffic from each VNI is replicated and sent using
   unicast to remote VTEPs with VMs in that VNI.  The list of remote
   VTEPs to which the traffic should be sent may be configured manually
   on the VTEP.  Alternatively, the VTEPs may transmit appropriate
   state to a centralized controller which in turn sends each VTEP the
   list of remote VTEPs for each VNI.  Lastly, HER also works well when
   a distributed control plane is used instead of the centralized
   controller.  Again, BGP-EVPN may be used to distribute the
   information needed to facilitate HER to the VTEPs.
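
   A sketch of HER at a VTEP is shown below (illustrative only; the
   flood list, addresses and minimal VXLAN encapsulation helper are
   assumptions used to show the per-destination unicast replication).

   import socket
   import struct

   VXLAN_PORT = 4789

   # Flood list for VNI 10010: the remote VTEPs that host VMs in this
   # VNI.  Configured manually, pushed by a controller, or learned via
   # BGP-EVPN routes.
   flood_list = {10010: ["192.0.2.11", "192.0.2.12", "192.0.2.13"]}

   def vxlan_encap(vni, inner_frame):
       return struct.pack("!BBHI", 0x08, 0, 0, vni << 8) + inner_frame

   def send_bum_frame(vni, inner_frame):
       """Head end replication: one unicast copy per remote VTEP."""
       sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
       packet = vxlan_encap(vni, inner_frame)
       for vtep in flood_list.get(vni, []):
           sock.sendto(packet, (vtep, VXLAN_PORT))

   send_bum_frame(10010, bytes(64))   # dummy inner Ethernet frame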

4.3.  BIER

   As discussed in Section 3.4, PIM and IGMP face potential scalability
   challenges when deployed in data centers.  These challenges are
   typically due to the requirement to build and maintain a
   distribution tree and the requirement to hold per-flow state in
   routers.  Bit Index Explicit Replication (BIER) [RFC8279] is a new
   multicast forwarding paradigm that avoids these two requirements.

   When a multicast packet enters a BIER domain, the ingress router,
   known as the Bit-Forwarding Ingress Router (BFIR), adds a BIER
   header to the packet.  This header contains a bit string in which
   each bit maps to an egress router, known as a Bit-Forwarding Egress
   Router (BFER).  If a bit is set, then the packet should be forwarded
   to the associated BFER.  The routers within the BIER domain, Bit-
   Forwarding Routers (BFRs), use the BIER header in the packet and
   information in the Bit Index Forwarding Table (BIFT) to carry out
   simple bit-wise operations to determine how the packet should be
   replicated optimally so it reaches all the appropriate BFERs.

   BIER is deemed to be attractive for facilitating one-to-many
   communications in data centers [I-D.ietf-bier-use-cases].  The
   deployment envisioned with overlay networks is that the
   encapsulation endpoints would be the BFIRs.  So knowledge about the
   multicast groups does not reside in the data center fabric,
   improving the scalability compared to conventional IP multicast.
   Additionally, a centralized controller or a BGP-EVPN control plane
   may be used with BIER to ensure the BFIRs have the required
   information.  A challenge associated with using BIER is that, unlike
   most of the other approaches discussed in this draft, it requires
   changes to the forwarding behaviour of the routers used in the data
   center IP fabric.
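
   The bit-wise replication described above can be illustrated with the
   following sketch (a simplification of the [RFC8279] procedures; the
   BIFT contents, neighbour names and four-BFER domain are assumptions
   for illustration only).

   # Toy BIER domain with 4 BFERs; bit i of the bit string selects
   # BFER i.  The BIFT maps each bit position to the neighbour used to
   # reach that BFER, together with that neighbour's forwarding mask.
   BIFT = {
       0: ("nbr-A", 0b0011),   # BFERs 0 and 1 are reached via nbr-A
       1: ("nbr-A", 0b0011),
       2: ("nbr-B", 0b0100),
       3: ("nbr-C", 0b1000),
   }

   def bier_forward(bitstring):
       """Return {neighbour: bit string carried by the copy sent to it}."""
       copies = {}
       remaining = bitstring
       while remaining:
           bit = (remaining & -remaining).bit_length() - 1  # lowest set bit
           neighbour, mask = BIFT[bit]
           # The copy for this neighbour carries only the bits it serves.
           copies[neighbour] = copies.get(neighbour, 0) | (remaining & mask)
           remaining &= ~mask       # those BFERs are now taken care of
       return copies

   # Packet destined for BFERs 0, 2 and 3 (bit string 1101).
   print(bier_forward(0b1101))   # {'nbr-A': 1, 'nbr-B': 4, 'nbr-C': 8}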

4.4.  Segment Routing

   Segment Routing (SR) [I-D.ietf-spring-segment-routing] adopts the
   source routing paradigm in which the manner in which a packet
   traverses a network is determined by an ordered list of
   instructions.  These instructions are known as segments and may have
   a local semantic to an SR node or a global one within an SR domain.
   SR allows enforcing a flow through any topological path while
   maintaining per-flow state only at the ingress node to the SR
   domain.  Segment Routing can be applied to the MPLS and IPv6 data-
   planes.  In the former, the list of segments is represented by the
   label stack and in the latter it is represented as a routing
   extension header.  Use-cases are described in
   [I-D.ietf-spring-segment-routing] and are being considered in the
   context of BGP-based large-scale data-center (DC) design [RFC7938].

   Multicast in SR continues to be discussed in a variety of drafts and
   working groups.  The SPRING WG has not yet been chartered to work on
   multicast in SR.  Multicast can include locally allocating a Segment
   Identifier (SID) to existing replication solutions, such as PIM,
   mLDP, P2MP RSVP-TE and BIER.  It may also be that a new way to
   signal and install trees in SR is developed without creating state
   in the network.

5.  Conclusions

   As the volume and importance of one-to-many traffic in data centers
   increases, conventional IP multicast is likely to become
   increasingly unattractive for deployment in data centers for a
   number of reasons, mostly pertaining to its inherent relatively poor
   scalability and its inability to exploit characteristics of data
   center network architectures.  Hence, even though IGMP/MLD is likely
   to remain the most popular manner in which end hosts signal interest
   in joining a multicast group, it is unlikely that this multicast
   traffic will be transported over the data center IP fabric using a
   multicast distribution tree built by PIM.  Rather, approaches which
   exploit characteristics of data center network architectures
   (e.g. fixed and regular topology, owned and exclusively controlled
   by a single organization, well-known overlay encapsulation endpoints
   etc.) are better placed to deliver one-to-many traffic in data
   centers, especially when judiciously combined with a centralized
   controller and/or a distributed control plane (particularly one
   based on BGP-EVPN).

6.  IANA Considerations

   This memo includes no request to IANA.

7.  Security Considerations

   No new security considerations result from this document.

8.  Acknowledgements

   The authors would like to thank the many individuals who contributed
   opinions on the ARMD wg mailing list about this topic: Linda Dunbar,
   Anoop Ghanwani, Peter Ashwoodsmith, David Allan, Aldrin Isaac, Igor
   Gashinsky, Michael Smith, Patrick Frejborg, Joel Jaeggli and Thomas
   Narten.

9.  References

9.1.  Normative References

   [RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
              Requirement Levels", BCP 14, RFC 2119,
              DOI 10.17487/RFC2119, March 1997,
              <https://www.rfc-editor.org/info/rfc2119>.

9.2.  Informative References

   [I-D.ietf-bier-use-cases]
              Kumar, N., Asati, R., Chen, M., Xu, X., Dolganow, A.,
              Przygienda, T., Gulko, A., Robinson, D., Arya, V., and C.
              Bestler, "BIER Use Cases", draft-ietf-bier-use-cases-06
              (work in progress), January 2018.

   [I-D.ietf-nvo3-geneve]
              Gross, J., Ganga, I., and T. Sridhar, "Geneve: Generic
              Network Virtualization Encapsulation", draft-ietf-
              nvo3-geneve-06 (work in progress), March 2018.

   [I-D.ietf-nvo3-vxlan-gpe]
              Maino, F., Kreeger, L., and U. Elzur, "Generic Protocol
              Extension for VXLAN", draft-ietf-nvo3-vxlan-gpe-06 (work
              in progress), April 2018.

   [I-D.ietf-spring-segment-routing]
              Filsfils, C., Previdi, S., Ginsberg, L., Decraene, B.,
              Litkowski, S., and R. Shakir, "Segment Routing
              Architecture", draft-ietf-spring-segment-routing-15 (work
              in progress), January 2018.

   [RFC2236]  Fenner, W., "Internet Group Management Protocol, Version
              2", RFC 2236, DOI 10.17487/RFC2236, November 1997,
              <https://www.rfc-editor.org/info/rfc2236>.

   [RFC2710]  Deering, S., Fenner, W., and B. Haberman, "Multicast
              Listener Discovery (MLD) for IPv6", RFC 2710,
              DOI 10.17487/RFC2710, October 1999,
              <https://www.rfc-editor.org/info/rfc2710>.

   [RFC3376]  Cain, B., Deering, S., Kouvelas, I., Fenner, B., and A.
              Thyagarajan, "Internet Group Management Protocol, Version
              3", RFC 3376, DOI 10.17487/RFC3376, October 2002,
              <https://www.rfc-editor.org/info/rfc3376>.

   [RFC4601]  Fenner, B., Handley, M., Holbrook, H., and I. Kouvelas,
              "Protocol Independent Multicast - Sparse Mode (PIM-SM):
              Protocol Specification (Revised)", RFC 4601,
              DOI 10.17487/RFC4601, August 2006,
              <https://www.rfc-editor.org/info/rfc4601>.

   [RFC4607]  Holbrook, H. and B. Cain, "Source-Specific Multicast for
              IP", RFC 4607, DOI 10.17487/RFC4607, August 2006,
              <https://www.rfc-editor.org/info/rfc4607>.

   [RFC5015]  Handley, M., Kouvelas, I., Speakman, T., and L. Vicisano,
              "Bidirectional Protocol Independent Multicast (BIDIR-
              PIM)", RFC 5015, DOI 10.17487/RFC5015, October 2007,
              <https://www.rfc-editor.org/info/rfc5015>.

   [RFC6820]  Narten, T., Karir, M., and I. Foo, "Address Resolution
              Problems in Large Data Center Networks", RFC 6820,
              DOI 10.17487/RFC6820, January 2013,
              <https://www.rfc-editor.org/info/rfc6820>.

   [RFC7348]  Mahalingam, M., Dutt, D., Duda, K., Agarwal, P., Kreeger,
              L., Sridhar, T., Bursell, M., and C. Wright, "Virtual
              eXtensible Local Area Network (VXLAN): A Framework for
              Overlaying Virtualized Layer 2 Networks over Layer 3
              Networks", RFC 7348, DOI 10.17487/RFC7348, August 2014,
              <https://www.rfc-editor.org/info/rfc7348>.

   [RFC7432]  Sajassi, A., Ed., Aggarwal, R., Bitar, N., Isaac, A.,
              Uttaro, J., Drake, J., and W. Henderickx, "BGP MPLS-Based
              Ethernet VPN", RFC 7432, DOI 10.17487/RFC7432, February
              2015, <https://www.rfc-editor.org/info/rfc7432>.

   [RFC7637]  Garg, P., Ed. and Y. Wang, Ed., "NVGRE: Network
              Virtualization Using Generic Routing Encapsulation",
              RFC 7637, DOI 10.17487/RFC7637, September 2015,
              <https://www.rfc-editor.org/info/rfc7637>.

   [RFC7938]  Lapukhov, P., Premji, A., and J. Mitchell, Ed., "Use of
              BGP for Routing in Large-Scale Data Centers", RFC 7938,
              DOI 10.17487/RFC7938, August 2016,
              <https://www.rfc-editor.org/info/rfc7938>.

   [RFC8014]  Black, D., Hudson, J., Kreeger, L., Lasserre, M., and T.
              Narten, "An Architecture for Data-Center Network
              Virtualization over Layer 3 (NVO3)", RFC 8014,
              DOI 10.17487/RFC8014, December 2016,
              <https://www.rfc-editor.org/info/rfc8014>.

   [RFC8279]  Wijnands, IJ., Ed., Rosen, E., Ed., Dolganow, A.,
              Przygienda, T., and S. Aldrin, "Multicast Using Bit Index
              Explicit Replication (BIER)", RFC 8279,
              DOI 10.17487/RFC8279, November 2017,
              <https://www.rfc-editor.org/info/rfc8279>.

   [RFC8365]  Sajassi, A., Ed., Drake, J., Ed., Bitar, N., Shekhar, R.,
              Uttaro, J., and W. Henderickx, "A Network Virtualization
              Overlay Solution Using Ethernet VPN (EVPN)", RFC 8365,
              DOI 10.17487/RFC8365, March 2018,
              <https://www.rfc-editor.org/info/rfc8365>.

Authors' Addresses

   Mike McBride
   Huawei

   Email: michael.mcbride@huawei.com

   Olufemi Komolafe
   Arista Networks

   Email: femi@arista.com