INTERNET-DRAFT                                                 T. Maufer
                                                              C. Semeria
Category: Informational                                 3Com Corporation
                                                              March 1997

               Introduction to IP Multicast Routing

            <draft-ietf-mboned-intro-multicast-01.txt>

Status of this Memo

This document is an Internet Draft.  Internet Drafts are working
documents of the Internet Engineering Task Force (IETF), its Areas, and
its Working Groups.  Note that other groups may also distribute working
documents as Internet Drafts.

Internet Drafts are draft documents valid for a maximum of six months.
Internet Drafts may be updated, replaced, or obsoleted by other
documents at any time.  It is not appropriate to use Internet Drafts as
reference material or to cite them other than as a "working draft" or
"work in progress."

To learn the current status of any Internet-Draft, please check the
"1id-abstracts.txt" listing contained in the internet-drafts Shadow
Directories on:

    ftp.is.co.za         (Africa)
    nic.nordu.net        (Europe)
    ds.internic.net      (US East Coast)
    ftp.isi.edu          (US West Coast)
    munnari.oz.au        (Pacific Rim)

FOREWORD

This document is introductory in nature.  We have not attempted to
describe every detail of each protocol; rather, we aim to give a concise
overview in all cases, with enough specifics to allow a reader to grasp
the essential details and operation of protocols related to multicast
IP.  Every effort has been made to ensure the accurate representation of
any cited works, especially any works-in-progress.  For the complete
details, we refer you to the relevant specification(s).

If internet-drafts are cited in this document, it is only because they
are the only sources of certain technical information at the time of
this writing.  We expect that many of the internet-drafts which we have
cited will eventually become RFCs.  See the shadow directories listed
above for the status of any of these drafts, their follow-on drafts, or
possibly the resulting RFCs.

ABSTRACT

The first part of this paper describes the benefits of multicasting,
the MBone, Class D addressing, and the operation of the Internet Group
Management Protocol (IGMP).  The second part explores a number of
different techniques that may potentially be employed by multicast
routing protocols:

    o  Flooding
    o  Spanning Trees
    o  Reverse Path Broadcasting (RPB)
    o  Truncated Reverse Path Broadcasting (TRPB)
    o  Reverse Path Multicasting (RPM)
    o  "Shared-Tree" Techniques

The third part contains the main body of the paper.  It describes how
the previous techniques are implemented in multicast routing protocols
available today (or under development).

    o  Distance Vector Multicast Routing Protocol (DVMRP)
    o  Multicast Extensions to OSPF (MOSPF)
    o  Protocol-Independent Multicast (PIM) - Dense Mode (PIM-DM)
    o  Protocol-Independent Multicast - Sparse Mode (PIM-SM)
    o  Core Based Trees (CBT)

                          Table of Contents
Section

1  . . . . . . . . . . . . . . . . . . . . . . . . . . . .  INTRODUCTION
1.1  . . . . . . . . . . . . . . . . . . . . . . . . .  Multicast Groups
1.2  . . . . . . . . . . . . . . . . . . . . . Group Membership Protocol
1.3  . . . . . . . . . . . . . . . . . . . . Multicast Routing Protocols
1.3.1  . . . . . . . . . . .  Multicast Routing vs. Multicast Forwarding
2  . . . . . . . .  MULTICAST SUPPORT FOR EMERGING INTERNET APPLICATIONS
2.1  . . . . . . . . . . . . . . . . . . . . . . . Reducing Network Load
2.2  . . . . . . . . . . . . . . . . . . . . . . . .  Resource Discovery
2.3  . . . . . . . . . . . . . . .  Support for Datacasting Applications
3  . . . . . . . . . . . . . . THE INTERNET'S MULTICAST BACKBONE (MBone)
4  . . . . . . . . . . . . . . . . . . . . . . . .  MULTICAST ADDRESSING
4.1  . . . . . . . . . . . . . . . . . . . . . . . .  Class D Addresses
4.2  . . . . . . .  Mapping a Class D Address to an IEEE-802 MAC Address
4.3  . . . . . . . . .  Transmission and Delivery of Multicast Datagrams
5  . . . . . . . . . . . . . . INTERNET GROUP MANAGEMENT PROTOCOL (IGMP)
5.1  . . . . . . . . . . . . . . . . . . . . . . . . . .  IGMP Version 1
5.2  . . . . . . . . . . . . . . . . . . . . . . . . . .  IGMP Version 2
5.3  . . . . . . . . . . . . . . . . . . . . . . . . . .  IGMP Version 3
6  . . . . . . . . . . . . . . . . . . . MULTICAST FORWARDING TECHNIQUES
6.1  . . . . . . . . . . . . . . . . . . . . . "Simpleminded" Techniques
6.1.1  . . . . . . . . . . . . . . . . . . . . . . . . . . . .  Flooding
6.1.2  . . . . . . . . . . . . . . . . . . . . . . . . . . Spanning Tree
6.2  . . . . . . . . . . . . . . . . . . .  Source-Based Tree Techniques

6.2.1  . . . . . . . . . . . . . . . . . Reverse Path Broadcasting (RPB)
6.2.1.1  . . . . . . . . . . . . .  Reverse Path Broadcasting: Operation
6.2.1.2  . . . . . . . . . . . . . . . . . RPB: Benefits and Limitations
6.2.2  . . . . . . . . . . .  Truncated Reverse Path Broadcasting (TRPB)
6.2.3  . . . . . . . . . . . . . . . . . Reverse Path Multicasting (RPM)
6.2.3.1  . . . . . . . . . . . . . . . . . . . . . . . . . . . Operation
6.2.3.2  . . . . . . . . . . . . . . . . . . . . . . . . . . Limitations
6.3  . . . . . . . . . . . . . . . . . . . . . .  Shared Tree Techniques
6.3.1  . . . . . . . . . . . . . . . . . . . . . . . . . . . . Operation
6.3.2  . . . . . . . . . . . . . . . . . . . . . . . . . . . .  Benefits
6.3.3  . . . . . . . . . . . . . . . . . . . . . . . . . . . Limitations
7  . . . . . . . .  SOURCE-BASED TREE ("DENSE MODE") ROUTING PROTOCOLS
7.1  . . . . . . . .  Distance Vector Multicast Routing Protocol (DVMRP)
7.1.1  . . . . . . . . . . . . . . . . .  Physical and Tunnel Interfaces
7.1.2  . . . . . . . . . . . . . . . . . . . . . . . . . Basic Operation
7.1.3  . . . . . . . . . . . . . . . . . . . . .  DVMRP Router Functions
7.1.4  . . . . . . . . . . . . . . . . . . . . . . . DVMRP Routing Table
7.1.5  . . . . . . . . . . . . . . . . . . . . .  DVMRP Forwarding Table
7.1.6  . . . . . . . . . . . . . . . . . Hierarchical DVMRP (DVMRP v4.0)
7.1.6.1  . . . . . . . . . .  Benefits of Hierarchical Multicast Routing
7.1.6.2  . . . . . . . . . . . . . . . . . . . Hierarchical Architecture
7.2  . . . . . . . . . . . . . . .  Multicast Extensions to OSPF (MOSPF)
7.2.1  . . . . . . . . . . . . . . . . . . Intra-Area Routing with MOSPF
7.2.1.1  . . . . . . . . . . . . . . . . . . . . .  Local Group Database
7.2.1.2  . . . . . . . . . . . . . . . . . Datagram's Shortest Path Tree
7.2.1.3  . . . . . . . . . . . . . . . . . . . . . . .  Forwarding Cache
7.2.2  . . . . . . . . . . . . . . . . . . Mixing MOSPF and OSPF Routers
7.2.3  . . . . . . . . . . . . . . . . . . Inter-Area Routing with MOSPF
7.2.3.1  . . . . . . . . . . . . . . . . Inter-Area Multicast Forwarders
7.2.3.2  . . . . . . . . . . .  Inter-Area Datagram's Shortest Path Tree
7.2.4  . . . . . . . . . Inter-Autonomous System Multicasting with MOSPF
7.3  . . . . . . . . . . . . . . .  Protocol-Independent Multicast (PIM)
7.3.1  . . . . . . . . . . . . . . . . . . . . PIM - Dense Mode (PIM-DM)
8  . . . . . . . . . .  SHARED TREE ("SPARSE MODE") ROUTING PROTOCOLS
8.1  . . . . . . . Protocol-Independent Multicast - Sparse Mode (PIM-SM)
8.1.1  . . . . . . . . . . . . . .  Directly Attached Host Joins a Group
8.1.2  . . . . . . . . . . . . Directly Attached Source Sends to a Group
8.1.3  . . . . . . .  Shared Tree (RP-Tree) or Shortest Path Tree (SPT)?
8.1.4  . . . . . . . . . . . . . . . . . . . . . . .   Unresolved Issues
8.2  . . . . . . . . . . . . . . . . . . . . . .  Core Based Trees (CBT)
8.2.1  . . . . . . . . . . . . . . . . . . Joining a Group's Shared Tree
8.2.2  . . . . . . . . . . . . . . . . . . .  Primary and Secondary Cores
8.2.3  . . . . . . . . . . . . . . . . . . . . . .  Data Packet Forwarding
8.2.4  . . . . . . . . . . . . . . . . . . . . . . . .  Non-Member Sending
8.2.5  . . . . . . . . . . . . . . . . . .  Emulating Shortest-Path Trees
8.2.6  . . . . . . . . . . . . . . . . . .  CBT Multicast Interoperability
9  . . . . . . .  INTEROPERABILITY FRAMEWORK FOR MULTICAST BORDER ROUTERS
9.1  . . . . . . . . . . . . .  Requirements for Multicast Border Routers
10  . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  REFERENCES
10.1  . . . . . . . . . . . . . . . . . . .  Requests for Comments (RFCs)
10.2  . . . . . . . . . . . . . . . . . . . . . . . . . .  Internet-Drafts
10.3  . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  Textbooks
10.4  . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  Other
11  . . . . . . . . . . . . . . . . . . . . . . .  SECURITY CONSIDERATIONS
12  . . . . . . . . . . . . . . . . . . . . . . . . . . . ACKNOWLEDGEMENTS
13  . . . . . . . . . . . . . . . . . . . . . . . . . .  AUTHORS' ADDRESSES

1. INTRODUCTION

There are three fundamental types of IPv4 addresses:  unicast,
broadcast, and multicast.  A unicast address is used to transmit a
packet to a single destination.  A broadcast address is used to send a
datagram to an entire subnetwork.  A multicast address is designed to
enable the delivery of datagrams to a set of hosts that have been
configured as members of a multicast group across various
subnetworks.

Multicasting is not connection-oriented.  A multicast datagram is
delivered to destination group members with the same "best-effort"
reliability as a standard unicast IP datagram.  This means that
multicast datagrams are not guaranteed to reach all members of a group,
nor to arrive in the same order in which they were transmitted.

The only difference between a multicast IP packet and a unicast IP
packet is the presence of a 'group address' in the Destination Address
field of the IP header.  Instead of a Class A, B, or C IP destination
address, multicasting employs a Class D address format, which ranges
from 224.0.0.0 to 239.255.255.255.

1.1 Multicast Groups

Individual hosts are free to join or leave a multicast group at any
time.  There are no restrictions on the physical location or the number
of members in a multicast group.  A host may be a member of more than
one multicast group at any given time and does not have to belong to a
group to send packets to members of a group.

1.2 Group Membership Protocol

A group membership protocol is employed by routers to learn about the
presence of group members on their directly attached subnetworks.  When
a host joins a multicast group, it transmits a group membership protocol
message for the group(s) that it wishes to receive, and sets its IP
process and network interface card to receive frames addressed to the
multicast group.  This receiver-initiated join process has excellent
scaling properties since, as the multicast group increases in size, it
becomes ever more likely that a new group member will be able to locate
a nearby branch of the multicast delivery tree.

========================================================================
                            _    _    _    _
                           |_|  |_|  |_|  |_|
                           '-'  '-'  '-'  '-'
                            |    |    |    |
                          <- - - - - - - - - ->
                                   |
                                   |
                                   v
                                Router
                                   ^
                                /     \
         _  ^                 +         +              ^  _
        |_|-|               /            \             |-|_|
        '_' |             +                +           | '_'
         _  |          v                     v         |  _
        |_|-|- - >|Router| <- + - + - + -> |Router|<- -|-|_|
        '_' |                                          | '_'
         _  |                                          |  _
        |_|-|                                          |-|_|
        '_' |                                          | '_'
            v                                          v

LEGEND

<- - - -> Group Membership Protocol
<-+-+-+-> Multicast Routing Protocol

Figure 1: Multicast IP Delivery Service
=======================================================================

1.3 Multicast Routing Protocols

Multicast routers execute a multicast routing protocol to define
delivery paths that enable the forwarding of multicast datagrams
across an internetwork.

1.3.1  Multicast Routing vs. Multicast Forwarding

Multicast routing protocols establish or help establish the distribution
tree for a given group, which enables multicast forwarding of packets
addressed to the group.  In the case of unicast, routing protocols are
also used to build a forwarding table (commonly called a routing table).
Unicast destinations are entered in the routing table, and associated
with a metric and a next-hop router toward the destination.  The key
difference between unicast forwarding and multicast forwarding is that
multicast packets must be forwarded away from their source.  If a packet
is ever forwarded back toward its source, a forwarding loop could have
formed, possibly leading to a multicast "storm."

Each routing protocol constructs a forwarding table in its own way; the
forwarding table tells each router that for a certain source, or for a
given source sending to a certain group (called a (source, group) pair),
packets are expected to arrive on a certain "inbound" or "upstream"
interface and must be copied to certain (set of) "outbound" or
"downstream" interface(s) in order to reach all known subnetworks with
group members.
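
To make the forwarding-table idea concrete, here is a minimal sketch in
Python of the (source, group) state described above.  The interface
names and table entries are invented for illustration; this is not any
particular router's implementation.

    # Hypothetical multicast forwarding state: for each (source, group)
    # pair, packets must arrive on the expected upstream interface and
    # are replicated onto the downstream interfaces.
    forwarding_table = {
        # (source subnet, group):  (upstream iif, downstream oifs)
        ("128.1.0.0", "224.1.1.1"): ("eth0", {"eth1", "eth2"}),
        ("192.4.0.0", "224.1.1.1"): ("eth2", {"eth0"}),
    }

    def forward(source_subnet, group, arrival_interface):
        entry = forwarding_table.get((source_subnet, group))
        if entry is None:
            return set()          # no state for this (source, group) pair
        iif, oifs = entry
        if arrival_interface != iif:
            return set()          # did not arrive from the source; drop
        return oifs               # copy to all downstream interfaces

    # forward("128.1.0.0", "224.1.1.1", "eth0") --> {"eth1", "eth2"}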

2. MULTICAST SUPPORT FOR EMERGING INTERNET APPLICATIONS

Today, the majority of Internet applications rely on point-to-point
transmission.  The utilization of point-to-multipoint transmission has
traditionally been limited to local area network applications.  Over the
past few years the Internet has seen a rise in the number of new
applications that rely on multicast transmission.  Multicast IP
conserves bandwidth by forcing the network to do packet replication only
when necessary, and offers an attractive alternative to unicast
transmission for the delivery of network ticker tapes, live stock
quotes, multiparty videoconferencing, and shared whiteboard applications
(among others). It is important to note that the applications for IP
Multicast are not solely limited to the Internet.  Multicast IP can also
play an important role in large commercial internetworks.

2.1 Reducing Network Load

Assume that a stock ticker application is required to transmit packets
to 100 stations within an organization's network.  Unicast transmission
to this set of stations will require the periodic transmission of 100
packets where many packets may in fact be traversing the same link(s).
Multicast transmission is the ideal solution for this type of
application since it requires only a single packet stream to be
transmitted by the source which is replicated at forks in the multicast
delivery tree.

Broadcast transmission is not an effective solution for this type of
application since it affects the CPU performance of each and every
station that sees the packet.  Besides, it wastes bandwidth.

2.2 Resource Discovery

Some applications utilize multicast instead of broadcast transmission
to transmit packets to group members residing on the same subnetwork.
However, there is no reason to limit the extent of a multicast
transmission to a single LAN.  The time-to-live (TTL) field in the IP
header can be used to limit the range (or "scope") of a multicast
transmission.
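
As an aside, the standard sockets interface already exposes this scoping
control.  The short Python sketch below sets the multicast TTL on a UDP
socket; the group address and TTL value are arbitrary examples.

    import socket

    # A TTL of 1 keeps multicast traffic on the local subnetwork; larger
    # values allow it to cross that many router hops.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 16)
    sock.sendto(b"scoped announcement", ("224.1.2.3", 5000))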

2.3 Support for Datacasting Applications

Since 1992, the IETF has conducted a series of "audiocast" experiments
in which live audio and video were multicast from the IETF meeting site
to destinations around the world.  In this case, "datacasting" takes
compressed audio and video signals from the source station and transmits
them as a sequence of UDP packets to a group address.  Multicast
delivery today is not limited to audio and video.  Stock quote systems
are one example of a (connectionless) data-oriented multicast
application.  Someday reliable multicast transport protocols may
facilitate efficient inter-computer communication.  Reliable multicast
transport protocols are currently an active area of research and
development.

3. THE INTERNET'S MULTICAST BACKBONE (MBone)

The Internet Multicast Backbone (MBone) is an interconnected set of
subnetworks and routers that support the delivery of IP multicast
traffic.  The goal of the MBone is to construct a semipermanent IP
multicast testbed to enable the deployment of multicast applications
without waiting for the ubiquitous deployment of multicast-capable
routers in the Internet.

The MBone has grown from 40 subnets in four different countries in 1992,
to more than 3400 subnets in over 25 countries by March 1997.  With
new multicast applications and multicast-based services appearing, it
seems likely that the use of multicast technology in the Internet will
keep growing at an ever-increasing rate.

The MBone is a virtual network that is layered on top of sections of the
physical Internet.  It is composed of islands of multicast routing
capability connected to other islands by virtual point-to-point links
called "tunnels."  The tunnels allow multicast traffic to pass through
the non-multicast-capable parts of the Internet.  Tunneled IP multicast
packets are encapsulated as IP-over-IP (i.e., the protocol number is set
to 4) so they look like normal unicast packets to intervening routers.
The encapsulation is added on entry to a tunnel and stripped off on exit
from a tunnel.  This set of multicast routers, their directly-connected
subnetworks, and the interconnecting tunnels comprise the MBone.

Since the MBone and the Internet have different topologies, multicast
routers execute a separate routing protocol to decide how to forward
multicast packets.  The majority of the MBone routers currently use the
Distance Vector Multicast Routing Protocol (DVMRP), although some
portions of the MBone execute either Multicast OSPF (MOSPF) or the
Protocol-Independent Multicast (PIM) routing protocols.  The operation
of each of these protocols is discussed later in this paper.

========================================================================

                             +++++++++
                          / | Island | \
                        /T/ |    A   | \T\
                       /U/  +++++++++  \U\
                     /N/        |        \N\
                   /N/          |          \N\
                 /E/            |            \E\
               /L/              |              \L\
         ++++++++           +++++++++           ++++++++
        | Island |          | Island |---------| Island |
        |    B   |          |    C   | Tunnel  |    D   |
         ++++++++           +++++++++           ++++++++
              \ \               |
                \T\             |
                  \U\           |
                   \N\          |
                     \N\    +++++++++
                       \E\  | Island |
                         \L\|    E   |
                            +++++++++

Figure 2: Internet Multicast Backbone (MBone)
========================================================================

As multicast routing software features become more widely available on
the routers of the Internet, providers may gradually decide to use
"native" multicast as an alternative to using lots of tunnels.

The MBone carries audio and video multicasts of Internet Engineering
Task Force (IETF) meetings, NASA Space Shuttle Missions, US House and
Senate sessions, and live satellite weather photos.  The session
directory (SDR) tool provides users with a listing of the active
multicast sessions on the MBone and allows them to create and/or join a
session.

4. MULTICAST ADDRESSING

A multicast address is assigned to a set of receivers defining a
multicast group.  Senders use the multicast address as the destination
IP address of a packet that is to be transmitted to all group members.

4.1 Class D Addresses

An IP multicast group is identified by a Class D address.  Class D
addresses have their high-order four bits set to "1110" followed by a
28-bit multicast group ID.  Expressed in standard "dotted-decimal"
notation, multicast group addresses range from 224.0.0.0 to
239.255.255.255 (shorthand:  224.0.0.0/4).

Figure 3 shows the format of a 32-bit Class D address.

========================================================================

      0 1 2 3                                                      31
     +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
     |1|1|1|0|                   Multicast Group ID                  |
     +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
             |------------------------28 bits------------------------|

Figure 3: Class D Multicast Address Format
========================================================================
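
Checking whether an address is a Class D (multicast) address is just a
test of its four high-order bits.  A small Python sketch, for
illustration only:

    import socket, struct

    def is_class_d(addr):
        """True if addr falls in 224.0.0.0/4 (top four bits are 1110)."""
        top_bits = struct.unpack("!I", socket.inet_aton(addr))[0] >> 28
        return top_bits == 0b1110

    # is_class_d("239.255.255.255") --> True
    # is_class_d("192.0.2.1")       --> False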

The Internet Assigned Numbers Authority (IANA) maintains a list of
registered IP multicast groups.  The base address 224.0.0.0 is reserved
and cannot be assigned to any group.  The block of multicast addresses
ranging from 224.0.0.1 to 224.0.0.255 is reserved for permanent
assignment to various uses, including routing protocols and other
protocols that require a well-known permanent address.  Multicast
routers should not forward any multicast datagram with destination
addresses in this range, regardless of the packet's TTL.

Some of the well-known groups include:

    "all systems on this subnet"       224.0.0.1
    "all routers on this subnet"       224.0.0.2
    "all DVMRP routers"                224.0.0.4
    "all OSPF routers"                 224.0.0.5
    "all OSPF designated routers"      224.0.0.6
    "all RIP2 routers"                 224.0.0.9
    "all PIM routers"                  224.0.0.13

The remaining groups, ranging from 224.0.1.0 to 239.255.255.255, are
assigned to various multicast applications or remain unassigned.  From
this range, the addresses from 239.0.0.0 to 239.255.255.255 are being
reserved for site-local "administratively scoped" applications, not
Internet-wide applications.

The complete list may be found in the Assigned Numbers RFC (RFC 1700 or
its successor) or at the IANA Web Site:

<URL:http://www.isi.edu/div7/iana/assignments.html>

4.2 Mapping a Class D Address to an IEEE-802 MAC Address

The IANA has been allocated a reserved portion of the IEEE-802 MAC-layer
multicast address space.  All of the addresses in IANA's reserved block
begin with 01-00-5E (hex); to be clear, the range from 01-00-5E-00-00-00
to 01-00-5E-FF-FF-FF is reserved for IP multicast groups.

A simple procedure was developed to map Class-D addresses to this
reserved MAC-layer multicast address block.  This allows IP multicasting
to easily take advantage of the hardware-level multicasting supported by
network interface cards.

The mapping between a Class-D IP address and an IEEE-802 (e.g., FDDI,
Ethernet) MAC-layer multicast address is obtained by placing the
low-order 23 bits of the Class D address into the low-order 23 bits of
IANA's reserved MAC-layer multicast address block.  This simple
procedure removes the need for an explicit protocol for multicast
address resolution on LANs akin to ARP for unicast.  All LAN stations
know this simple transformation, and can easily send any IP multicast
over any IEEE-802-based LAN.

Figure 4 illustrates how the multicast group address 234.138.8.5
(or EA-8A-08-05 expressed in hex) is mapped into an IEEE-802 multicast
address.  Note that the high-order nine bits of the IP address are not
mapped into the MAC-layer multicast address.

========================================================================

   Class D Address: 234.138.8.5 (EA-8A-08-05)

                                |    E      A   |   8
                  Class-D IP    |_______ _______|__ _ _ _
                     Address    |-+-+-+-+-+-+-+-|-+ - - -
                                |1 1 1 0 1 0 1 0|1
                                |-+-+-+-+-+-+-+-|-+ - - -
                                ...................
IEEE-802                           ....not.........
MAC-Layer                            ..............
Multicast                              ....mapped..
Address                                 ...........
|-+-+-+-+-+-+-+-|-+-+-+-+-+-+-+-|-+-+-+-+-+-+-+-|-+ - - -
|0 0 0 0 0 0 0 1|0 0 0 0 0 0 0 0|0 1 0 1 1 1 1 0|0
|-+-+-+-+-+-+-+-|-+-+-+-+-+-+-+-|-+-+-+-+-+-+-+-|-+ - - -
|_______ _______|_______ _______|_______ _______|_______
|   0       1   |   0       0   |   5       E   |   0

    [Address mapping below continued from half above]

         |   8       A   |   0       8   |   0      5    |
         |_______ _______|_______ _______|_______ _______|    Class-D IP
 - - - +-|-+-+-+-+-+-+-+-|-+-+-+-+-+-+-+-|-+-+-+-+-+-+-+-|    Address
         |  0 0 0 1 0 1 0|0 0 0 0 1 0 0 0|0 0 0 0 0 1 0 1|
 - - - +-|-+-+-+-+-+-+-+-|-+-+-+-+-+-+-+-|-+-+-+-+-+-+-+-|
            \____________           ____________________/
                         \___   ___/
                             \ /
                              |
                   23 low-order bits mapped
                              |
                              v

 - - - +-|-+-+-+-+-+-+-+-|-+-+-+-+-+-+-+-|-+-+-+-+-+-+-+-|    IEEE-802
         |  0 0 0 1 0 1 0|0 0 0 0 1 0 0 0|0 0 0 0 0 1 0 1|    MAC-Layer
 - - - +-|-+-+-+-+-+-+-+-|-+-+-+-+-+-+-+-|-+-+-+-+-+-+-+-|    Multicast
         |_______ _______|_______ _______|_______ _______|    Address
         |   0       A   |   0       8   |   0       5   |

Figure 4: Mapping between Class D and IEEE-802 Multicast Addresses
========================================================================

The mapping in Figure 4 places the low-order 23 bits of the IP multicast
group ID into the low order 23 bits of the IEEE-802 multicast address.
Note that the mapping may place multiple IP groups into the same
IEEE-802 address because the upper five bits of the IP class D address
are not used.  Thus, there is a 32-to-1 ratio of IP class D addresses to
valid MAC-layer multicast addresses.  In practice, there is a small
chance of collisions, should multiple groups happen to pick class D
addresses that map to the same MAC-layer multicast address.  However,
chances are that higher-layer protocols will let hosts interpret which
packets are for them (i.e., the chances of two different groups picking
the same class D address and the same set of UDP ports is extremely
unlikely).  For example, the class D addresses 224.10.8.5 (E0-0A-08-05)
and 225.138.8.5 (E1-8A-08-05) map to the same IEEE-802 MAC-layer
multicast address (01-00-5E-0A-08-05) used in this example.
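
The mapping itself is easy to express in code.  The following Python
sketch (illustrative only) copies the low-order 23 bits of the group
address into IANA's reserved block, and shows the 32-to-1 ambiguity
described above:

    import socket, struct

    def class_d_to_mac(group):
        """Map a class D IP address to its IEEE-802 multicast address by
        placing its low-order 23 bits into 01-00-5E-00-00-00."""
        ip = struct.unpack("!I", socket.inet_aton(group))[0]
        mac = (0x01005E << 24) | (ip & 0x7FFFFF)
        return "-".join("%02X" % ((mac >> s) & 0xFF)
                        for s in (40, 32, 24, 16, 8, 0))

    # All three groups share one MAC address (the 32-to-1 ambiguity):
    # class_d_to_mac("234.138.8.5") --> "01-00-5E-0A-08-05"
    # class_d_to_mac("224.10.8.5")  --> "01-00-5E-0A-08-05"
    # class_d_to_mac("225.138.8.5") --> "01-00-5E-0A-08-05"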

4.3 Transmission and Delivery of Multicast Datagrams

When the sender and receivers are members of the same (LAN) subnetwork,
the transmission and reception of multicast frames is a straightforward
process.  The source station simply addresses the IP packet to the
multicast group, the network interface card maps the Class D address to
the corresponding IEEE-802 multicast address, and the frame is sent.
Receivers that wish to capture the frame notify their MAC and IP layers
that they want to receive datagrams addressed to the group.
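
On systems with a standard sockets interface, this amounts to a few
calls.  The Python sketch below shows a receiver joining a group and a
sender transmitting to it; the group address and port are arbitrary
examples.

    import socket, struct

    GROUP, PORT = "224.1.2.3", 5000

    # Receiver: tell the IP layer (and the interface card) to accept
    # frames addressed to the group.
    rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    rx.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    rx.bind(("", PORT))
    mreq = struct.pack("4s4s", socket.inet_aton(GROUP),
                       socket.inet_aton("0.0.0.0"))
    rx.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

    # Sender: simply address the UDP datagram to the class D address;
    # the interface maps it to an IEEE-802 multicast frame address.
    tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    tx.sendto(b"hello, group", (GROUP, PORT))

    print(rx.recvfrom(1500))   # (b'hello, group', (<sender>, <port>))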

Things become somewhat more complex when the sender is attached to one
subnetwork and receivers reside on different subnetworks. In this case,
the routers must implement a multicast routing protocol that permits the
construction of multicast delivery trees and supports multicast packet
forwarding.  In addition, each router needs to implement a group
membership protocol that allows it to learn about the existence of group
members on its directly attached subnetworks.

5. INTERNET GROUP MANAGEMENT PROTOCOL (IGMP)

The Internet Group Management Protocol (IGMP) runs between hosts and
their immediately-neighboring multicast routers.  The mechanisms of the
protocol allow a host to inform its local router that it wishes to
receive transmissions addressed to a specific multicast group.  Also,
routers periodically query the LAN to determine if known group members
are still active.  If there is more than one router on the LAN
performing IP multicasting, one of the routers is elected "querier" and
assumes the responsibility of querying the LAN for group members.

Based on the group membership information learned from the IGMP, a
router is able to determine which (if any) multicast traffic needs to be
forwarded to each of its "leaf" subnetworks.  Multicast routers use this
information, in conjunction with a multicast routing protocol, to
support IP multicasting across the Internet.

5.1 IGMP Version 1

IGMP Version 1 was specified in RFC-1112.  According to the
specification, multicast routers periodically transmit Host Membership
Query messages to determine which host groups have members on their
directly-attached networks.  Query messages are addressed to the
all-hosts group (224.0.0.1) and have an IP TTL = 1.  This means that
Query messages sourced from a router are transmitted onto the
directly-attached subnetwork but are not forwarded by any other
multicast routers.

========================================================================

     Group 1                                   _____________________
       ____            ____                   |  multicast          |
      |    |          |    |                  |            router   |
      |_H2_|          |_H4_|                  |_____________________|
       ----            ----                      +-----+  |
         |               |                 <-----|Query|  |
         |               |                       +-----+  |
         |               |                                |
|---+----+-------+-------+--------+-----------------------+----|
    |            |                |
    |            |                |
  ____         ____              ____
 |    |       |    |            |    |
 |_H1_|       |_H3_|            |_H5_|
  ----         ----              ----
 Group 2      Group 1           Group 1
              Group 2

Figure 5: Internet Group Management Protocol-Query Message
========================================================================

When a host receives an IGMP Query message, it responds with a Host
Membership Report for each group to which it belongs; each Report is
sent to the group being reported.  (This is an important point:  While
IGMP Queries
are sent to the "all hosts on this subnet" class D address (224.0.0.1),
IGMP Reports are sent to the group(s) to which the host(s) belong.
Reports have a TTL of 1, and thus are not forwarded beyond the local
subnetwork.)

In order to avoid a flurry of Reports, each host starts a randomly-
chosen Report delay timer for each of its group memberships.  If, during
the delay period, another Report is heard for the same group, each other
host in that group resets its timer to a new random value. This
procedure spreads Reports out over a period of time and minimizes Report
traffic for each group that has at least one member on a given
subnetwork.
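
The sketch below illustrates the Report-delay procedure just described;
the class and the timer bound are invented for illustration and are not
part of the IGMP specification.

    import random

    class GroupMembership:
        MAX_DELAY = 10.0                     # assumed bound, in seconds

        def __init__(self, group):
            self.group = group
            self.report_at = None            # no Report scheduled

        def on_query(self, now):
            # Schedule our Report after a randomly chosen delay.
            self.report_at = now + random.uniform(0, self.MAX_DELAY)

        def on_report_heard(self, now, group):
            # Another member reported this group during our delay
            # period: re-randomize our timer, spreading Reports out.
            if group == self.group and self.report_at is not None:
                self.report_at = now + random.uniform(0, self.MAX_DELAY)

        def report_due(self, now):
            return self.report_at is not None and now >= self.report_at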

It should be noted that multicast routers do not need to be directly
addressed since their interfaces are required to promiscuously receive
all multicast IP traffic.  Also, a router does not need to maintain a
detailed list of which hosts belong to each multicast group; the router
only needs to know that at least one group member is present on a given
network interface.

Multicast routers periodically transmit Queries to update their
knowledge of the group members present on each network interface. If the
router does not receive a Report from any members of a particular group
after a number of Queries, the router assumes that group members are no
longer present on an interface.  Assuming this is a leaf subnet, this
interface is removed from the delivery tree for this (source, group)
pair.  Multicasts will continue to be sent on this interface if the
router can tell (via multicast routing protocols) that there are
additional group members further downstream reachable via this
interface.

When a host first joins a group, it immediately transmits an IGMP Report
for the group rather than waiting for a router's IGMP Query.  This
reduces the "join latency" for the first host to join a given group on
a particular subnetwork.

5.2 IGMP Version 2

IGMP Version 2 was distributed as part of the IP Multicasting (Version
3.3 through Version 3.8) code package.  Initially, there was no detailed
specification for IGMP Version 2 other than this source code.  However,
the complete specification has recently been published in <draft-ietf-
idmr-igmp-v2-05.txt> which will update the informal specification
contained in Appendix I of RFC-1112.  IGMP Version 2 enhances and
extends IGMP Version 1 while maintaining backward compatibility with
Version 1 hosts.

IGMP Version 2 defines a procedure for the election of the multicast
querier for each LAN.  In IGMP Version 2, the router with the lowest IP
address on the LAN is elected the multicast querier.  In IGMP Version 1,
the querier election was determined by the multicast routing protocol.
This could lead to potential problems because each multicast routing
protocol might use unique methods for determining the multicast querier.

IGMP Version 2 defines a new type of Query message:  the Group-Specific
Query.  Group-Specific Query messages allow a router to transmit a Query
to a specific multicast group rather than all groups residing on a
directly attached subnetwork.

Finally, IGMP Version 2 defines a Leave Group message to lower IGMP's
"leave latency."  When the last host to respond to a Query with a Report
wishes to leave that specific group, the host transmits a Leave Group
message to the all-routers group (224.0.0.2) with the group field set to
the group to be left.  In response to a Leave Group message, the router
begins the transmission of Group-Specific Query messages on the
interface that received the Leave Group message.  If there are no
Reports in response to the Group-Specific Query messages, then if this
is a leaf subnet, this interface is removed from the delivery tree for this
(source, group) pair (as was the case of IGMP version 1).  Again,
multicasts will continue to be sent on this interface if the router can
tell (via multicast routing protocols) that there are additional group
members further downstream reachable via this interface.

5.3 IGMP Version 3

IGMP Version 3 is a preliminary draft specification published in
<draft-cain-igmp-00.txt>.  IGMP Version 3 introduces support for Group-
Source Report messages so that a host can elect to receive traffic from
specific sources of a multicast group.  An Inclusion Group-Source Report
message allows a host to specify the IP addresses of the specific
sources it wants to receive.  An Exclusion Group-Source Report message
allows a host to explicitly identify the sources that it does not want
to receive.  With IGMP Version 1 and Version 2, if a host wants to
receive any traffic for a group, the traffic from all sources for the
group must be forwarded onto the host's subnetwork.

IGMP Version 3 will help conserve bandwidth by allowing a host to select
the specific sources from which it wants to receive traffic.  Also,
multicast routing protocols will be able to make use of this information
conserve bandwidth when constructing the branches of their multicast
delivery trees.

Finally, support for Leave Group messages first introduced in IGMP
Version 2 has been enhanced to support Group-Source Leave messages.
This feature allows a host to leave an entire group or to specify the
specific IP address(es) of the (source, group) pair(s) that it wishes to
leave.
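
The source-filtering state that these Report messages convey can be
pictured as a small per-group data structure.  The Python sketch below
is a data model only; the names are invented and no IGMP message
handling is shown.

    class SourceFilter:
        """Per-group include/exclude source list, as IGMPv3 proposes."""
        def __init__(self, mode, sources):
            self.mode = mode                 # "include" or "exclude"
            self.sources = set(sources)

        def wants(self, source):
            if self.mode == "include":
                return source in self.sources
            return source not in self.sources

    # f = SourceFilter("include", {"128.1.1.1"})
    # f.wants("128.1.1.1") --> True ; f.wants("10.0.0.9") --> False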

6. MULTICAST FORWARDING TECHNIQUES

IGMP provides the final step in a multicast packet delivery service
since it is only concerned with the forwarding of multicast traffic from
a router to group members on its directly-attached subnetworks.  IGMP is
not concerned with the delivery of multicast packets between neighboring
routers or across an internetwork.

To provide an internetwork delivery service, it is necessary to define
multicast routing protocols.  A multicast routing protocol is
responsible for the construction of multicast delivery trees and
enabling multicast packet forwarding.  This section explores a number of
different techniques that may potentially be employed by multicast
routing protocols:

    o "Simpleminded" Techniques
       - Flooding
       - Spanning Trees

    o  Source-Based Tree (SBT) Techniques
       - Reverse Path Broadcasting (RPB)
       - Truncated Reverse Path Broadcasting (TRPB)
       - Reverse Path Multicasting (RPM)

    o "Shared-Tree" Techniques

Later sections will describe how these algorithms are implemented in the
most prevalent multicast routing protocols in the Internet today (e.g.,
the Distance Vector Multicast Routing Protocol (DVMRP), Multicast
extensions to OSPF (MOSPF), Protocol-Independent Multicast (PIM), and
Core Based Trees (CBT)).

6.1 "Simpleminded" Techniques

Flooding and Spanning Trees are two algorithms that can be used to build
primitive multicast routing protocols.  The techniques are primitive
because they tend to waste bandwidth or demand a large amount of
computational resources within the multicast routers involved.
Protocols built on these techniques may work for small networks with few
senders, groups, and routers, but do not scale well to larger numbers of
senders, groups, or routers.  Finally, the ability to handle arbitrary
topologies may not be present, or may be present only in limited ways.

6.1.1 Flooding

The simplest technique for delivering multicast datagrams to all routers
in an internetwork is to implement a flooding algorithm. The flooding
procedure begins when a router receives a packet that is addressed to a
multicast group.  The router employs a protocol mechanism to determine
whether or not it has seen this particular packet before.  If it is the
first reception of the packet, the packet is forwarded on all
interfaces--except the one on which it arrived--guaranteeing that the
multicast packet reaches all routers in the internetwork.  If the router
has seen the packet before, then the packet is discarded.

A flooding algorithm is very simple to implement since a router does not
have to maintain a routing table and only needs to keep track of the
most recently seen packets.  However, flooding does not scale for
Internet-wide applications since it generates a large number of
duplicate packets and uses all available paths across the internetwork
instead of just a limited number.  Also, the flooding algorithm makes
inefficient use of router memory resources since each router is required
to maintain a distinct table entry for each recently seen packet.
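
A sketch of the flooding rule, with invented packet identifiers and
interface names:

    seen_recently = set()      # identifiers of recently seen packets

    def flood(packet_id, arrival_interface, interfaces):
        if packet_id in seen_recently:
            return []          # seen before: discard the duplicate
        seen_recently.add(packet_id)   # real routers age entries out
        return [i for i in interfaces if i != arrival_interface]

    # flood("pkt-1", "eth0", ["eth0", "eth1", "eth2"]) --> ["eth1", "eth2"]
    # flood("pkt-1", "eth1", ["eth0", "eth1", "eth2"]) --> []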

6.1.2 Spanning Tree

A more effective solution than flooding would be to select a subset of
the internetwork topology which forms a spanning tree.  The spanning
tree defines a structure in which only one active path connects any two
routers of the internetwork.  Figure 6 shows an internetwork and a
spanning tree rooted at router RR.

Once the spanning tree has been built, a multicast router simply
forwards each multicast packet to all interfaces that are part of the
spanning tree except the one on which the packet originally arrived.
Forwarding along the branches of a spanning tree guarantees that the
multicast packet will not loop and that it will eventually reach all
routers in the internetwork.

A spanning tree solution is powerful and would be relatively easy to
implement since there is a great deal of experience with spanning tree
protocols in the Internet community.  However, a spanning tree solution
can centralize traffic on a small number of links, and may not provide
the most efficient path between the source subnetwork and group members.
Also, it is computationally difficult to compute a spanning tree in
large, complex topologies.

6.2 Source-Based Tree Techniques

The following techniques all generate a source-based tree by various
means.  The techniques differ in the efficiency of the tree building
process, and the bandwidth and router resources (i.e., state tables)
used to build a source-based tree.

6.2.1 Reverse Path Broadcasting (RPB)

A more efficient solution than building a single spanning tree for the
entire internetwork would be to build a group-specific spanning tree for
each potential source [subnetwork].  These spanning trees would result
in source-based delivery trees emanating from the subnetwork directly
connected to the source station.  Since there are many potential sources
for a group, a different delivery tree is constructed emanating from
each active source.

6.2.1.1 Reverse Path Broadcasting: Operation

The fundamental algorithm to construct these source-based trees is
referred to as Reverse Path Broadcasting (RPB).  The RPB algorithm is
actually quite simple.  For each (source, group) pair, if a packet
arrives on a link that the local router believes to be on the shortest
path back toward the packet's source, then the router forwards the
packet on all interfaces except the incoming interface.  If the packet
does not arrive on the interface that is on the shortest path back

========================================================================

A Sample Internetwork

                     #----------------#
                   / |\              / \
                  |  | \           /    \
                  |  |   \       /       \
                  |  |    \    /          \
                  |  |      \ /            \
                  |  |       #------#       \
                  |  |      /       | \      \
                  |  |     /        |  \      \
                  |   \   /         |   \-------#
                  |    \ /          |     -----/|
                  |     #-----------#----/      |
                  |    /|\---    --/|    \      |
                  |   / |    \  /    \    \     |
                  |  /   \    /\     |     \   /
                  | /      \ /   \   |      \ /
                  #---------#--   \  |   ----#
                               \   \ |  /
                                \--- #-/

A Spanning Tree for this Sample Internetwork

                     #                #
                      \              /
                       \           /
                         \       /
                          \    /
                            \ /
                             #------RR
                                    | \
                                    |  \
                                    |   \-------#
                                    |
                        #-----------#----
                       /|           |    \
                      / |            \    \
                     /   \           |     \
                    /      \         |      \
                   #        #        |       #
                                     |
                                     #
LEGEND

#   Router
RR  Root Router

Figure 6: Spanning Tree
========================================================================

toward the source, then the packet is discarded.  The interface over
which the router expects to receive multicast packets from a particular
source is referred to as the "parent" link.  The outbound links over
which the router forwards the multicast packet are called "child" links
for this group.

This basic algorithm can be enhanced to reduce unnecessary packet
duplication.  If the local router making the forwarding decision can
determine whether a neighboring router on a child link is "downstream,"
then the packet is multicast toward the neighbor. (A "downstream"
neighbor is a neighboring router which considers the local router to be
on the shortest path back toward a given source.) Otherwise, the packet
is not forwarded on the potential child link since the local router
knows that the neighboring router will just discard the packet (since it
will arrive on a non-parent link for the (source, group) pair, relative
to that downstream router).

========================================================================

                 Source
                    .   ^
                    .   |    shortest path back to the
                    .   |     source for THIS router
                    .   |
               "parent link"
                    _
            ______|!2|_____
           |               |
--"child -|!1|           |!3| - "child --
    link"  |    ROUTER     |      link"
           |_______________|

Figure 7: Reverse Path Broadcasting - Forwarding Algorithm
========================================================================
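
The check in Figure 7 can be sketched in a few lines of Python.  Here
unicast_route() stands in for the router's shortest-path lookup toward
the source; it and the interface numbers are hypothetical.

    def rpb_forward(source, arrival_interface, interfaces, unicast_route):
        parent = unicast_route(source)   # shortest path back to source
        if arrival_interface != parent:
            return []                    # not on the reverse path: discard
        # Forward on every child link (all interfaces except the parent).
        return [i for i in interfaces if i != parent]

    # With a route table saying interface 2 leads back to the source:
    # rpb_forward(S, 2, [1, 2, 3], route) --> [1, 3]
    # rpb_forward(S, 1, [1, 2, 3], route) --> []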

The information to make this "downstream" decision is relatively easy to
derive from a link-state routing protocol since each router maintains a
topological database for the entire routing domain.  If a distance-
vector routing protocol is employed, a neighbor can either advertise its
previous hop for the (source, group) pair as part of its routing update
messages or "poison reverse" the route toward a source if it is not on
the distribution tree for that source.  Either of these techniques
allows an upstream router to determine if a downstream neighboring
router is on an active branch of the delivery tree for a certain source
sending to a certain group.

Please refer to Figure 8 for a discussion describing the basic operation
of the enhanced RPB algorithm.

======================================================================

 Source Station------>O
                      |
                      # A
                     +|+
                    + | +
                   +  O  +
                  +       +
                 1         2
                +           +
               +             +
              +               +
          B  +                 +  C
          O-#- - - - -3- - - - -#-O
           +|+                 +|+
          + | +               + | +
         +  O  +             +  O  +
        +       +           +       +
       4         5         6         7
      +           +       +           +
     +             +     +             +
  D #- - - -8- - - -#- - - - -9- - - -# F
    |              E|                 |
    O               O                 O

LEGEND

O   Leaf
+ + Shortest-path
- - Branch
#   Router

Figure 8: Reverse Path Broadcasting - Example
=======================================================================

Note that 4: Mapping between Class D and IEEE-802 Multicast Addresses
========================================================================

4.3 Transmission and Delivery of Multicast Datagrams

When the sender and receivers are members of the same (LAN) subnetwork,
the transmission and reception of multicast frames is a straightforward
process.  The source station (S) simply addresses the IP packet to the
multicast group, the network interface card maps the Class D address to
the corresponding IEEE-802 multicast address, and the frame is sent.
Receivers that wish to capture the frame notify their MAC and IP layers
that they want to receive datagrams addressed to the group.

Things become somewhat more complex when the sender is attached to a leaf one
subnetwork
directly connected to Router A.  For this example, we will look at and receivers reside on different subnetworks. In this case,
the
RPB algorithm from Router B's perspective. Router B receives routers must implement a multicast routing protocol that permits the
construction of multicast delivery trees and supports multicast packet from Router A on link 1.  Since Router B considers link
1
forwarding.  In addition, each router needs to be the parent link for the (source, group) pair, implement a group
membership protocol that allows it forwards to learn about the
packet existence of group
members on link 4, link 5, its directly attached subnetworks.

5. INTERNET GROUP MANAGEMENT PROTOCOL (IGMP)

The Internet Group Management Protocol (IGMP) runs between hosts and
their immediately-neighboring multicast routers.  The mechanisms of the
protocol allow a host to inform its local router that it wishes to
receive transmissions addressed to a specific multicast group.  Also,
routers periodically query the LAN to determine if known group members
are still active.  If there is more than one IP multicast router on
the LAN, one of the routers is elected "querier" and assumes the
responsibility of querying the LAN for the presence of any group
members.

Based on the group membership information learned from the IGMP, a
router is able to determine which (if any) multicast traffic needs to
be forwarded to each of its "leaf" subnetworks.  Multicast routers use
this information, in conjunction with a multicast routing protocol, to
support IP multicasting across the Internet.

5.1 IGMP Version 1

IGMP Version 1 was specified in RFC-1112.  According to the
specification, multicast routers periodically transmit Host Membership
Query messages to determine which host groups have members on their
directly-attached networks.  IGMP Query messages are addressed to the
all-hosts group (224.0.0.1) and have an IP TTL = 1.  This means that
the Query messages sourced from a router are transmitted onto the
directly-attached subnetwork but are not forwarded by any other
multicast routers.

When a host receives an IGMP Query message, it responds with a Host
Membership Report for each group to which it belongs, sent to each
group to which it belongs.  (This is an important point:  While IGMP
Queries

========================================================================

     Group 1                                   _____________________
       ____            ____                   |                     |
      |    |          |    |                  |  multicast router   |
      |_H2_|          |_H4_|                  |_____________________|
       ----            ----                      +-----+  |
         |               |                 <-----|Query|  |
         |               |                       +-----+  |
         |               |                                |
|---+----+-------+-------+--------+-----------------------+----|
    |            |                |
    |            |                |
  ____         ____              ____
 |    |       |    |            |    |
 |_H1_|       |_H3_|            |_H5_|
  ----         ----              ----
 Group 2      Group 1           Group 1
              Group 2

Figure 5: Internet Group Management Protocol-Query Message
========================================================================

are sent to the "all hosts on this subnet" class D address (224.0.0.1),
IGMP Reports are sent to the group(s) to which the host(s) belong.
IGMP Reports, like Queries, are sent with IP TTL = 1, and thus are not
forwarded beyond the local subnetwork.)

In order to avoid a flurry of Reports, each host starts a randomly-
chosen Report delay timer for each of its group memberships.  If,
during the delay period, another Report is heard for the same group,
every other host in that multicast group must reset its timer to a new
random value.  This procedure spreads Reports out over a period of
time and thus minimizes Report traffic for each group that has at
least one member on a given subnetwork.
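
The suppression rule can be sketched in a few lines of Python (the
timer structure and delay bound below are illustrative assumptions,
not taken from any implementation):

    import random

    MAX_REPORT_DELAY = 10.0    # seconds (assumed upper bound)
    report_timer = {}          # group -> time at which to send Report

    def on_query(now, my_groups):
        # Schedule a Report for each group membership at a random delay.
        for group in my_groups:
            report_timer[group] = now + random.uniform(0, MAX_REPORT_DELAY)

    def on_report_heard(now, group):
        # Another member's Report was heard first: reset our timer to a
        # new random value, spreading Reports out over time.
        if group in report_timer:
            report_timer[group] = now + random.uniform(0, MAX_REPORT_DELAY)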

It should be noted that multicast routers do not need to be directly
addressed since their interfaces are required to promiscuously receive
all multicast IP traffic.  Also, a router does not need to maintain a
detailed list of which hosts belong to each multicast group; the
router only needs to know that at least one group member is present on
a given network interface.

Multicast routers periodically transmit IGMP Queries to update their
knowledge of the group members present on each network interface.  If
the router does not receive a Report from any members of a particular
group after a number of Queries, the router assumes that group members
are no longer present on an interface.  Assuming this is a leaf subnet
(i.e., a subnet with group members but no multicast routers connecting
to subnetworks with members further downstream), this interface is
removed from the delivery tree(s) for this group.  Multicasts will
continue to be sent on this interface only if the router can tell (via
multicast routing protocols) that there are additional group members
further downstream reachable via this interface.

When a host first joins a group, it immediately transmits an IGMP Report
for the group rather than waiting for a router's IGMP Query.  This
reduces the "join latency" for the first host to join a given group on
a particular subnetwork.  "Join latency" is measured from the time when
a host's first IGMP Report is sent, until the transmission of the first
packet for that group onto that host's subnetwork.  Of course, if the
group is already active, the join latency is precisely zero.

5.2 IGMP Version 2

IGMP version 2 was distributed as part of the Distance Vector Multicast
Routing Protocol (DVMRP) implementation ("mrouted") source code, from
version 3.3 through 3.8.  Initially, there was no detailed specification
for IGMP version 2 other than this source code.  However, the complete
specification has recently been published in <draft-ietf-idmr-igmp-
v2-06.txt> which will update the specification contained in the first
appendix of RFC-1112.  IGMP version 2 extends IGMP version 1 while
maintaining backward compatibility with version 1 hosts.

IGMP version 2 defines a procedure for the election of the multicast
querier for each LAN.  In IGMP version 2, the multicast router with
the lowest IP address on the LAN is elected the multicast querier.  In
IGMP version 1, the querier election was determined by the multicast
routing protocol.

IGMP version 2 defines a new type of Query message:  the Group-Specific
Query.  Group-Specific Query messages allow a router to transmit a
Query to a specific multicast group rather than all groups residing on
a directly attached subnetwork.

Finally, IGMP version 2 defines a Leave Group message to lower IGMP's
"leave latency."  When the last host to respond to a Query with a
Report wishes to leave that specific group, the host transmits a Leave
Group message to the all-routers group (224.0.0.2) with the group
field set to the group being left.

In response to a Leave Group message, the router begins the
transmission of Group-Specific Query messages on the interface that
received the Leave Group message.  If there are no Reports in response
to the Group-Specific Query messages, then (if this is a leaf subnet)
this interface is removed from the delivery tree(s) for this group (as
was the case with IGMP version 1).  Again, multicasts will continue to
be sent on this interface if the router can tell (via multicast
routing protocols) that there are additional group members further
downstream reachable via this interface.

"Leave latency" is measured from a router's perspective.  In version 1
of IGMP, leave latency was the time from a router's hearing the last
Report for a given group, until the router aged out that interface
from the delivery tree for that group (assuming this is a leaf subnet,
of course).  Note that the only way for the router to tell that this
was the LAST group member is that no reports are heard in some
multiple of the Query Interval (this is on the order of minutes).
IGMP version 2, with the addition of the Leave Group message, allows a
group member to more quickly inform the router that it is done
receiving traffic for a group.  The router then must determine if this
host was the last member of this group on this subnetwork.  To do
this, the router quickly queries the subnetwork for other group
members via the Group-Specific Query message.  If no members send
reports after several of these Group-Specific Queries, the router can
infer that the last member of that group has, indeed, left the
subnetwork.  The benefit of lowering the leave latency is that prune
messages can be sent as soon as possible after the last member host
drops out of the group, instead of having to wait for several minutes'
worth of Query intervals to pass.  If a group was experiencing high
traffic levels, it can be very beneficial to stop transmitting data
for this group as soon as possible.
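
A router's reaction to a Leave Group message might look like the
following sketch (Python; send_group_specific_query and the other
helpers are hypothetical names of our own):

    LAST_MEMBER_QUERY_COUNT = 2   # assumed number of Group-Specific Queries

    def on_leave_group(iface, group):
        # Check whether the leaving host was the last member here.
        for _ in range(LAST_MEMBER_QUERY_COUNT):
            send_group_specific_query(iface, group)
            if report_heard(iface, group):
                return                 # group still has members here
        # No Reports: if this is a leaf subnet and no members exist
        # further downstream, stop forwarding the group onto the
        # interface.
        if is_leaf_subnet(iface) and not downstream_members(iface, group):
            remove_from_delivery_tree(iface, group)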

5.3 IGMP Version 3

IGMP version 3 is a preliminary draft specification published in
<draft-cain-igmp-00.txt>.  IGMP version 3 introduces support for Group-
Source Report messages so that a host can elect to receive traffic from
specific sources of a multicast group.  An Inclusion Group-Source Report
message allows a host to specify the IP addresses of the specific
sources it wants to receive.  An Exclusion Group-Source Report message
allows a host to explicitly identify the sources that it does not want
to receive.  With IGMP version 1 and version 2, if a host wants to
receive any traffic for a group, the traffic from all sources for the
group must be forwarded onto the host's subnetwork.

IGMP version 3 will help conserve bandwidth by allowing a host to
select the specific sources from which it wants to receive traffic.
Also, multicast routing protocols will be able to make use of this
information to conserve bandwidth when constructing the branches of
their multicast delivery trees.

Finally, support for Leave Group messages first introduced in IGMP
version 2 has been enhanced to support Group-Source Leave messages.
This feature allows a host to leave an entire group or to specify the
specific IP address(es) of the (source, group) pair(s) that it wishes
to leave.  Note that at this time, not all existing multicast routing
protocols have mechanisms to support such requests from group members.
This is one issue that will be addressed during the development of
IGMP version 3.
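
The effect of Inclusion and Exclusion Group-Source Reports can be
summarized as a per-group source filter, sketched below in Python (the
data structure is our own illustration):

    # (iface, group) -> (mode, set of source addresses)
    source_filter = {}

    def wants_traffic(iface, group, source):
        mode, sources = source_filter.get((iface, group),
                                          ("EXCLUDE", set()))
        if mode == "INCLUDE":
            return source in sources      # only the listed sources
        return source not in sources      # all but the listed sources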

6. MULTICAST FORWARDING TECHNIQUES

IGMP provides the final step in a multicast packet delivery service
since it is only concerned with the forwarding of multicast traffic
from a router to group members on its directly-attached subnetworks.
IGMP is not concerned with the delivery of multicast packets between
neighboring routers or across an internetwork.

To provide an internetwork delivery service, it is necessary to define
multicast routing protocols.  A multicast routing protocol is
responsible for the construction of multicast delivery trees and
enabling multicast packet forwarding.  This section explores a number
of different techniques that may potentially be employed by multicast
routing protocols:

    o "Simpleminded" Techniques
       - Flooding
       - Spanning Trees

    o  Source-Based Tree (SBT) Techniques
       - Reverse Path Broadcasting (RPB)
       - Truncated Reverse Path Broadcasting (TRPB)
       - Reverse Path Multicasting (RPM)

    o "Shared-Tree" Techniques

Later sections will describe how these algorithms are implemented in
the most prevalent multicast routing protocols in the Internet today
(e.g., Distance Vector Multicast Routing Protocol (DVMRP), Multicast
extensions to OSPF (MOSPF), Protocol-Independent Multicast (PIM), and
Core-Based Trees (CBT)).

6.1 "Simpleminded" Techniques

Flooding and Spanning Trees are two algorithms that can be used to
build primitive multicast routing protocols.  The techniques are
primitive due to the fact that they tend to waste bandwidth or require
a large amount of computational resources within the routers involved.
Also, protocols built on these techniques may work for small networks
with few senders, groups, and routers, but do not scale well to larger
numbers of senders, groups, or routers.  Finally, the ability to
handle arbitrary topologies may not be present or may only be present
in limited ways.

6.1.1 Flooding

The simplest technique for delivering multicast datagrams to all
routers in an internetwork is to implement a flooding algorithm.  The
flooding procedure begins when a router receives a packet that is
addressed to a multicast group.  The router employs a protocol
mechanism to determine whether or not it has seen this particular
packet before.  If it is the first reception of the packet, the packet
is forwarded on all interfaces (except the one on which it arrived),
guaranteeing that the multicast packet reaches all routers in the
internetwork.  If the router has seen the packet before, then the
packet is discarded.

A flooding algorithm is very simple to implement since a router does
not have to maintain a routing table and only needs to keep track of
the most recently seen packets.  However, flooding does not scale for
Internet-wide applications since it generates a large number of
duplicate packets and uses all available paths across the internetwork
instead of just a limited number.  Also, the flooding algorithm makes
inefficient use of router memory resources since each router is
required to maintain a distinct table entry for each recently seen
packet.
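
In sketch form, the flooding decision is as follows (Python; the
packet-identification scheme and send() helper are assumptions of our
own):

    recently_seen = set()   # identifiers of recently received packets

    def flood(packet, arrival_iface, all_ifaces):
        pkt_id = (packet.source, packet.group, packet.ident)
        if pkt_id in recently_seen:
            return                      # duplicate: discard
        recently_seen.add(pkt_id)       # first reception: remember it
        for iface in all_ifaces:
            if iface != arrival_iface:
                send(packet, iface)     # forward on all other interfaces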

6.1.2 Spanning Tree

A more effective solution than flooding would be to select a subset of
the internetwork topology which forms a spanning tree.  The spanning
tree defines a structure in which only one active path connects any
two routers of the internetwork.  Figure 6 shows an internetwork and a
spanning tree rooted at router RR.

Once the spanning tree has been built, a multicast router simply
forwards each multicast packet to all interfaces that are part of the
spanning tree except the one on which the packet originally arrived.
Forwarding along the branches of a spanning tree guarantees that the
multicast packet will not loop and that it will eventually reach all
routers in the internetwork.

A spanning tree solution is powerful and would be relatively easy to
implement since there is a great deal of experience with spanning tree
protocols in the Internet community.  However, a spanning tree
solution can centralize traffic on a small number of links, and may
not provide the most efficient path between the source subnetwork and
group members.  Also, it is computationally difficult to compute a
spanning tree in large, complex topologies.

========================================================================

A Sample Internetwork

                     #----------------#
                   / |\              / \
                  |  | \           /    \
                  |  |   \       /       \
                  |  |    \    /          \
                  |  |      \ /            \
                  |  |       #------#       \
                  |  |      /       | \      \
                  |  |     /        |  \      \
                  |   \   /         |   \-------#
                  |    \ /          |     -----/|
                  |     #-----------#----/      |
                  |    /|\---    --/|    \      |
                  |   / |    \  /    \    \     |
                  |  /   \    /\     |     \   /
                  | /      \ /   \   |      \ /
                  #---------#--   \  |   ----#
                               \   \ |  /
                                \--- #-/

A Spanning Tree for this Sample Internetwork

                     #                #
                      \              /
                       \           /
                         \       /
                          \    /
                            \ /
                             #------RR
                                    | \
                                    |  \
                                    |   \-------#
                                    |
                        #-----------#----
                       /|           |    \
                      / |            \    \
                     /   \           |     \
                    /      \         |      \
                   #        #        |       #
                                     |
                                     #
LEGEND

#   Router
RR  Root Router

Figure 6: Spanning Tree
========================================================================
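
The forwarding rule itself is trivial once the tree is built; a sketch
in Python (tree_ifaces and send() are assumed names):

    tree_ifaces = set()   # interfaces that belong to the spanning tree

    def forward_on_tree(packet, arrival_iface):
        # Only one path connects any two routers, so this never loops.
        for iface in tree_ifaces - {arrival_iface}:
            send(packet, iface)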

6.2 Source-Based Tree Techniques

The following techniques all generate a source-based tree by various
means.  The techniques differ in the efficiency of the tree building
process, and the bandwidth and router resources (i.e., state tables)
used to build a source-based tree.

6.2.1 Reverse Path Broadcasting (RPB)

A more efficient solution than building a single spanning tree for the
entire internetwork would be to build a spanning tree for each
potential source [subnetwork].  These spanning trees would result in
source-based delivery trees emanating from the subnetworks directly
connected to the source stations.  Since there are many potential
sources for a group, a different delivery tree is constructed rooted
at each active source.

6.2.1.1 Reverse Path Broadcasting: Operation

The fundamental algorithm to construct these source-based trees is
referred to as Reverse Path Broadcasting (RPB).  The RPB algorithm is
actually quite simple.  For each source, if a packet arrives on a link
that the local router believes to be on the shortest path back toward
the packet's source, then the router forwards the packet on all
interfaces except the incoming interface.  If the packet does not
arrive on the interface that is on the shortest path back toward the
source, then the packet is discarded.  The interface over which the
router expects to receive multicast packets from a particular source
is referred to as the "parent" link.  The outbound links over which
the router forwards the multicast packet are called "child" links for
this source.

This basic algorithm can be enhanced to reduce unnecessary packet
duplication.  If the local router making the forwarding decision can
determine whether a neighboring router on a child link is
"downstream," then the packet is multicast toward the neighbor.  (A
"downstream" neighbor is a neighboring router which considers the
local router to be on the shortest path back toward a given source.)
Otherwise, the packet is not forwarded on the potential child link
since the local router knows that the neighboring router will just
discard the packet (since it will arrive on a non-parent link for the
source, relative to that downstream router).

========================================================================

                                 Source
                                    |   ^
                                    |   :    shortest path back to the
                                    |   :     source for THIS router
                                    |   :
                               "parent link"
                                    |
                                   ___
                            ______|_2_|______
                           |                 |
               --"child --|1|     ROUTER   |3|-- "child --
                   link"   |_________________|      link"

Figure 7: Reverse Path Broadcasting - Forwarding Algorithm
========================================================================

The information to make this "downstream" decision is relatively easy
to derive from a link-state routing protocol since each router
maintains a topological database for the entire routing domain.  If a
distance-vector routing protocol is employed, a neighbor can either
advertise its previous hop for the source as part of its routing
update messages or "poison reverse" the route toward a source if it is
not on the distribution tree for that source.  Either of these
techniques allows an upstream router to determine if a downstream
neighboring router is on an active branch of the delivery tree for a
certain source.

Please refer to Figure 8 for a discussion describing the basic
operation of the enhanced RPB algorithm.

========================================================================

                 Source Station------>O
                                    A #
                                     +|+
                                    + | +
                                   +  O  +
                                  +       +
                                 1         2
                                +           +
                               +             +
                              +               +
                          B  +                 +  C
                          O-#- - - - -3- - - - -#-O
                           +|+                 -|+
                          + | +               - | +
                         +  O  +             -  O  +
                        +       +           -       +
                       +         +         -         +
                      4           5       6           7
                     +             +     -             +
                    +               + E -               +
                   +                 + -                 +
                D #- - - - -8- - - - -#- - - - -9- - - - -# F
                  |                   |                   |
                  O                   O                   O

LEGEND

O   Leaf
+ + Shortest-path
- - Branch
#   Router

Figure 8: Reverse Path Broadcasting - Example
========================================================================

Note that the source station (S) is attached to a leaf subnetwork
directly connected to Router A.  For this example, we will look at the
RPB algorithm from Router B's perspective.  Router B receives the
multicast packet from Router A on link 1.  Since Router B considers
link 1 to be the parent link for the (source, group) pair, it forwards
the packet on link 4, link 5, and the local leaf subnetworks if they
contain group members.  Router B does not forward the packet on link 3
because it knows from routing protocol exchanges that Router C
considers link 2 as its parent link for the source.  Router B knows
that if it were to forward the packet on link 3, it would be discarded
by Router C since the packet would not be arriving on Router C's
parent link for this source.
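
The RPB forwarding decision, including the "downstream neighbor"
enhancement, can be sketched as follows (Python; parent_link,
child_links, and is_downstream_neighbor stand in for information
supplied by the unicast routing protocol):

    def rpb_forward(packet, arrival_iface):
        if arrival_iface != parent_link(packet.source):
            return                       # not the parent link: discard
        for iface in child_links(packet.source):
            # Enhancement: skip neighbors that would discard the packet
            # because we are not on their shortest path to the source.
            if is_downstream_neighbor(iface, packet.source):
                send(packet, iface)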

6.2.1.2 RPB: Benefits and Limitations

The key benefit to reverse path broadcasting is that it is reasonably
efficient and easy to implement.  It does not require that the router
know about the entire spanning tree, nor does it require a special
mechanism to stop the forwarding process (as flooding does).  In
addition, it guarantees efficient delivery since multicast packets
always follow the "shortest" path from the source station to the
destination group.  Finally, the packets are distributed over multiple
links, resulting in better network utilization since a different tree
is computed for each source.

One of the major limitations of the RPB algorithm is that it does not
take into account multicast group membership when building the
delivery tree for a source.  As a result, datagrams may be
unnecessarily forwarded onto subnetworks that have no members in a
destination group.

6.2.2 Truncated Reverse Path Broadcasting (TRPB)

Truncated Reverse Path Broadcasting (TRPB) was developed to overcome
the limitations of Reverse Path Broadcasting.  With information
provided by IGMP, multicast routers determine the group memberships on
each leaf subnetwork and avoid forwarding datagrams onto a leaf
subnetwork if it does not contain at least one member of a given
destination group.  Thus, the delivery tree is "truncated" by the
router if a leaf subnetwork has no group members.

Figure 9 illustrates the operation of the TRPB algorithm.  In this
example the router receives a multicast packet on its parent link for
the Source.  The router forwards the datagram on interface 1 since
that interface has at least one member of G1.  The router does not
forward the datagram to interface 3 since this interface has no
members in the destination group.  The datagram is forwarded on
interface 4 if and only if a downstream router considers this
subnetwork to be part of its "parent link" for the Source.

======================================================================

                            Source
                                |   :
                                    :
                                |   :     (Source, G1)
                                    v
                                |
                           "parent link"
                                |
              "child link"     ___
          G1            _______|2|_____
           \           |               |
         G3\\ _____   ___    ROUTER   ___      ______ / G2
            \| hub |--|1|             |3|-----|switch|/
            /|_____|  ^--     ___     --^     |______|\
            /        ^ |______|4|_____|  ^            \
          G1        ^       .^---         ^            G3
                   ^      .^   |           ^
                  ^     .^  "child link"    ^
                 Forward       |             Truncate

Figure 9: Truncated Reverse Path Broadcasting - (TRPB)
======================================================================

TRPB removes some limitations of RPB but it solves only part of the
problem.  It eliminates unnecessary traffic on leaf subnetworks but it
does not consider group memberships when building the branches of the
delivery tree.
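
Adding truncation to the RPB rule gives a sketch like this (Python;
igmp_members and is_leaf_subnet are assumed helpers fed by IGMP):

    def trpb_forward(packet, arrival_iface):
        if arrival_iface != parent_link(packet.source):
            return                       # RPB rule: discard
        for iface in child_links(packet.source):
            if is_leaf_subnet(iface) and \
               not igmp_members(iface, packet.group):
                continue                 # truncate: leaf has no members
            send(packet, iface)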

6.2.3 Reverse Path Multicasting (RPM)

Reverse Path Multicasting (RPM) is an enhancement to Reverse Path
Broadcasting and Truncated Reverse Path Broadcasting.

RPM creates a delivery tree that spans only:

    o  Subnetworks with group members, and

    o  Routers and subnetworks along the shortest
       path to subnetworks with group members.

RPM allows the source-based "shortest-path" tree to be pruned so that
datagrams are only forwarded along branches that lead to active
members of the destination group.

6.2.3.1 Operation

When a multicast router receives a packet for a (source, group) pair,
the first packet is forwarded following the TRPB algorithm across all
routers in the internetwork.  Routers on the edge of the network
(which have only leaf subnetworks) are called leaf routers.  The TRPB
algorithm guarantees that each leaf router will receive at least the
first multicast packet.  If there is a group member on one of its leaf
subnetworks, a leaf router forwards the packet based on this group
membership information (or a statically-defined local group on an
interface).

========================================================================

                   Source
                      | :
                      | : (Source, G)
                      | v
                      |
                      |
                    o-#-G
                      |**********
                    ^ |         *
                    , |         *
                    ^ |         *  o
                    , |         * /
                    o-#-o       #***********
                    ^ |\      ^ |\         *
                    ^ | o     ^ | G        *
                    , |       , |          *
                    ^ |       ^ |          *
                    , |       , |          *
                      #         #          #
                     /|\       /|\        /|\
                    o o o     o o o      G o G
LEGEND

 #    Router
 o    Leaf without group member
 G    Leaf with group member
***   Active Branch
---   Pruned Branch
,>,   Prune Message (direction of flow -->)

Figure 10: Reverse Path Multicasting  (RPM)
========================================================================

If none of the subnetworks connected to the leaf router contain group
members, the leaf router may transmit a "prune" message on its parent
link, informing the upstream router that it should not forward packets
for this particular (source, group) pair on the child interface on
which it received the prune message.  Prune messages are sent just one
hop back toward the source.

An upstream router receiving a prune message is required to store the
prune information in memory.  If the upstream router has no recipients
on local leaf subnetworks and has received prune messages from each
downstream neighbor on each of the child interfaces for this (source,
group) pair, then the upstream router does not need to receive
additional packets for the (source, group) pair.  This implies that
the upstream router can also generate a prune message of its own, one
hop further back toward the source.  This cascade of prune messages
results in an active multicast delivery tree, consisting exclusively
of "live" branches (i.e., branches that lead to active receivers).
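
The prune bookkeeping at an upstream router can be sketched as follows
(Python; the structures and helpers are our own illustration):

    prunes = {}   # (source, group) -> set of child interfaces pruned

    def on_prune(iface, source, group):
        pruned = prunes.setdefault((source, group), set())
        pruned.add(iface)
        # If every child interface has pruned and IGMP shows no local
        # members, send a prune of our own one hop toward the source.
        if pruned >= set(child_links(source)) and \
           not any(igmp_members(i, group) for i in leaf_ifaces()):
            send_prune(parent_link(source), source, group)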

Since both the group membership and internetwork topology can change
dynamically, the pruned state of the multicast delivery tree must be
refreshed periodically.  At regular intervals, the prune information
expires from the memory of all routers and the next packet for the
(source, group) pair is forwarded toward all downstream routers.  This
allows "stale state" (prune state for groups that are no longer
active) to be reclaimed by the multicast routers.

6.2.3.2 Limitations

Despite the improvements offered by the RPM algorithm, there are still
several scaling issues that need to be addressed when attempting to
develop an Internet-wide delivery service.

The first limitation is that multicast packets must be periodically
flooded across every router in the internetwork, onto every leaf
subnetwork.  This flooding is wasteful of bandwidth (until the updated
prune state is constructed).

This "flood and prune" paradigm is very powerful, but it wastes
bandwidth and does not scale well, especially if there are receivers
at the edge of the delivery tree which are connected via low-speed
technologies (e.g., ISDN or modem).  Also, note that every router
participating in the RPM algorithm must either have a forwarding table
entry for a (source, group) pair, or have prune state information for
that (source, group) pair.

It is clearly wasteful (especially as the number of active sources and
groups increase) to place such a burden on routers that are not on
every (or perhaps any) active delivery tree.  Shared tree techniques
are an attempt to address these scaling issues, which become quite
acute when most groups' senders and receivers are sparsely distributed
across the internetwork.

6.3 Shared Tree Techniques

The most recent additions to the set of multicast forwarding
techniques are based on a shared delivery tree.  Unlike shortest-path
tree algorithms which build a source-based tree for each source, or
each (source, group) pair, shared tree algorithms construct a single
delivery tree that is shared by all members of a group.  The shared
tree approach is quite similar to the spanning tree algorithm except
it allows the definition of a different shared tree for each group.
Stations wishing to receive traffic for a multicast group must
explicitly join the shared delivery tree.  Multicast traffic for each
group is sent on and received over the same delivery tree, regardless
of the source.

6.3.1 Operation

A shared tree may involve a single router, or set of routers, which
comprise(s) the "core" of a multicast delivery tree.  Figure 11
illustrates how a single multicast delivery tree is shared by all
sources and receivers for a multicast group.

========================================================================

               Source        Source        Source
                  |             |             |
                  |             |             |
                  v             v             v

                 [#] * * * * * [#] * * * * * [#]
                                *
                  ^             *             ^
                  |             *             |
             join |             *             | join
                  |            [#]            |
                 [x]                         [x]
                  :                           :
                member                      member
                 host                        host

LEGEND

[#]  Shared Tree "Core" Routers
* *  Shared Tree Backbone
[x]  Member-hosts' directly-attached routers

Figure 11: Shared Multicast Delivery Tree

========================================================================

The directly attached router for each station wishing to belong to a
particular multicast group is required to send a "join" message toward
the "core" of the particular multicast group.  The directly attached
router only needs to know the address of one of the group's core
routers in order to transmit a join request (via unicast).  The join
request is processed by all intermediate routers, each of which
identifies the interface on which the join was received as belonging
to the group's delivery tree.  The intermediate routers continue to
forward the join message toward the core, marking local downstream
interfaces, until the request reaches a core router (or a router that
is already on the active delivery tree).  This procedure allows each
member-host's directly-attached router to define a branch providing
the shortest path between itself and a core router which is part of
the group's shared delivery tree.

Similar to other multicast forwarding algorithms, shared tree
algorithms do not require that the source of a multicast packet be a
member of a destination group in order to send to a group.  Packets
sourced by a non-group member are simply unicast toward the core until
they reach the first router that is a member of the group's delivery
tree.  When the unicast packet reaches a member of the delivery tree,
the packet is multicast to all outgoing interfaces that are part of
the tree except the incoming link.  This guarantees that traffic
follows the shortest path from the source station to the shared tree,
and it ensures that multicast packets are forwarded to all routers on
the core tree, which in turn forward the traffic to all receivers that
have joined the shared tree.
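
Join processing at an intermediate router reduces to a few steps,
sketched here in Python (all names are illustrative assumptions):

    group_tree = {}   # group -> set of interfaces on the shared tree

    def on_join(iface, group, core_addr):
        branches = group_tree.setdefault(group, set())
        already_on_tree = bool(branches) or am_core_router(group)
        branches.add(iface)           # mark the downstream interface
        if not already_on_tree:
            # Forward the join one hop closer to the core (via unicast).
            send_join(next_hop_toward(core_addr), group, core_addr)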

6.3.2 Benefits

In terms of scalability, shared tree techniques have several advantages

over source-based trees.  Shared tree algorithms make efficient use of
router on a LAN has received either resources since they only require a
poison-reversed route router to maintain state
information for a each group, not for each source, or prunes for all groups emanating
from each (source,
group) pair. (Remember that source subnetwork, then it may itself send source-based tree techniques required all
routers in an internetwork to either a) be on the delivery tree for a
given source or (source, group) pair, or b) to have prune upstream
toward state for
that source or (source, group) pair:  So the entire internetwork must
participate in the source-based tree protocol.)  This improves the
scalability of applications with many active senders since the number of
source (assuming also stations is no longer a scaling issue.  Also, shared tree
algorithms conserve network bandwidth since they do not require that IGMP has told it
multicast packets be periodically flooded across all multicast routers
in the internetwork onto every leaf subnetwork.  This can offer
significant bandwidth savings, especially across low-bandwidth WAN
links, and when receivers sparsely populate the domain of operation.
Finally, since receivers are required to explicitly join the shared
delivery tree, data only ever flows over those links that lead to active
receivers.
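
The scaling difference can be made concrete with a small illustration.
The following Python sketch is purely illustrative (the addresses,
interface numbers, and field names are ours, not taken from any
protocol specification); it contrasts the state a router must keep
under a source-based scheme with the state kept under a shared tree:

    # Illustrative sketch: forwarding state kept per (source, group)
    # pair versus state kept per group.

    # Source-based trees: one entry for every active (source, group).
    source_based_state = {}
    for source in ("128.1.0.1", "128.2.0.1", "128.3.0.1"):
        for group in ("224.1.1.1", "224.2.2.2"):
            source_based_state[(source, group)] = {"iif": 1,
                                                   "oifs": [2, 3]}

    # Shared trees: one entry per group, however many senders exist.
    shared_tree_state = {
        "224.1.1.1": {"core": "10.0.0.1", "oifs": [2, 3]},
        "224.2.2.2": {"core": "10.0.0.1", "oifs": [2]},
    }

    print(len(source_based_state))  # grows with senders: 6 entries
    print(len(shared_tree_state))   # one entry per group: 2 entries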

6.3.3 Limitations

Despite these benefits, there are still several limitations to protocols
that are based on a shared tree algorithm.  Shared trees may result in
traffic concentration and bottlenecks near core routers since traffic
from all sources traverses the same set of links as it approaches the
core.  In addition, a single shared delivery tree may create suboptimal
routes (a shortest path between the source and the shared tree, a
suboptimal path across the shared tree, a shortest path between the
egress core router and the receiver's directly attached router)
resulting in increased delay which may be a critical issue for some
multimedia applications.  (Simulations indicate that latency over a
shared tree may be approximately 10% larger than source-based trees in
many cases, but by the same token, this may be negligible for many
applications.)  Finally, expanding-ring searches are not supported
inside shared-tree domains.

7. "DENSE MODE" ROUTING PROTOCOLS

Certain multicast routing protocols are designed to work well in
environments that have plentiful bandwidth and where it is reasonable
to assume that receivers are rather densely distributed.  In such
scenarios, it is very reasonable to use periodic flooding, or other
bandwidth-intensive techniques that would not necessarily be very
scalable over a wide-area network.  In section 8, we will examine
different protocols that are specifically geared toward efficient WAN
operation, especially for groups that have widely dispersed (i.e.,
sparse) membership.

These protocols include:

o  Distance Vector Multicast Routing Protocol (DVMRP),

o  Multicast Extensions to Open Shortest Path First (MOSPF),

o  Protocol Independent Multicast - Dense Mode (PIM-DM).

These protocols' underlying designs assume that the amount of protocol
overhead (in terms of the amount of state that must be maintained by
each router, the number of router CPU cycles required, and the amount of
bandwidth consumed by protocol operation) is appropriate since receivers
densely populate the area of operation.

7.1. Distance Vector Multicast Routing Protocol (DVMRP)

The Distance Vector Multicast Routing Protocol (DVMRP) is a distance-
vector routing protocol designed to support the forwarding of multicast
datagrams through an internetwork.  DVMRP constructs source-based
multicast delivery trees using the Reverse Path Multicasting (RPM)
algorithm.  Originally, the entire MBone ran only DVMRP.  Today, over
half of the MBone routers still run some version of DVMRP.

DVMRP was first defined in RFC-1075.  The original specification was
derived from the Routing Information Protocol (RIP) and employed the
Truncated Reverse Path Broadcasting (TRPB) technique.  The major
difference between RIP and DVMRP is that RIP calculates the next-hop
toward a destination, while DVMRP computes the previous-hop back toward
a source.  Since mrouted 3.0, DVMRP has employed the Reverse Path
Multicasting (RPM) algorithm.  Thus, the latest implementations of DVMRP
are quite different from the original RFC specification in many regards.
There is an active effort within the IETF Inter-Domain Multicast Routing
(IDMR) working group to specify DVMRP version 3 in standard form.

The current DVMRP v3 Internet-Draft is:

    <draft-ietf-idmr-dvmrp-v3-04.txt>, or
    <draft-ietf-idmr-dvmrp-v3-04.ps>

7.1.1 Physical and Tunnel Interfaces

The ports of a DVMRP router may be either a physical interface to a
directly-attached subnetwork or a tunnel interface to another multicast-
capable island.  All interfaces are configured with a metric specifying
cost for the given port, and a TTL threshold that limits the scope of a
multicast transmission.  In addition, each tunnel interface must be
explicitly configured with two additional parameters:  The IP address of
the local router's tunnel interface and the IP address of the remote
router's interface.
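
The per-port parameters just described can be pictured as a small
configuration record.  The following Python sketch is illustrative
only; the field names are ours and are not drawn from mrouted or the
DVMRP specification:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class DvmrpInterface:
        # Illustrative model of a DVMRP port: either a physical
        # interface or a tunnel to another multicast-capable island.
        name: str
        metric: int              # cost of this port
        ttl_threshold: int       # minimum datagram TTL to cross it
        local_tunnel_ip: Optional[str] = None    # tunnels only
        remote_tunnel_ip: Optional[str] = None   # tunnels only

        @property
        def is_tunnel(self) -> bool:
            return self.local_tunnel_ip is not None

    # A physical port and an explicitly configured tunnel:
    eth0 = DvmrpInterface("eth0", metric=1, ttl_threshold=1)
    tun0 = DvmrpInterface("tun0", metric=3, ttl_threshold=64,
                          local_tunnel_ip="192.0.2.1",
                          remote_tunnel_ip="198.51.100.7")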

========================================================================

   TTL                                Scope
Threshold
________________________________________________________________________
    0                                Restricted to the same host
    1                                Restricted to the same subnetwork
   15                                Restricted to the same site
   63                                Restricted to the same region
  127                                Worldwide
  191                                Worldwide; limited bandwidth
  255                                Unrestricted in scope

Table 1:   TTL Scope Control Values
========================================================================

A multicast router will only forward a multicast datagram across an
interface if the TTL field in the IP header is greater than the TTL
threshold assigned to the interface.  Table 1 lists the conventional
TTL values that are used to restrict the scope of an IP multicast.  For
example, a multicast datagram with a TTL of less than 16 is restricted
to the same site and should not be forwarded across an interface to
other sites in the same region.
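
The forwarding rule is easy to state in code.  The sketch below is
purely illustrative and simply applies Table 1's convention: a datagram
crosses an interface only if its remaining TTL exceeds the interface's
threshold:

    # Illustrative TTL-scoping check, following Table 1's convention.
    def may_forward(datagram_ttl: int, ttl_threshold: int) -> bool:
        # Forward only if the IP header TTL exceeds the threshold.
        return datagram_ttl > ttl_threshold

    # A datagram with TTL 15 stays within the site (threshold 15
    # blocks it), while a datagram with TTL 127 may travel worldwide.
    assert may_forward(15, 15) is False   # restricted to the site
    assert may_forward(127, 63) is True   # crosses a regional boundary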

TTL-based scoping is not always sufficient for all applications.
Conflicts arise when trying to simultaneously enforce limits on
topology, geography, and bandwidth.  In particular, TTL-based scoping
cannot handle overlapping regions, which is a necessary characteristic
of administrative regions.  In light of these issues, "administrative"
scoping was created in 1994, to provide a way to do scoping based on
multicast address.  Certain addresses would be usable within a given
administrative scope (e.g., a corporate internetwork) but would not be
forwarded onto the global MBone.  This allows for privacy, and address
reuse within the class D address space.  The range from 239.0.0.0 to
239.255.255.255 has been reserved for administrative scoping.  While
administrative scoping has been in limited use since 1994 or so, it has
yet to be widely deployed.  The IETF MBoneD working group is working on
the deployment of administrative scoping.  For additional information,
please see <draft-ietf-mboned-admin-ip-space-01.txt> or its successor,
entitled "Administratively Scoped IP Multicast."
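
Because administrative scoping is purely address-based, a boundary
router can test it with a single prefix comparison.  A minimal sketch
(ours, not taken from any implementation):

    import ipaddress

    ADMIN_SCOPED = ipaddress.ip_network("239.0.0.0/8")

    def is_administratively_scoped(group: str) -> bool:
        # Addresses in 239.0.0.0 - 239.255.255.255 must not be
        # forwarded across an administrative boundary (e.g., onto
        # the global MBone).
        return ipaddress.ip_address(group) in ADMIN_SCOPED

    assert is_administratively_scoped("239.1.2.3")
    assert not is_administratively_scoped("224.2.127.254")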

7.1.2 Basic Operation

DVMRP implements the Reverse Path Multicasting (RPM) algorithm.
According to RPM, the first datagram for any (source, group) pair is
forwarded across the entire internetwork (providing the packet's TTL and
router interface thresholds permit this).  Upon receiving this traffic,
leaf routers may transmit prune messages back toward the source if there
are no group members on their directly-attached leaf subnetworks.  The
prune messages remove all branches that do not lead to group members
from the tree, leaving a source-based shortest path tree.

After a period of time, the prune state for each (source, group) pair
expires to reclaim stale prune state (from groups that are no longer in
use).  If those groups are actually still in use, a subsequent datagram
for the (source, group) pair will be flooded across all downstream
routers.  This flooding will result in a new set of prune messages,
serving to regenerate the source-based shortest-path tree for this
(source, group) pair.  In current implementations of RPM (notably
DVMRP), prune messages are not reliably transmitted, so the prune
lifetime must be kept short to compensate for lost prune messages.

DVMRP also implements a mechanism to quickly "graft" back a previously
pruned branch of a group's delivery tree.  If a router that had sent a
prune message for a (source, group) pair discovers new group members on
a leaf network, it sends a graft message to the previous-hop router for
this source.  When an upstream router receives a graft message, it
cancels out the previously-received prune message.  Graft messages
cascade (reliably) hop-by-hop back toward the source until they reach
the nearest "live" branch point on the delivery tree.  In this way,
previously-pruned branches are quickly restored to a given delivery
tree.
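
The flood-prune-graft cycle can be summarized as a small state machine.
The following Python sketch is a simplification for illustration only
(a real DVMRP implementation keeps per-interface timers and much more
state; the constant and function names are ours):

    import time

    PRUNE_LIFETIME = 120  # seconds; short, since prunes are unreliable

    # prune_state[(source, group)] holds when the prune was received.
    prune_state = {}

    def on_prune(source, group):
        # Downstream reports no members: stop forwarding this pair.
        prune_state[(source, group)] = time.time()

    def on_graft(source, group):
        # A graft cancels a previously received prune immediately.
        prune_state.pop((source, group), None)

    def should_flood(source, group):
        # Flood unless an unexpired prune is in place; expired prune
        # state is reclaimed, so the next datagram is flooded and the
        # tree is re-pruned.
        received = prune_state.get((source, group))
        if received is None:
            return True
        if time.time() - received > PRUNE_LIFETIME:
            del prune_state[(source, group)]
            return True
        return False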

7.1.3 DVMRP Router Functions

In Figure 12, Router C is downstream and may potentially receive
datagrams from the source subnetwork from Router A or Router B.  If
Router A's metric to the source subnetwork is less than Router B's
metric, then Router A is dominant over Router B for this source.

This means that Router A will forward any traffic from the source
subnetwork and Router B will discard traffic received from that source.
However, if Router A's metric is equal to Router B's metric, then the
router with the lower IP address on its downstream interface (child
link) becomes the Dominant Router for this source.  Note that on a
subnetwork with multiple routers forwarding to groups with multiple
sources, different routers may be dominant for each source.

7.1.4 DVMRP Routing Table

The DVMRP process periodically exchanges routing table updates with its
DVMRP neighbors.  These updates are logically independent of those
generated by any unicast Interior Gateway Protocol.

Since the DVMRP was developed to route multicast and not unicast
traffic, a router will probably run multiple routing processes in
practice:  One to support the forwarding of unicast traffic and another
to support the forwarding of multicast traffic. (This can be convenient:
A router can be configured to only route multicast IP, with no unicast

========================================================================

                                   To
              .-<-<-<-<-<-<-Source Subnetwork->->->->->->->->--.
              v                                                v
              |                                                |
          parent link                                      parent link
              |                                                |
        _____________                                    _____________
       | Router A    |                                  | Router B    |
       |             |                                  |             |
        -------------                                    -------------
              |                                                |
         child link                                       child link
              |                                                |
 ---------------------------------------------------------------------
                                       |
                                  parent link
                                       |
                                 _____________
                                | Router C    |
                                |             |
                                 -------------
                                       |
                                  child link
                                       |

Figure 12. DVMRP Dominant Router in a Redundant Topology
========================================================================

IP routing.  This may be a list useful capability in firewalled
environments.)
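
Returning to the election described above, the choice between Routers A
and B in Figure 12 reduces to a two-key comparison.  In this
illustrative sketch (ours, not DVMRP wire-protocol logic), the router
advertising the lowest metric wins, and the lower IP address on the
shared subnetwork breaks ties:

    # Illustrative dominant-router election for one source subnetwork.
    # Candidates carry (metric toward the source, IP on the child link).
    def elect_dominant(candidates):
        # Lower metric wins; on equal metrics the lower IP address wins.
        return min(candidates,
                   key=lambda c: (c["metric"],
                                  tuple(int(o)
                                        for o in c["ip"].split("."))))

    router_a = {"name": "A", "metric": 3, "ip": "128.6.3.2"}
    router_b = {"name": "B", "metric": 3, "ip": "128.6.3.1"}
    print(elect_dominant([router_a, router_b])["name"])  # -> "B"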

Again, consider Figure 12:  There are two types of routers in this
figure:  dominant and subordinate; assume in this example that Router B
is dominant, Router A is subordinate, and Router C is part of the
downstream distribution tree.  In general, which routers are dominant
or subordinate may be different for each source!  A subordinate router
is one that is NOT on the shortest path tree back toward a source.  The
dominant router can tell this because the subordinate router will
'poison-reverse' the route for this source in its routing updates which
are sent on the common LAN (i.e., Router A sets the metric for this
source to 'infinity').  The dominant router keeps track of subordinate
routers on a per-source basis...it never needs or expects to receive a
prune message from a subordinate router.  Only routers that are truly on
the downstream distribution tree will ever need to send prunes to the
dominant router.  If a dominant router on a LAN has received either a
poison-reversed route for a source, or prunes for all groups emanating
from that source subnetwork, then it may itself send a prune upstream
toward the source (assuming also that IGMP has told it that there are no
local receivers for any group from this source).

A sample routing table for a DVMRP router is shown in Figure 13.  Unlike

========================================================================

    Source      Subnet     From           Metric   Status   TTL
     Prefix      Mask       Gateway

    128.1.0.0  255.255.0.0  128.7.5.2       3        Up     200
    128.2.0.0  255.255.0.0  128.7.5.2       5        Up     150
    128.3.0.0  255.255.0.0  128.6.3.1       2        Up     150
    128.3.0.0  255.255.0.0  128.6.3.1       4        Up     200

Figure 13: DVMRP Routing Table
========================================================================

the table that would be created by a unicast routing protocol such as
RIP, OSPF, or BGP, the DVMRP routing table contains Source Prefixes and
From-Gateways instead of Destination Prefixes and Next-Hop Gateways.

The routing table represents the shortest path (source-based) spanning
tree to every possible source prefix in the internetwork--the Reverse
Path Broadcasting (RPB) tree.  The DVMRP routing table does not
represent group membership or received prune messages.

The key elements in the DVMRP routing table include the following items:

Source Prefix          A subnetwork which is a potential or actual
                       source of multicast datagrams.

Subnet Mask            The subnet mask associated with the Source
                       Prefix.  Note that the DVMRP provides the subnet
                       mask for each source subnetwork (in other words,
                       the DVMRP is classless).

From-Gateway           The previous-hop router leading back toward a
                       particular Source Prefix.

TTL                    The time-to-live is used for table management
                       and indicates the number of seconds before an
                       entry is removed from the routing table.  This
                       TTL has nothing at all to do with the TTL used
                       in TTL-based scoping.
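
A routing table row of this form supports the reverse path forwarding
(RPF) check directly: a datagram from a given Source Prefix is accepted
only if it arrives from the From-Gateway's direction.  An illustrative
sketch (the field names follow Figure 13; the code is ours, not any
actual implementation):

    # Illustrative DVMRP routing table keyed by (source prefix, mask),
    # mirroring the columns of Figure 13.
    routing_table = {
        ("128.1.0.0", "255.255.0.0"): {"from_gateway": "128.7.5.2",
                                       "metric": 3, "ttl": 200},
        ("128.3.0.0", "255.255.0.0"): {"from_gateway": "128.6.3.1",
                                       "metric": 2, "ttl": 150},
    }

    def rpf_accept(source_prefix, mask, arrival_gateway):
        # Accept only if the datagram arrived from the previous-hop
        # router on the shortest path back toward the source.
        entry = routing_table.get((source_prefix, mask))
        return (entry is not None
                and entry["from_gateway"] == arrival_gateway)

    assert rpf_accept("128.1.0.0", "255.255.0.0", "128.7.5.2")
    assert not rpf_accept("128.1.0.0", "255.255.0.0", "128.6.3.1")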

7.1.5 DVMRP Forwarding Table

Since the DVMRP routing table is not aware of group membership, the
DVMRP process builds a forwarding table based on a combination of the
information contained in the multicast routing table, known groups, and
received prune messages.  The forwarding table represents the local
router's understanding of the shortest path source-based delivery tree
for each (source, group) pair--the Reverse Path Multicasting (RPM) tree.

========================================================================

    Source      Multicast     TTL   InIntf    OutIntf(s)
     Prefix      Group

     128.1.0.0  224.1.1.1     200    1 Pr      2p3p
                224.2.2.2     100    1         2p3
                224.3.3.3     250    1         2
     128.2.0.0  224.1.1.1     150    2         2p3

Figure 14: DVMRP Forwarding Table
========================================================================

The forwarding table for a sample DVMRP router is shown in Figure 14.
The elements in this display include the following items:

Source Prefix           The subnetwork sending multicast datagrams
                        to the specified groups (one group per row).

Multicast Group         The Class D IP address to which multicast
                        datagrams are addressed.  Note that a given
                        Source Prefix may contain sources for several
                        Multicast Groups.

InIntf                  The parent interface for the (source, group)
                        pair.  A 'Pr' in this column indicates that a
                        prune message has been sent to the upstream
                        router (the From-Gateway for this Source Prefix
                        in the DVMRP routing table).

OutIntf(s)              The child interfaces over which multicast
                        datagrams for this (source, group) pair are
                        forwarded.  A 'p' in this column indicates
                        that the router has received a prune message(s)
                        from a (all) downstream router(s) on this
                        port.
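
Putting the table to work, the forwarding decision for an arriving
datagram reduces to a parent-interface check followed by fan-out on the
unpruned child interfaces.  Another illustrative sketch, keyed like
Figure 14 (again, the code and names are ours):

    # Illustrative forwarding step based on Figure 14's layout.
    # Each entry: parent interface plus child interfaces, with the set
    # of children over which downstream routers have sent prunes.
    forwarding_table = {
        ("128.1.0.0", "224.2.2.2"): {"in": 1, "out": [2, 3],
                                     "pruned": {2}},
        ("128.2.0.0", "224.1.1.1"): {"in": 2, "out": [2, 3],
                                     "pruned": {2}},
    }

    def forward(source_prefix, group, arrival_intf):
        entry = forwarding_table.get((source_prefix, group))
        if entry is None or arrival_intf != entry["in"]:
            return []            # failed the parent check: drop
        # Replicate only onto children that have not been pruned.
        return [i for i in entry["out"] if i not in entry["pruned"]]

    print(forward("128.1.0.0", "224.2.2.2", 1))  # -> [3]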

7.2. Multicast Extensions to OSPF (MOSPF)

Version 2 of the CPU(s) Open Shortest Path First (OSPF) routing protocol is
defined in RFC-1583.  OSPF is an Interior Gateway Protocol (IGP) that
distributes unicast topology information among routers belonging to a
single OSPF "Autonomous System."  OSPF is based on link-state algorithms
which permit rapid route calculation with a minimum of routing protocol
traffic.  In addition to efficient route calculation, OSPF is an open
standard that supports hierarchical routing, load balancing, and the
import of external routing information.

The Multicast Extensions to OSPF (MOSPF) are defined in RFC-1584.  MOSPF
routers maintain a current image of the network topology through the
unicast OSPF link-state routing protocol.  The multicast extensions to
OSPF are built on top of OSPF Version 2 so that a multicast routing
capability can be incrementally introduced into an OSPF Version 2
routing domain.  Routers running MOSPF will interoperate with non-MOSPF
routers when forwarding unicast IP data traffic.  MOSPF does not support
tunnels.

7.2.1 Intra-Area Routing with MOSPF

Intra-Area Routing describes the basic routing algorithm employed by
MOSPF.  This elementary algorithm runs inside a single OSPF area and
supports multicast forwarding when a source and all destination group
members reside in the same OSPF area, or when the entire OSPF Autonomous
System is a single area (and the source is inside that area...).  The
following discussion assumes that the reader is familiar with OSPF.

7.2.1.1 Local Group Database

Similar to all other multicast routing protocols, MOSPF routers use the
Internet Group Management Protocol (IGMP) to monitor multicast group
membership on directly-attached subnetworks.  MOSPF routers maintain a
"local group database" which lists directly-attached groups and
determines the local router's responsibility for delivering multicast
datagrams to these groups.

On any given subnetwork, the transmission of IGMP Host Membership
Queries is performed solely by the Designated Router (DR).  However,
the responsibility of listening to IGMP Host Membership Reports is
performed by not only the Designated Router (DR) but also the Backup
Designated Router (BDR).  Therefore, in a mixed LAN containing both
MOSPF and OSPF routers, an MOSPF router must be elected the DR for the
subnetwork.  This can be achieved by setting the OSPF RouterPriority to
zero in each non-MOSPF router to prevent them from becoming the (B)DR.

The DR is responsible for communicating group membership information to
all other routers in the OSPF area by flooding Group-Membership LSAs.
Similar to Router-LSAs and Network-LSAs, Group-Membership LSAs are only
flooded within a single area.

7.2.1.2 Datagram's Shortest Path Tree

The datagram's shortest path tree describes the path taken by a
multicast datagram as it travels through the area from the source
subnetwork to each of the group members' subnetworks.  The shortest
path tree for each (source, group) pair is built "on demand" when a
router receives the first multicast datagram for a particular (source,
group) pair.

When the initial datagram arrives, the source subnetwork is located in
the MOSPF link state database.  The MOSPF link state database is simply
the standard OSPF link state database with the addition of Group-
Membership LSAs.  Based on the Router- and Network-LSAs in the OSPF
link state database, a source-based shortest-path tree is constructed
using Dijkstra's algorithm.  After the tree is built, Group-Membership
LSAs are used to prune the tree such that the only remaining branches
lead to subnetworks containing members of this group.  The output of
these algorithms is a pruned source-based tree rooted at the datagram's
source.

========================================================================

                      S
                      |
                      |
                   A  #
                     / \
                    /   \
                   1     2
                  /       \
               B #         # C
                / \         \
               /   \         \
              3     4         5
             /       \         \
          D #         # E       # F
                     / \         \
                    /   \         \
                   6     7         8
                  /       \         \
               G #         # H       # I

LEGEND

 #   Router

Figure 15. Shortest Path Tree for a (S, G) pair
========================================================================

To forward multicast datagrams to downstream members of a group, each
router must determine its position in the datagram's shortest path tree.
Assume that Figure 15 illustrates the shortest path tree for a given
(source, group) pair.  Router E's upstream node is Router B and there
are two downstream interfaces:  one connecting to Subnetwork 6 and
another connecting to Subnetwork 7.

Note the following properties of the basic MOSPF routing algorithm:

    o  For a given multicast datagram, all routers within an OSPF
       area calculate the same source-based shortest path delivery
       tree.  Tie-breakers have been defined to guarantee that if
       several equal-cost paths exist, all routers agree on a single
       path through the area.  Unlike unicast OSPF, MOSPF does not
       support the concept of equal-cost multipath routing.

    o  Synchronized link state databases containing Group-Membership
       LSAs allow an MOSPF router to build a source-based shortest-
       path tree in memory, working forward from the source to the
       group member(s).  Unlike the DVMRP, this means that the first
       datagram of a new transmission does not have to be flooded to
       all routers in an area.

    o  The "on demand" construction of the source-based delivery tree
       has the benefit of spreading calculations over time, resulting
       in a lesser impact for participating routers.  Of course, this
       may strain the CPU(s) in a router if many new (source, group)
       pairs appear at about the same time, or if there are a lot of
       events which force the MOSPF process to flush and rebuild its
       forwarding cache.  In a stable topology with long-lived
       multicast sessions, these effects should be minimal.
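
The "in memory" computation in the second bullet is essentially
Dijkstra's algorithm followed by a pruning pass.  The sketch below is a
toy version of that idea only; it does not reproduce MOSPF's actual
tie-breaking rules or LSA machinery, and the topology is invented:

    import heapq

    def shortest_path_tree(links, source):
        # Dijkstra over a {node: [(neighbor, cost), ...]} map; returns
        # each node's parent in the source-rooted shortest path tree.
        dist, parent, heap = {source: 0}, {}, [(0, source)]
        while heap:
            d, node = heapq.heappop(heap)
            if d > dist.get(node, float("inf")):
                continue
            for neighbor, cost in links.get(node, ()):
                if d + cost < dist.get(neighbor, float("inf")):
                    dist[neighbor] = d + cost
                    parent[neighbor] = node
                    heapq.heappush(heap, (d + cost, neighbor))
        return parent

    def prune(parent, members):
        # Keep only routers on a path from the source to a group
        # member (the role Group-Membership LSAs play in MOSPF).
        keep = set()
        for m in members:
            while m in parent and m not in keep:
                keep.add(m)
                m = parent[m]
        return keep

    links = {"A": [("B", 1), ("C", 2)], "B": [("E", 4)],
             "E": [("H", 7)]}
    tree = shortest_path_tree(links, "A")
    print(sorted(prune(tree, {"H"})))  # -> ['B', 'E', 'H']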

7.2.1.3 Forwarding Cache

Each MOSPF router makes its forwarding decision based on the contents of
its forwarding cache.  Contrary to DVMRP, MOSPF forwarding is not RPF-
based.  The forwarding cache is built from the source-based shortest-
path tree for each (source, group) pair, and the router's local group
database.  After the router discovers its position in the shortest path
tree, a forwarding cache entry is created containing the (source, group)
pair, its expected upstream interface, and the necessary downstream
interface(s).  The forwarding cache entry is now used to quickly
forward all subsequent datagrams from this source to this group.  If
a new source begins sending to a new group, MOSPF must first calculate
the distribution tree so that it may create a cache entry that can then
be used to forward the packet.

Figure 16 displays the forwarding cache for an example MOSPF router.
The elements in the display include the following items:

Dest. Group            A known destination group address to which
                       datagrams are currently being forwarded, or to
                       which traffic was sent "recently" (i.e., since
                       the last topology or group membership or other
                       event which (re-)initialized MOSPF's forwarding
                       cache).

Source                 The datagram's source host address.  Each (Dest.
                       Group, Source) pair uniquely identifies a
                       separate forwarding cache entry.

========================================================================

    Dest. Group     Source       Upstream     Downstream   TTL

    224.1.1.1       128.1.0.2      11          12   13      5
    224.1.1.1       128.4.1.2      11          12   13      2
    224.1.1.1       128.5.2.2      11          12   13      3
    224.2.2.2       128.2.0.3      12          11           7

Figure 16: MOSPF Forwarding Cache
========================================================================

Upstream               Datagrams matching this row's Dest. Group and
                       Source must be received on this interface.

Downstream             If a datagram matching this row's Dest. Group
                       and Source is received on the correct Upstream
                       interface, then it is forwarded across the listed
                       Downstream interfaces.

TTL                    The minimum number of hops a datagram must cross
                       to reach any of the Dest. Group's members.  An
                       MOSPF router may discard a datagram that does not
                       have a high enough TTL to reach even the closest
                       group member.
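
A cache organized this way makes the per-datagram work a single lookup.
The following sketch (ours, purely illustrative) mirrors Figure 16's
columns:

    # Illustrative MOSPF forwarding cache keyed by (Dest. Group,
    # Source), mirroring Figure 16's columns.
    cache = {
        ("224.1.1.1", "128.1.0.2"): {"up": 11, "down": [12, 13],
                                     "ttl": 5},
        ("224.2.2.2", "128.2.0.3"): {"up": 12, "down": [11],
                                     "ttl": 7},
    }

    def forward(group, source, arrival_intf, datagram_ttl):
        entry = cache.get((group, source))
        if entry is None or arrival_intf != entry["up"]:
            return []           # wrong upstream interface: drop
        if datagram_ttl < entry["ttl"]:
            return []           # cannot reach any member: discard
        return entry["down"]

    print(forward("224.1.1.1", "128.1.0.2", 11, 64))  # -> [12, 13]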
7.2.2 Mixing MOSPF and OSPF Routers

MOSPF routers can be combined with non-multicast OSPF routers.  This
permits the gradual deployment of MOSPF and allows experimentation with
multicast routing on a limited scale.  When MOSPF and non-MOSPF routers
are mixed within an Autonomous System, all routers will interoperate in
the forwarding of unicast datagrams.

It is important to note that an MOSPF router is required to eliminate
all non-multicast OSPF routers when it builds its source-based shortest-
path delivery tree.  An MOSPF router can easily determine the multicast
capability of any other router based on the setting of the multicast-
capable bit (MC-bit) in the Options field of each router's link state
advertisements.  The omission of non-multicast routers can create a
number of potential problems when forwarding multicast traffic:

    o  The Designated Router for a multi-access network must be an
       MOSPF router.  If a non-multicast OSPF router is elected the
       DR, the subnetwork will not be selected to forward multicast
       datagrams since a non-multicast DR cannot generate Group-
       Membership LSAs for its subnetwork (because it is not running
       IGMP, so it won't hear IGMP Host Membership Reports).  To use
       MOSPF, it is a good idea to ensure that at least two of the
       MOSPF routers on each LAN have higher router_priority values
       than any non-MOSPF routers.  A possible strategy would be to
       configure any non-MOSPF routers with a router_priority of
       zero, so that they cannot become (B)DR.

    o  Multicast datagrams may be forwarded along suboptimal routes
       since the shortest path between two points may require traversal
       of a non-multicast OSPF router.

    o  Even though there is unicast connectivity to a destination,
       there may not be multicast connectivity.  For example, the
       network may partition with respect to multicast connectivity
       since the only path between two points could require traversal
       of a non-multicast-capable OSPF router.

    o  The forwarding of multicast and unicast datagrams between
       two points may follow entirely different paths through the
       internetwork.  This may make some routing problems a bit more
       challenging to debug.

7.2.3 Inter-Area Routing with MOSPF

Inter-area routing involves the case where a datagram's source and some
of its destination group members reside in different OSPF areas.  It
should be noted that the forwarding of multicast datagrams continues to
be determined by the contents of the forwarding cache which is still
built from the local group database and the datagram source-based trees.
The major differences are related to the way that group membership
information is propagated and the way that the inter-area source-based
tree is constructed.

7.2.3.1 Inter-Area Multicast Forwarders

In MOSPF, a subset of an area's Area Border Routers (ABRs) function as
"inter-area multicast forwarders."  An inter-area multicast forwarder is
responsible for the forwarding of group membership information and
multicast datagrams between areas.  Configuration parameters determine
whether or not a particular ABR also functions as an inter-area
multicast forwarder.

Inter-area multicast forwarders summarize their attached areas' group
membership information to the backbone by originating new Group-
Membership LSAs into the backbone area.  It is important to note that
the summarization of group membership in MOSPF is asymmetric.  This
means that group membership information from non-backbone areas is
flooded into the backbone, but group membership from the backbone (or
from any other non-backbone areas) is not flooded into any non-backbone
area(s).

To permit the forwarding of multicast traffic between areas, MOSPF
introduces the concept of a "wild-card multicast receiver."  A wild-card
multicast receiver is a router that receives all multicast traffic
generated in an area.  In non-backbone areas, all inter-area multicast
forwarders operate as wild-card multicast receivers.  This guarantees
that all multicast traffic originating in any non-backbone area is
delivered to its inter-area multicast forwarder, and then if necessary
into the backbone area.  Since the backbone knows group membership for
all areas, the datagram can be forwarded to the appropriate location(s)
in the OSPF autonomous system, if only it is forwarded into the backbone
by the source area's multicast ABR.

========================================================================

                 -------------------------
                /      Backbone Area      \
                |                         |
                |      ^           ^      |
                |   ___|___     ___|___   |
                \__|       |___|       |__/
                   |---*---|   |---*---|
                       |           |
                    _______     _______
                   /       \   /       \
                   | Area  |   | Area  |
                   |   1   |   |   2   |
                   |-------|   |-------|

LEGEND

   ^
   |    Group Membership LSAs
 _____
|_____| Area Border Router and
        Inter-Area Multicast Forwarder

*       Wild-Card Multicast
        Receiver Interface

Figure 17. Inter-Area Routing Architecture
========================================================================

7.2.3.2 Inter-Area Datagram's Shortest-Path Tree

In the case of inter-area multicast routing, it is usually impossible to
build a complete shortest-path delivery tree.  Incomplete trees are a
fact of life because each OSPF area's complete topological and group
membership information is not distributed between OSPF areas.
Topological estimates are made through the use of wild-card receivers
and OSPF Summary-Links LSAs.

If the source of a multicast datagram resides in the same area as the
router performing the calculation, the pruning process must be careful
to ensure that branches leading to other areas are not removed from the
tree.  Only those branches having no group members nor wild-card
multicast receivers are pruned.  Branches containing wild-card multicast
receivers must be retained since the local routers do not know whether
there are any group members residing in other areas.
========================================================================

                 ----------------------------------
                |              S                   |
                |              |     Area 1        |
                |              |                   |
                |              #                   |
                |             / \                  |
                |            /   \                 |
                |           /     \                |
                |          /       \               |
                |       O-#         #-O            |
                |        / \         \             |
                |       /   \         \            |
                |      /     \         \           |
                |     /       \         \          |
                |  O-#         #         #-O       |
                |             / \         \        |
                |            /   \         \       |
                |           /     \         \      |
                |          /       \         \     |
                |       O-#         #-O       ---  |
                 ----------------------------| ? |-
                                              ---
                                               To
                                            Backbone
LEGEND

S   Source Subnetwork
O   Subnet Containing Group Members
#   Intra-Area MOSPF Router
?   WildCard Multicast Receiver

Figure 18. Datagram Shortest Path Tree (Source in Same Area)
========================================================================

If the source of a multicast datagram resides in a different area than
the router performing the calculation, the details describing the local
topology surrounding the source station are not known.  However, this
information can be estimated using information provided by Summary-Links
LSAs for the source subnetwork.  In this case, the base of the tree
begins with branches directly connecting the source subnetwork to each
of the local area's inter-area multicast forwarders.  Datagrams sourced
from outside the local area will enter the area via one of its inter-
area multicast forwarders, so they all must be part of the candidate
distribution tree.

========================================================================

                               S
                               |
                               #
                               |
                       Summary-Links LSA
                               |
                              ---
                 ------------| ? |-----------------
                |             ---    Area 1        |
                |              |                   |
                |              #                   |
                |             / \                  |
                |            /   \                 |
                |           /     \                |
                |          /       \               |
                |       O-#         #-O            |
                |        / \         \             |
                |       /   \         \            |
                |      /     \         \           |
                |     /       \         \          |
                |  O-#         #         #-O       |
                |             / \         \        |
                |            /   \         \       |
                |           /     \         \      |
                |          /       \         \     |
                |       O-#         #-O       #-O  |
                 ----------------------------------

LEGEND

S   Source Subnetwork
O   Subnet Containing Group Members
#   Intra-Area MOSPF Router
?   Inter-Area Multicast Forwarder
    (Wild-Card Multicast Receiver)

Figure 19. Datagram Shortest Path Tree (Source in Different Area)
========================================================================


Since each inter-area multicast forwarder is also an ABR, it must
maintain a separate link state database for each attached area.  This
means that each inter-area multicast forwarder is required to calculate
a separate forwarding tree for each of its attached areas.  After the
individual trees are calculated, they are merged into a single
forwarding cache entry for the (source, group) pair and then the
individual trees are discarded.

7.2.4 Inter-Autonomous System Multicasting with MOSPF

Inter-Autonomous System multicasting involves the situation where a
datagram's source and at least some of its destination group members
reside in different OSPF Autonomous Systems.  It should be emphasized
that in OSPF terminology "inter-AS" communication also refers to
connectivity between an OSPF domain and another routing domain which
could be within the same Autonomous System from the perspective of an
Exterior Gateway Protocol.

To facilitate inter-AS multicast routing, selected Autonomous System
Boundary Routers (ASBRs) are configured as "inter-AS multicast
forwarders."  MOSPF makes the assumption that each inter-AS multicast
forwarder executes an inter-AS multicast routing protocol (e.g., DVMRP)
which forwards multicast datagrams in a reverse path forwarding (RPF)
manner.  Each inter-AS multicast forwarder functions as a wild-card
multicast receiver in each of its attached areas.  This guarantees that
each inter-AS multicast forwarder remains on all pruned shortest-path
trees and receives all multicast datagrams, regardless of group
membership.

Three cases need to be considered when describing the construction of an
inter-AS shortest-path delivery tree.  The first occurs when the source
subnetwork is located in the same area as the router performing the
calculation.  For the second case, the source subnetwork is located in a
different area than the router performing the calculation.  The final
case occurs when the source subnetwork is located in a different AS
than the router performing the calculation.

The first two cases are similar to the examples described in the
previous section.  The only enhancement is that inter-AS multicast
forwarders must also be included on the pruned shortest path delivery
tree.  Branches containing inter-AS multicast
7.2.3.2 Inter-Area Datagram's Shortest-Path Tree

In the case of inter-area multicast routing, it is usually impossible to
build a complete shortest-path delivery tree.  Incomplete trees are a
fact of life because each OSPF area's complete topological and group
membership information is not distributed between OSPF areas.
Topological estimates are made through the use of wild-card receivers
and OSPF Summary-Links LSAs.

If the source of a multicast datagram resides in the same area as the
router performing the calculation, the pruning process must be careful
to ensure that branches leading to other areas are not removed from the
tree.  Only those branches having no group members nor wild-card
multicast receivers are pruned.  Branches containing wild-card multicast
receivers must be retained since the local routers do not know whether
there are any group members residing in other areas.
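As a small illustration of this pruning rule, the sketch below (in
Python, with an invented "Branch" structure -- real MOSPF routers prune
while walking the link state database, not an explicit tree object)
keeps a branch whenever it has local members, a wild-card multicast
receiver, or any surviving child:

    # Illustrative only: "Branch" is an assumed structure, not an MOSPF
    # data type.  A branch is removable only if nothing at or below it
    # could possibly need the traffic.
    class Branch:
        def __init__(self, members=False, wildcard=False, children=()):
            self.members = members      # directly attached group members
            self.wildcard = wildcard    # wild-card multicast receiver here
            self.children = list(children)

    def prune(branch):
        """Return True if this branch may be removed from the tree."""
        branch.children = [c for c in branch.children if not prune(c)]
        return not (branch.members or branch.wildcard or branch.children)

    # Example: a branch holding only a wild-card receiver survives,
    # because there may be group members in some other area.
    print(prune(Branch(wildcard=True)))   # False -> branch is retained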

If the source of a multicast datagram resides in a different area than
the router performing the calculation, the details describing the local
topology surrounding the source station are not known.  However, this
information can be estimated using information provided by Summary-Links
LSAs for the source subnetwork.  In this case, the base of the tree
begins with branches directly connecting the source subnetwork to each
of the local area's inter-area multicast forwarders.  Datagrams sourced
from outside the local area will enter the area via one of its
inter-area multicast forwarders, so they all must be part of the
candidate distribution tree.

Since each inter-area multicast forwarder is also an ABR, it must
maintain a separate link state database for each attached area.  Thus
each inter-area multicast forwarder is required to calculate a separate
forwarding tree for each of its attached areas.

7.2.4 Inter-Autonomous System Multicasting with MOSPF

Inter-Autonomous System multicasting involves the situation where a
datagram's source or some of its destination group members are in
different OSPF Autonomous Systems.  In OSPF terminology, "inter-AS"
communication also refers to connectivity between an OSPF domain and
another routing domain which could be within the same Autonomous System
from the perspective of an Exterior Gateway Protocol.

To facilitate inter-AS multicast routing, selected Autonomous System
Boundary Routers (ASBRs) are configured as "inter-AS multicast
forwarders."  MOSPF makes the assumption that each inter-AS multicast
forwarder executes an inter-AS multicast routing protocol which forwards
multicast datagrams in a reverse path forwarding (RPF) manner.  Since
the publication of the MOSPF RFC, a term has been defined for such a
router:  Multicast Border Router.  See section 9 for an overview of the
MBR concepts.  Each inter-AS multicast forwarder is a wildcard multicast
receiver in each of its attached areas.  This guarantees that each
inter-AS multicast forwarder remains on all pruned shortest-path trees
and receives all multicast datagrams.

The details of inter-AS forwarding are very similar to inter-area
forwarding.  On the "inside" of the OSPF domain, the multicast ASBR
must conform to all the requirements of intra-area and inter-area
forwarding.  Within the OSPF domain, group members are reached by the
usual forward path computations, and paths to external sources are
approximated by a reverse-path source-based tree, with the multicast
ASBR standing in for the actual source.  When the source is within the
OSPF AS, and there are external group members, it falls to the inter-AS
multicast forwarders, in their role as wildcard receivers, to make sure
that the data gets out of the OSPF domain and sent off in the correct
direction.

7.3 Protocol-Independent Multicast (PIM)

The Protocol Independent Multicast (PIM) routing protocols have been
developed by the Inter-Domain Multicast Routing (IDMR) working group of
the IETF.  The objective of the IDMR working group is to develop one--or
possibly more than one--standards-track multicast routing protocol(s)
that can provide scaleable multicast routing across the Internet.

PIM is actually two protocols:  PIM - Dense Mode (PIM-DM) and PIM -
Sparse Mode (PIM-SM).  In the remainder of this introduction, any
references to "PIM" apply equally well to either of the two protocols...
there is no intention to imply that there is only one PIM protocol.
While PIM-DM and PIM-SM share part of their names, and they do have
related control messages, they are actually two completely independent
protocols.

PIM receives its name because it is not dependent on the mechanisms
provided by any particular unicast routing protocol.  However, any
implementation supporting PIM requires the presence of a unicast routing
protocol to provide routing table information and to adapt to topology
changes.

PIM makes a clear distinction between a multicast routing protocol that
is designed for dense environments and one that is designed for sparse
environments.  Dense-mode refers to a protocol that is designed to
operate in an environment where group members are relatively densely
packed and bandwidth is plentiful.  Sparse-mode refers to a protocol
that is optimized for environments where group members are distributed
across many regions of the Internet and bandwidth is not necessarily
widely available.  It is important to note that sparse-mode does not
imply that the group has a few members, just that they are widely
dispersed across the Internet.

The designers of PIM-SM argue that DVMRP and MOSPF were developed for
environments where group members are densely distributed, and bandwidth
is relatively plentiful.  They emphasize that when group members and
senders are sparsely distributed across a wide area, DVMRP and MOSPF
do not provide the most efficient multicast delivery service.  The
DVMRP periodically sends multicast packets over many links that do not
lead to group members, while MOSPF can send group membership
information over links that do not lead to senders or receivers.

7.3.1 PIM - Dense Mode (PIM-DM)

While the PIM architecture was driven by the need to provide scaleable
sparse-mode delivery trees, PIM also defines a new dense-mode protocol
instead of relying on existing dense-mode protocols such as DVMRP and
MOSPF.  It is envisioned that PIM-DM would be deployed in resource rich
environments, such as a campus LAN where group membership is relatively
dense and bandwidth is likely to be readily available.  PIM-DM's control
messages are similar to PIM-SM's by design.


PIM - Dense Mode (PIM-DM) is similar to DVMRP in that it employs the
Reverse Path Multicasting (RPM) algorithm.  However, there are several
important differences between PIM-DM and DVMRP:

    o  To find routes back to sources, PIM-DM relies on the presence
       of an existing unicast routing table.  PIM-DM is independent of
       the mechanisms of any specific unicast routing protocol.  In
       contrast, DVMRP contains an integrated routing protocol that
       makes use of its own RIP-like exchanges to build its own unicast
       routing table (so a router may orient itself with respect to
       active source(s)).  MOSPF augments the information in the OSPF
       link state database, thus MOSPF must run in conjunction with
       OSPF.

    o  Unlike the DVMRP which calculates a set of child interfaces for
       each (source, group) pair, PIM-DM simply forwards multicast
       traffic on all downstream interfaces until explicit prune
       messages are received.  PIM-DM is willing to accept packet
       duplication to eliminate routing protocol dependencies and
       to avoid the overhead inherent in determining the parent/child
       relationships.

For those cases where group members suddenly appear on a pruned branch
of the delivery tree, PIM-DM, like DVMRP, employs graft messages to
re-attach the previously pruned branch to the delivery tree.
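The flood-and-prune behavior described above can be sketched in a few
lines of Python.  The table of pruned interfaces and the helper names
are assumptions made purely for illustration; they are not PIM-DM
packet formats or timers (in the real protocol, prune state also ages
out and must be refreshed):

    # (source, group) -> set of interfaces that sent explicit prunes
    pruned = {}

    def forward_dense(source, group, arrival_iface, all_ifaces, rpf_iface):
        """Return the interfaces a PIM-DM router would flood out of."""
        if arrival_iface != rpf_iface(source):
            return []          # fails the RPF check: silently discard
        blocked = pruned.get((source, group), set())
        return [i for i in all_ifaces
                if i != arrival_iface and i not in blocked]

    def on_prune(source, group, iface):
        pruned.setdefault((source, group), set()).add(iface)

    def on_graft(source, group, iface):
        # A member appeared downstream: re-attach the pruned branch.
        pruned.get((source, group), set()).discard(iface)

    ifaces = ["eth0", "eth1", "eth2"]
    print(forward_dense("S", "G", "eth0", ifaces, lambda s: "eth0"))
    on_prune("S", "G", "eth2")
    print(forward_dense("S", "G", "eth0", ifaces, lambda s: "eth0"))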

8.  "SPARSE MODE" ROUTING PROTOCOLS

The most recent additions to the set of multicast routing protocols are
called "sparse mode" protocols.  They are designed from a different
perspective than the "dense mode" protocols that we have already
examined.  Often, they are not data-driven, in the sense that forwarding
state is set up in advance, and they trade off using bandwidth liberally
(which is a valid thing to do in a campus LAN environment) for other
techniques that are much more suited to scaling over large WANs, where
bandwidth is scarce and expensive.

These emerging routing protocols include:

    o  Protocol Independent Multicast - Sparse Mode (PIM-SM), and

    o  Core-Based Trees (CBT).

While these routing protocols are designed to operate efficiently over
a wide area network where bandwidth is scarce and group members may be
quite sparsely distributed, this is not to imply that they are only
suitable for small groups.  Sparse doesn't mean small, rather it is
meant to convey that the groups are widely dispersed, and thus it is
wasteful to (for instance) flood their data periodically across the
entire internetwork.

8.1  Protocol-Independent Multicast - Sparse Mode (PIM-SM)

As described previously, PIM also defines a "dense-mode" or source-based
tree variant.  Again, the two protocols are quite unique, and other than
control messages, they have very little else in common.  Note that while
PIM integrates control message processing and data packet forwarding
among PIM-Sparse and -Dense Modes, PIM-SM and PIM-DM must run in
separate regions.  All groups in a region are either sparse-mode or
dense-mode.

PIM-Sparse Mode (PIM-SM) has been developed to provide a multicast
routing protocol that provides efficient communication between members
of sparsely distributed groups--the type of groups that are likely to
be common in wide-area internetworks.  PIM's designers observed that
several hosts wishing to participate in a multicast conference do not
justify flooding the entire internetwork periodically with the group's
multicast traffic.

Noting today's existing MBone scaling problems, and extrapolating to a
future of ubiquitous multicast (overlaid with perhaps thousands of
small, widely dispersed groups), it is not hard to imagine that existing
multicast routing protocols will experience scaling problems.  To
eliminate these potential scaling issues, PIM-SM is designed to limit
multicast traffic so that only those routers interested in receiving
traffic for a particular group "see" it.

PIM-SM differs from existing dense-mode protocols in two key ways:

    o  Routers with adjacent or downstream members are required to
       explicitly join a sparse mode delivery tree by transmitting
       join messages.  If a router does not join the pre-defined
       delivery tree, it will not receive multicast traffic addressed
       to the group.

       In contrast, dense-mode protocols assume downstream group
       membership and forward multicast traffic on downstream links
       until explicit prune messages are received.  Thus, the default
       forwarding action of dense-mode routing protocols is to forward
       all traffic, while the default action of a sparse-mode protocol
       is to block traffic unless it has been explicitly requested.

    o  PIM-SM evolved from the Core-Based Trees (CBT) approach in that
       it employs the concept of a "core" (or rendezvous point (RP) in
       PIM-SM terminology) where receivers "meet" sources.  The creator
       of each multicast group selects a primary RP and a small set of
       alternative RPs, known as the RP-set.  For each group, there is
       only a single active RP (which is uniquely determined by a hash
       function).


========================================================================

                    S1                                      S2
                   ___|___                                 ___|___
                        |                                   |
                        |                                   |
                        #                                   #
                         \                                 /
                           \                               /
                           \_____________RP______________/
                                        /|\
                                       ./|\.
                      ________________// | \\_______________
                     /         _______/  |  \______         \
                     #         #         #         #         #
                  ___|___   ___|___   ___|___   ___|___   ___|___
                        |     |   |        |     |            |
                        R     R   R        R     R            R
LEGEND

   #   PIM Router
   R   Multicast Receiver

Figure 17: Rendezvous Point
========================================================================

When joining a group, each receiver uses IGMP to notify its directly-
attached router, which in turn joins the multicast delivery tree by
sending an explicit PIM-Join message hop-by-hop toward the group's RP.
A source uses the RP to announce its presence, and act as a conduit
to members that have joined the group.  This model requires sparse-mode
routers to maintain a bit of state (the RP-set for the sparse-mode
region) prior to the arrival of data.  In contrast, dense-mode protocols
are data-driven, since they do not store any state for a group until
the arrival of its first data packet.

There is only one RP-set per sparse-mode domain, not per group.
Moreover, the creator of a group is not involved in RP selection.  Also,
there is no such concept as a "primary" RP.  Each group has precisely
one RP at any given time.  In the event of the failure of an RP, a new
RP-set is distributed which does not include the failed RP.

8.1.1 Directly Attached Host Joins a Group

When there is more than one PIM router connected to a multi-access LAN,
the router with the highest IP address is selected to function as the
Designated Router (DR) for the LAN.  The DR may or may not be
responsible for the transmission of IGMP Host Membership Query messages,
but does send Join/Prune messages toward the RP, and maintains the
status of the active RP for local senders to multicast groups.
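A sketch of this election rule (the dotted-quad comparison helper is an
illustrative assumption; real implementations compare addresses
numerically as they learn of neighbors):

    def elect_dr(router_ips):
        """On a multi-access LAN, the PIM router with the highest IP
        address becomes the Designated Router (DR)."""
        as_tuple = lambda ip: tuple(int(octet) for octet in ip.split("."))
        return max(router_ips, key=as_tuple)

    print(elect_dr(["10.1.1.1", "10.1.1.254", "10.1.1.7"]))  # 10.1.1.254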

When the DR receives an IGMP Report message for a new group, the DR
determines if the group is RP-based or not by examining the group
address.  If the address indicates a SM group (by virtue of the group-
specific state that even inactive groups have stored in all PIM
routers), the DR performs a deterministic hash function over the
sparse-mode region's RP-set to uniquely determine the RP for the
group.

========================================================================

                                      Source (S)
                                      _|____
                                         |
                                         |
                                         #
                                        / \
                                       /   \
                                      /     \
                                     #       #
                                    /         \
                   Designated      /           \
       Host      | Router         /             \  Rendezvous Point
            -----|- # - - - - - -#- - - - - - - -RP   for group G
      (receiver) |  ----Join-->  ----Join-->
                 |

LEGEND

   #   PIM Router                 RP  Rendezvous Point

Figure 18: Host Joins a Multicast Group
========================================================================

(Otherwise, this is a dense-mode group and dense-mode forwarding rules
apply.)
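The salient point is that the mapping from group address to RP is a
pure function of the group and the RP-set, so every router computes the
same answer without any extra protocol exchange.  A sketch of such a
hash-based selection follows; the hash shown is an illustrative
stand-in, not the function defined in the PIM-SM specification:

    import hashlib

    def select_rp(group, rp_set):
        """Deterministically pick one RP from the RP-set for 'group'."""
        best, best_score = None, -1
        for rp in rp_set:
            # Score each candidate RP against the group address.
            digest = hashlib.sha1(f"{group}/{rp}".encode()).digest()
            score = int.from_bytes(digest[:4], "big")
            if score > best_score:
                best, best_score = rp, score
        return best

    # Same group + same RP-set -> every router picks the same RP.
    print(select_rp("224.2.127.254", ["10.1.1.1", "10.2.2.2", "10.3.3.3"]))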

After performing the lookup, the DR creates a multicast forwarding cache entry
for the (*, group) pair and transmits a unicast PIM-Join message toward
the active RP for this specific group.  The (*, group) notation
indicates an (any source, group) pair.  The intermediate routers forward
the unicast PIM-Join message, creating a forwarding cache entry for the
(*, group) pair only if such a forwarding entry does not yet exist.
Intermediate routers must create a forwarding cache entry so that they will be
able to forward future traffic downstream toward the DR which originated
the PIM-Join message.
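A sketch of the intermediate-router behavior just described, using
invented structures (the real state machine and message formats are in
the PIM-SM specification):

    # (source, group) -> set of downstream interfaces; "*" = any source
    forwarding_cache = {}

    def process_join(group, arrival_iface, iface_toward_rp):
        """Handle a (*, G) Join arriving on 'arrival_iface'."""
        key = ("*", group)
        existed = key in forwarding_cache
        forwarding_cache.setdefault(key, set()).add(arrival_iface)
        # Only the first Join for the group keeps travelling toward the
        # RP; later Joins merely graft onto the existing entry.
        if not existed:
            print(f"forwarding Join for (*, {group}) out {iface_toward_rp}")

    process_join("224.1.1.1", "eth2", "eth0")   # first Join propagates
    process_join("224.1.1.1", "eth3", "eth0")   # merely adds a branch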

8.1.2 Directly Attached Source Sends to a Group

When a source first transmits a multicast packet to a group, its DR
forwards the datagram to the active RP for subsequent distribution
along the group's delivery tree.  The DR encapsulates the initial
multicast packets in a PIM-SM-Register packet and unicasts them toward
the active RP for the group.  The PIM-SM-Register packet informs the
RP of a new source which causes the active RP to transmit PIM-Join
messages back toward the source's DR.  The routers between the RP and
the source's DR use the received PIM-Join messages (from the RP) to
create forwarding state for the new (source, group) pair.  Now all
routers from the active RP for this sparse-mode group to the source's DR
will be able to forward future unencapsulated multicast packets from
this source subnetwork to the RP.  Until the (source, group) state has
been created in all the routers between the RP and source's DR, the DR
must continue to send the source's multicast IP packets to the RP as
unicast packets encapsulated within unicast PIM-Register packets.  The
DR may stop forwarding multicast packets encapsulated in this manner
once it has received a PIM-Register-Stop message from the active RP for
this group.  The RP may send PIM-Register-Stop messages if there are no
downstream receivers for a group, or if the RP has successfully joined
the (source, group) tree (which originates at the source's DR).
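From the DR's point of view, the Register exchange reduces to a small
amount of per-group state.  A sketch, with assumed helper names (the
encapsulation formats and timers are specified in the PIM-SM drafts):

    registering = {}   # group -> True while data must be tunnelled to RP

    def on_source_packet(group, packet, rp):
        if registering.setdefault(group, True):
            send_unicast(rp, pim_register(packet))   # encapsulated to RP
        else:
            forward_multicast(packet)                # native (S, G) path

    def on_register_stop(group):
        # RP joined the (S, G) tree, or has no receivers for the group.
        registering[group] = False

    # Stubs so the sketch runs stand-alone:
    def send_unicast(dst, msg): print(f"unicast to {dst}: {msg}")
    def pim_register(pkt): return f"Register({pkt})"
    def forward_multicast(pkt): print(f"native forward: {pkt}")

    on_source_packet("224.1.1.1", "pkt-1", rp="10.0.0.5")   # tunnelled
    on_register_stop("224.1.1.1")
    on_source_packet("224.1.1.1", "pkt-2", rp="10.0.0.5")   # native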

========================================================================

                                Source (S)
                                _|____
                                   |
                                   |
                                   #
                                  / \ v
                                  /.\ ,
                                 /  ^\ v
                                /    .\ ,
                               #      ^# v
                              /        .\ ,
             Designated      /          ^\ v
 Host      | Router         /            .\ v ,             |       Host
      -----|-#- - - - - - -#- - - - - - - -RP- - - # - - -|-----
(receiver) |  <~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~   ~ ~ ~ ~ ~ ~>   | (receiver)

LEGEND

   #   PIM Router
   RP  Rendezvous Point
> , >  PIM-Register
< . <  PIM-Join
~ ~ ~  Resend to group members

Figure 19: Source sends to a Multicast Group
========================================================================

8.1.3 Shared Tree (RP-Tree) or Shortest Path Tree (SPT)?

The RP-tree provides connectivity for group members but does not
optimize the delivery path through the internetwork.  PIM-SM allows
routers to either a) continue to receive multicast traffic over the
shared RP-tree, or b) subsequently create a source-based shortest-path
tree on behalf of their attached receiver(s).  Besides reducing the
delay between this router and the source (beneficial to its attached
receivers), this switch also reduces traffic concentration effects on
the RP-tree.

A PIM-SM router with local receivers has the option of switching to the
source's shortest-path tree (i.e., source-based tree) once it starts
receiving data packets from the source.  The change-over may be
triggered if the data rate from the source exceeds a predefined
threshold.  The last-hop router does this by sending a Join message
toward the active source.  After the source-based SPT is active,
protocol mechanisms allow a Prune message for the same source to be
transmitted to the active RP, thus removing this router from the
shared RP-tree.  Alternatively, the DR may be configured to continue
using the shared RP-tree and never switch over to the source-based SPT,
or a router could perhaps use a different administrative metric to
decide if and when to switch to a source-based tree.
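The switch-over decision can be as simple as a rate comparison.  A
sketch (the threshold value and measurement interval are illustrative
assumptions; real routers make these configurable):

    SPT_THRESHOLD_BPS = 64_000          # hypothetical switch-over rate

    def maybe_switch_to_spt(bytes_seen, interval_s, on_spt):
        """Return True if this last-hop router should join the SPT."""
        rate_bps = (bytes_seen * 8) / interval_s
        if not on_spt and rate_bps > SPT_THRESHOLD_BPS:
            print("send (S,G) Join toward source; "
                  "then prune this source off the RP-tree")
            return True
        return on_spt

    print(maybe_switch_to_spt(100_000, 10.0, on_spt=False))  # True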

========================================================================

                                          Source (S)
                                          _|____
                                             |
                                            %|
                                           % #
                                          % / \*
                                         % /   \*
                                        % /     \*
                    Designated         % #       #*
                     Router           % /         \*
                                     % /           \*
           Host      |  <-% % % % % % /             \v
                -----|-#- - - - - - -#- - - - - - - -RP
          (receiver) | <* * * * * * * * * * * * * * *
                     |
LEGEND

  #   PIM Router
  RP  Rendezvous Point
 * *  RP-Tree (Shared)
 % %  Shortest-Path Tree (Source-based)

Figure 20: Shared RP-Tree and Shortest Path Tree (SPT)
========================================================================

Besides a last-hop router being able to switch to a source-based tree,
there is also the capability of the RP for a group to transition to a
source's shortest-path tree.  Similar controls (bandwidth threshold,
administrative weights, etc.) can be used at an RP to influence these
decisions.

8.2 Core Based Trees (CBT)

Core Based Trees is another multicast architecture that is based on a
shared delivery tree.  It is specifically intended to address the
important issue of scalability when supporting multicast applications
across the public Internet.  CBT is also designed to enable
interoperability between distinct "clouds" on the Internet, each
executing a different multicast routing protocol.

Similar to PIM-SM, CBT is protocol-independent.  CBT employs the
information contained in the unicast routing table to build its shared
delivery tree.  It does not care how the unicast routing table is
derived, only that a unicast routing table is present.  This feature
allows CBT to be deployed without requiring the presence of any specific
unicast routing protocol.

Another similarity to PIM-SM is that CBT has adopted the core discovery
mechanism ("bootstrap" ) defined in the PIM-SM specification.  For
inter-domain discovery, efforts are underway to standardize (or at least
separately specify) a common RP/Core discovery mechanism.  The intent is
that any shared tree protocol could implement this common discovery
mechanism using its own protocol message types.

In a significant departure from PIM-SM, CBT has decided to maintain its
scaling characteristics by not offering the option of shifting from a
Shared Tree (e.g., PIM-SM's RP-Tree) to a Shortest Path Tree (SPT) to
optimize delay.  The designers of CBT believe that this is a critical
decision since when multicasting becomes widely deployed, the need for
routers to maintain large amounts of state information will become the
overpowering scaling factor.

Finally, unlike PIM-SM's shared tree state, CBT state is bi-directional.
Data may therefore flow in either direction along a branch.  Thus, data
from a source which is directly attached to an existing tree branch need
not be encapsulated.

8.2.1 Joining a Group's Shared Tree

A host that wants to join a multicast group issues an IGMP host
membership report.  This message informs its local CBT-aware router(s)
that it wishes to receive traffic addressed to the multicast group.
Upon receipt of an IGMP host membership report for a new group, the
local CBT router issues a JOIN_REQUEST hop-by-hop toward the group's
core router.

If the JOIN_REQUEST encounters a router that is already on the group's
shared tree before it reaches the core router, then that router issues a
JOIN_ACK hop-by-hop back toward the sending router.  If the JOIN_REQUEST
does not encounter an on-tree CBT router along its path towards the
core, then the core router is responsible for responding with a
JOIN_ACK.  In either case, each intermediate router that forwards the
JOIN_REQUEST towards the core is required to create a transient "join
state."  This transient "join state" includes the multicast group, and
the JOIN_REQUEST's incoming and outgoing interfaces.  This information
allows an intermediate router to forward returning JOIN_ACKs along the
exact reverse path to the CBT router which initiated the JOIN_REQUEST.

As the JOIN_ACK travels towards the CBT router that issued the
JOIN_REQUEST, each intermediate router creates new "active state" for
this group.  New branches are established by having the intermediate
routers remember which interface is upstream, and which interface(s)
is(are) downstream.  Once a new branch is created, each child router
monitors the status of its parent router with a keepalive mechanism,
the CBT "Echo" protocol.  A child router periodically unicasts a
CBT_ECHO_REQUEST to its parent router, which is then required to respond
with a unicast CBT_ECHO_REPLY message.

========================================================================

                                            #- - - -#- - - - -#
                                                    |          \
                                                    |           #
                                                    |
                                                    # - - - - #
      member  |                                     |
       host --|                                     |
              |     --Join-->  --Join-->  --Join--> |
              |- [DR] - - - [:] - - - -[:] - - - - [@]
              |     <--ACK--   <--ACK--   <--ACK--
              |

LEGEND

  [DR]  CBT Designated Router
   [:]  CBT Router
   [@]  Target Core Router
    #   CBT Router that is already on the shared tree

Figure 21: CBT Tree Joining Process
========================================================================
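The transient-versus-active state handling described above can be
summarized in a short sketch; structure and helper names are invented
for illustration, and the authoritative behavior is in the CBT
specification:

    transient = {}   # group -> incoming (downstream) iface of the Join
    active = {}      # group -> {"parent": iface or None, "children": set}

    def on_join_request(group, arrival_iface, iface_toward_core, on_tree):
        transient[group] = arrival_iface   # where the JOIN_ACK must return
        if on_tree:
            send_join_ack(group)           # this router answers itself
        else:
            print(f"forward JOIN_REQUEST for {group} out "
                  f"{iface_toward_core}")

    def send_join_ack(group, parent_iface=None):
        """Turn transient join state into an active branch."""
        downstream = transient.pop(group)
        entry = active.setdefault(group, {"parent": parent_iface,
                                          "children": set()})
        entry["children"].add(downstream)
        print(f"send JOIN_ACK for {group} out {downstream}")

    on_join_request("224.1.1.1", "eth2", "eth0", on_tree=True)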

It is only necessary to implement a single "keepalive" mechanism on each
link regardless of the number of multicast groups that are sharing the
link.

If, for any reason, the link between an on-tree router and its parent
should fail, or if the parent router is otherwise unreachable, the
on-tree router transmits a FLUSH_TREE message on its child interface(s)
which initiates the tearing down of all downstream branches for the
multicast group.  Each downstream router is then responsible for
re-attaching itself (provided it has a directly attached group member)
to the group's shared delivery tree.

The Designated Router (DR) is elected by CBT's "Hello" protocol and
functions as THE single upstream router for all groups using that link.
The DR is not necessarily the best next-hop router to every core for
every multicast group.  The implication is that it is possible for a
JOIN_REQUEST to be redirected by the DR across a link to the best
next-hop router providing access to a given group's core.  Note that
data traffic is never duplicated across a link, only JOIN_REQUESTs, and
the volume of this JOIN_REQUEST traffic should be negligible.

8.2.2 Data Packet Forwarding

When a JOIN_ACK is received by an intermediate router, it either adds
the interface over which the JOIN_ACK was received to an existing
forwarding cache entry, or creates a new entry if one does not already
exist for the multicast group.  When a CBT router receives a data packet
addressed to the multicast group, it simply forwards the packet over all
outgoing interfaces as specified by the forwarding cache entry for the
group.
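Because CBT branch state is bi-directional, the forwarding rule is
symmetric: send the packet out every interface in the group's cache
entry except the one it arrived on.  A sketch, with an assumed cache
layout:

    forwarding_cache = {
        "224.1.1.1": {"eth0", "eth1", "eth2"},  # parent and children alike
    }

    def forward(group, arrival_iface, packet):
        for iface in forwarding_cache.get(group, ()):
            if iface != arrival_iface:           # never echo to the sender
                print(f"sending {len(packet)} bytes out {iface}")

    forward("224.1.1.1", "eth1", b"payload")     # goes out eth0 and eth2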

8.2.3 Non-Member Sending

Similar to other multicast routing protocols, CBT does not require that
the source of a multicast packet be a member of the multicast group.
group, at least one CBT-capable router must be present on the non-member
source station's subnetwork.  The local CBT-capable router employs
IP-in-IP encapsulation and unicasts the data packet to the active core
for delivery to the rest of the multicast group.

8.2.4 Multicast Interoperability

Multicast interoperability is currently being defined.  Work is underway
in the IDMR working group to describe the attachment of stub-CBT and
stub-PIM domains to a DVMRP backbone.  Future work will focus on
developing methods of connecting non-DVMRP transit domains to a DVMRP
backbone.

CBT interoperability will be achieved through the deployment of domain
border routers (BRs) which enable the forwarding of multicast traffic
between the CBT and DVMRP domains.  The BR implements DVMRP and CBT on
different interfaces and is responsible for forwarding data across the
domain boundary.

========================================================================

         /---------------\        /---------------\
         |               |        |                |
         |               |        |                |
         |    DVMRP      |--[BR]--|  CBT Domain    |
         |   Backbone    |        |                |
         |               |        |                |
         \---------------/        \---------------/

Figure 22: Domain Border Routers (BRs)
========================================================================

The BR is also responsible for exporting selected routes out of the CBT
domain into the DVMRP domain.  While the CBT stub domain never needs to
import routes, the DVMRP backbone needs to import routes to any sources
of traffic which are inside the CBT domain.  The routes must be imported
so that DVMRP can perform its RPF check.
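The RPF check itself is simple; a sketch (the routing-table lookup is
an assumed helper):

    def rpf_check(source, arrival_iface, route_iface_toward):
        """DVMRP-style reverse path forwarding check: accept a multicast
        packet only if it arrived on the interface this router would use
        to send unicast traffic back toward the source."""
        return arrival_iface == route_iface_toward(source)

    # Toy example: the route back to 192.0.2.7 points out "eth1".
    toward = lambda src: "eth1"
    print(rpf_check("192.0.2.7", "eth1", toward))   # True  -> forward
    print(rpf_check("192.0.2.7", "eth0", toward))   # False -> discard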

9. INTEROPERABILITY FRAMEWORK FOR MULTICAST BORDER ROUTERS

In late 1996, the IETF IDMR working group began discussing a formal
structure that would describe the way different multicast routing
protocols should interact inside a multicast border router (MBR).  The
work can be found in the following internet draft:
<draft-thaler-interop-00.txt>.

10. REFERENCES

10.1 Requests for Comments (RFCs)

    1075   "Distance Vector Multicast Routing Protocol," D. Waitzman,
            C. Partridge, and S. Deering, November 1988.

    1112   "Host Extensions for IP Multicasting," Steve Deering,
            August 1989.

    1583   "OSPF Version 2," John Moy, March 1994.

    1584   "Multicast Extensions parent router with a keepalive mechanism,
the CBT "Echo" protocol.  A child router periodically unicasts a
CBT_ECHO_REQUEST to OSPF," John Moy, March 1994.

    1585   "MOSPF: Analysis and Experience," John Moy, March 1994.

    1700    "Assigned Numbers," J. Reynolds and J. Postel, October
             1994. (STD 2)

    1800    "Internet Official Protocol Standards," Jon Postel,
             Editor, July 1995.

    1812    "Requirements for IP version 4 Routers," Fred Baker,
             Editor, June 1995.

10.2 Internet Drafts

   "Core Based Trees (CBT) Multicast: Architectural Overview,"
    <draft-ietf-idmr-cbt-arch-03.txt>, its parent router, which is then required to respond
with a unicast CBT_ECHO_REPLY message.

========================================================================

                                            #- - - -#- - - - -#
                                                    |          \
                                                    |           #
                                                    |
                                                    # - - - - #
      member  |                                     |
       host --|                                     |
              |     --Join-->  --Join-->  --Join--> |
              |- [DR] - - - [:] - - - -[:] - - - - [@]
              |     <--ACK--   <--ACK--   <--ACK--
              |

LEGEND

  [DR]  CBT Designated Router
   [:]  CBT Router
   [@]  Target Core Router
    #   CBT Router that is already on the shared tree

Figure 21: CBT Tree Joining Process
========================================================================
    ietf-idmr-cbt-spec-07.txt>, A. J. Ballardie, September 19,
    1996. March 1997.

   "Core Based Trees Tree (CBT) Multicast: Protocol Specification," <draft-
    ietf-idmr-cbt-spec-06.txt>, Multicast Border Router Specification for
    Connecting a CBT Stub Region to a DVMRP Backbone," <draft-ietf-
    idmr-cbt-dvmrp-00.txt>, A. J. Ballardie, November 21, 1995.

   "Hierarchical Distance March 1997.

   "Distance Vector Multicast Routing for the MBone,"
    Ajit Thyagarajan and Steve Deering, July 1995. Protocol," <draft-ietf-idmr-
    dvmrp-v3-04.ps>, T. Pusateri, February 19, 1997.

   "Internet Group Management Protocol, Version 2," <draft-ietf-
    idmr-igmp-v2-05.txt>,
    idmr-igmp-v2-06.txt>, William Fenner, October 25, 1996. January 22, 1997.

   "Internet Group Management Protocol, Version 3," <draft-cain-
    igmp-00.txt>, Brad Cain, Ajit Thyagarajan, and Steve Deering,
    Expires March 8, 1996.
    Expired.

   "Protocol Independent Multicast (PIM): Motivation and Architecture,"
    <draft-ietf-idmr-pim-arch-04.ps>, S. Deering, Multicast-Dense Mode (PIM-DM): Protocol
    Specification," <draft-ietf-idmr-pim-dm-spec-04.ps>, D. Estrin,
    D. Farinacci, A. Helmy, V. Jacobson, C. Liu, and L. Wei, September 11, 12, 1996.

   "Protocol Independent Multicast (PIM), Dense Multicast-Sparse Mode Protocol
    Specification," <draft-ietf-idmr-pim-dm-spec-04.ps>, (PIM-SM): Motivation
    and Architecture," <draft-ietf-idmr-pim-arch-04.ps>, S. Deering,
    D. Estrin, D. Farinacci, V. Jacobson, C. Liu, and L. Wei, P. Sharma, and
    A. Helmy, September 16,
    November 19, 1996.

   "Protocol Independent Multicast-Sparse Mode (PIM-SM): Protocol
    Specification," <draft-ietf-idmr-pim-sm-spec-09.ps>, S. Deering, D.  Estrin,
    D. Farinacci, A. Helmy, D. Thaler; S. Deering, M. Handley,
    V. Jacobson, C. Liu, L. Wei, P. Sharma, and A L. Wei, October 9, 1996.

   (Note:  Results of IESG review were announced on December 23, 1996:
    This internet-draft is to be published as an Experimental RFC.)

   "PIM Multicast Border Router (PMBR) specification for connecting
    PIM-SM domains to a DVMRP Backbone," <draft-ietf-mboned-pmbr-
    spec-00.txt>, D. Estrin, A. Helmy, September 19, D. Thaler, Febraury 3, 1997.

   "Administratively Scoped IP Multicast," <draft-ietf-mboned-admin-ip-
    space-01.txt>, D. Meyer, December 23, 1996.

   "Interoperability Rules for Multicast Routing Protocols," <draft-
    thaler-interop-00.txt>, D. Thaler, November 7, 1996.

    See the IDMR home pages for an archive of specifications:

    <URL:http://www.cs.ucl.ac.uk/ietf/public_idmr/>
    <URL:http://www.ietf.org/html.charters/idmr-charter.html>

10.3 Textbooks

    Comer, Douglas E. Internetworking with TCP/IP Volume 1 Principles,
    Protocols, and Architecture Second Edition, Prentice Hall, Inc.
    Englewood Cliffs, New Jersey, 1991

    Huitema, Christian. Routing in the Internet, Prentice Hall, Inc.
    Englewood Cliffs, New Jersey, 1995

    Stevens, W. Richard. TCP/IP Illustrated: Volume 1 The Protocols,
    Addison Wesley Publishing Company, Reading MA, 1994

    Wright, Gary and W. Richard Stevens. TCP/IP Illustrated: Volume 2
    The Implementation, Addison Wesley Publishing Company, Reading MA,
    1995

10.4 Other

    Deering, Steven E. "Multicast Routing in a Datagram
    Internetwork," Ph.D. Thesis, Stanford University, December 1991.

    Ballardie, Anthony J. "A New Approach to Multicast Communication
    in a Datagram Internetwork," Ph.D. Thesis, University of London,
    May 1995.

   "Hierarchical Distance Vector Multicast Routing for the MBone,"
    Ajit Thyagarajan and Steve Deering, July 1995.

11. SECURITY CONSIDERATIONS

Security issues are not discussed in this memo.

12. ACKNOWLEDGEMENTS

This RFC would not have been possible without the encouragement of Mike
O'Dell and the support of Joel Halpern and David Meyer.  Also invaluable
was the feedback and comments of the IETF MBoneD and IDMR working groups.
Certain people spent considerable time commenting on and discussing this
paper with the authors, and deserve to be mentioned by name:  Tony
Ballardie, Steve Casner, Jon Crowcroft, Steve Deering, Bill Fenner, Hugh
Holbrook, Cyndi Jung, Shuching Shieh, Dave Thaler, and Nair Venugopal.
Our apologies to anyone we unintentionally neglected to list here.

13. AUTHORS' ADDRESSES

    Tom Maufer
      3Com Corporation
      5400 Bayfront Plaza
      P.O. Box 58145
      Santa Clara, CA 95052-8145

      Phone:  +1 408 764-8814
      Email:  <maufer@3Com.com>

    Chuck Semeria
      3Com Corporation
      5400 Bayfront Plaza
      P.O. Box 58145
      Santa Clara, CA 95052-8145

      Phone:  +1 408 764-7201
      Email:  <semeria@3Com.com>