
Netconf Status Pages

Network Configuration (Active WG)
Ops Area: Ignas Bagdonas, Warren Kumari | 2003-Apr-30


IETF-103 netconf minutes

Session 2018-11-05 0900-1100: Chitlada 3 - Audio stream - netconf chatroom

Minutes

minutes-103-netconf-01



          DIRECT LINK TO YOUTUBE VIDEO:
             https://www.youtube.com/watch?v=lYVH5U6j-FQ&index=6&list=PLC86T-6ZTP5jPVzJ6juHM9W5ml4NpMEAC
          
          Session Intro & WG Status (15 minutes)
             Ignas - Status update, zerotouch should be on Telechat in December.
             Kent/Ignas - Request for volunteers for shepherd writeup.
          
          Chartered items:
          
             Kent Watsen (15 min)
             Status and Issues on Client-Server Drafts
             https://tools.ietf.org/html/draft-ietf-netconf-crypto-types-02
             https://tools.ietf.org/html/draft-ietf-netconf-trust-anchors-02
             https://tools.ietf.org/html/draft-ietf-netconf-keystore-07
             https://tools.ietf.org/html/draft-ietf-netconf-ssh-client-server-08
             https://tools.ietf.org/html/draft-ietf-netconf-tls-client-server-08
             https://tools.ietf.org/html/draft-ietf-netconf-netconf-client-server-08
             https://tools.ietf.org/html/draft-ietf-netconf-restconf-client-server-08
          
          Mikael A: I just want to say I'm glad we're no longer discussing
          whether we should be able to set keepalives, because I want to be
          able to configure them at every layer, so I fully support the
          effort. Exactly how to do it I don't have a strong opinion on, but
          this seems good to me so far.
          
          Jason S: I'm just trying to understand the concept: if you turn on
          keepalives at the TCP layer on a server, is that a global setting
          that enables keepalives for every TCP session, or only for every
          TCP session that's using NETCONF? What is the granularity of the
          control you're looking at?
          
          Kent W: Well, it would be in a configuration data model, and these
          are really groupings, so it applies to whatever the grouping stack
          is. In this case NETCONF, for instance, uses the SSH client/server
          grouping, which itself would use the TCP client/server grouping,
          so you would inherit some TCP keepalive configuration from that
          grouping, likewise some SSH keepalive configuration from its
          grouping, and, if we ever get around to it, some NETCONF-level
          keepalive configuration from that grouping. It would only apply to
          that particular configured stack.
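          The nesting Kent describes can be illustrated with a toy structure
          (hypothetical names, not the actual YANG groupings): the NETCONF
          stack reuses the SSH grouping, which reuses the TCP grouping, so
          each layer's keepalive settings travel with its grouping and apply
          only to that configured stack.

```python
# Hypothetical sketch (not the actual YANG) of the "grouping stack":
# each layer's keepalive knobs ride along when its grouping is reused
# by the layer above, and apply only to that configured stack.
tcp_client_grouping = {"keepalives": {"idle-time": 60, "max-probes": 5}}

ssh_client_grouping = {
    "transport": tcp_client_grouping,  # SSH reuses the TCP grouping
    "keepalives": {"max-wait": 30, "max-attempts": 3},
}

netconf_client_stack = {"ssh": ssh_client_grouping}  # NETCONF reuses SSH

# The NETCONF stack inherits TCP keepalive config via the SSH grouping:
print(netconf_client_stack["ssh"]["transport"]["keepalives"]["idle-time"])  # prints 60
```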
          
          Jason S: Okay, so it's not a combination; you'd be turning on TCP
          keepalives for this NETCONF session, correct?
          
          Kent W: Not for the entire operating system.
          
          Jason S: Yeah okay thanks.
          
          Tim C: I will say, I think the approach of having keepalives at
          every layer makes sense, because, as we discussed last time, it's
          needed for that piece of it. To answer Jason's question: indeed,
          this is within the context of the NETCONF session. I will say that
          as we were doing this for some of the work within the Broadband
          Forum, some implementations are such that when you touch those TCP
          keepalives it actually applies to all sessions; there are
          limitations in some implementations. Not that the approach
          shouldn't still be in the context of the NETCONF session, but on
          some implementations the only thing you can twiddle applies to all
          sessions. My concern with the refactoring is that we've got
          modules coming out of the wazoo. I mean, there's just a ton of
          modules. I get what you're doing, but there are a lot of modules
          now, and if you're going to do it for every layer ... So I'm just
          wondering if the organization and housekeeping gets beyond the
          benefit.
          
          Kent W: And that's what I meant by anticipating exasperation. I
          have somewhat worn out my welcome here in terms of the refactorings
          we've done, and I get it. But I also see this as the best way to
          solve this problem.
          
          Mikael A: In Linux there is a knob to turn this on system-wide, so
          you could test whatever you come up with for TCP and see whether
          it would fit into the ietf-system model, for instance, and know
          whether it makes sense. A test of this module, to see if it is
          generic enough. If it is, I would say publish it, because I might
          want to support this both at the system level, as default
          settings, and at the NETCONF server session level, so that the
          NETCONF server would use the socket option to turn this on for
          just its sessions; and I would also like it in the ietf-system
          model for the entire system.
          
          Kent W: Both. You are in support of this and also of modifying
          ietf-system, potentially.
          
          Mikael A: I'm saying at least spend 10 minutes seeing whether the
          way you do the TCP part is generic enough that it could fit into
          ietf-system. I would support an effort to do that for ietf-system
          as well, because as far as I know there are no system-wide
          settings, and at least some operating systems turn TCP keepalives
          on by default.
          
          Kent W: Okay, we'll look into it.
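          The per-session granularity discussed above, as opposed to a
          system-wide sysctl default, is roughly what a NETCONF server would
          do with the socket API. A minimal, hypothetical sketch; the
          Linux-only TCP_KEEP* constants are guarded because they are not
          portable, and the timing values are arbitrary examples:

```python
import socket

def enable_tcp_keepalive(sock, idle=60, interval=10, count=5):
    """Turn on TCP keepalives for one socket -- the per-session
    granularity discussed above, not the system-wide default."""
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    # Linux-specific probe tuning; these constants may be absent elsewhere.
    if hasattr(socket, "TCP_KEEPIDLE"):
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, idle)      # idle secs before first probe
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, interval) # secs between probes
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, count)      # probes before giving up
    return sock

sock = enable_tcp_keepalive(socket.socket(socket.AF_INET, socket.SOCK_STREAM))
print(sock.getsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE))  # non-zero once enabled
sock.close()
```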
          
          Rob W: I think separating modules probably makes sense, but that
          doesn't mean they have to be separate drafts. All of these could
          be bundled. Some of these modules together are relatively small,
          and that may reduce some of the process overhead here.
          
          Mikael A: I wouldn't even need it per connection, or per IP or
          something. I just need a knob: default on, or default off, for all
          sessions.
          
          Kent W: At the system level?
          
          Mikael A: No, for NETCONF: TCP keepalives for NETCONF and SSH. I
          don't even need more than that. I don't know if someone else has
          expressed the need to do this at a per-connection level; I just
          wouldn't. I want a default, set on or off basically (at the system
          level?), and that's all I need. If someone else needs more, then
          okay, we need to do more. I'm just saying there are a lot of
          scenarios in which I think you just want to be able to turn it on
          and off, and you don't need to set it per destination or anything.
          
          Kent W: Understood. Where would we put a global setting like this,
          is the question. Maybe IETF system?
          
          Mikael A: Default on or off for the NETCONF server for all
          sessions that come in. I don't need to do it per IP. I just need
          to say the NETCONF server should keep TCP keepalives on by
          default. I don't need to say "for this IP range" or something like
          that.
          
          Rob W: So I definitely support your suggestion: if you do do this,
          split it out to keep it minimal, and if in the future it needs to
          be expanded, that can be done in a future revision. Just trying to
          avoid feature creep.
          
          Kent W: Right, and actually that's already the strategy we've
          taken, for instance with SSH and TLS. We're really just focusing
          on the minimum necessary to configure the crypto stack so that we
          can do it. If you look at various SSH and TLS implementations,
          there are many more configurable options, and we're not touching
          any of them. So, more of the same in that regard.
          
          Tim C: Just to follow the comment from Mikael: we're using it for
          various applications. There's TR-301 within the Broadband Forum,
          which does call home and uses TCP keepalives. And so there's the
          generic setting, which may or may not work; I don't know. I'm not
          sure it works, because it's on a per-stream basis and we've got
          multiple endpoints that we have to talk to. But we've also started
          looking at the augmentation, because we weren't waiting to see
          when you were going to do it, right? I think we can work with you
          on trying to figure out how to get the best adaptation possible to
          work in different scenarios.
          
          Kent W: If you know people who could collaborate with me, I truly
          believe this could be done by 104. We would still have to figure
          out how to do keepalives, and I don't have an alternative
          proposal; this is the only proposal I have at the moment. I don't
          think the effort is huge if a co-author could help me. I do
          believe we could get it done with probably just one more month's
          worth of time.
          
             Alex Clemm and Reshad Rahman (10 min)
             Update on YANG Push and Related Drafts
             https://tools.ietf.org/html/draft-ietf-netconf-yang-push-20
             https://tools.ietf.org/html/draft-ietf-netconf-subscribed-notifications-18
             https://tools.ietf.org/html/draft-ietf-netconf-netconf-event-notifications-14
             https://tools.ietf.org/html/draft-ietf-netconf-restconf-notif-09
          
          Kent W: Okay, I'll just make some comments. Thank you, Reshad, for
          helping with the RESTCONF notif draft. It wasn't one of the
          original three that we had discussed, but adding it was good, I
          think, mostly because it actually helped us find some other issues
          in the subscribed-notifications draft. It had a sort of cascading
          benefit beyond just enabling NETCONF and RESTCONF to move forward
          together, which in general is good for the working group. To the
          working group: as shepherd, as soon as I receive these drafts,
          I'll send out an email asking those who had posted comments to
          review that what they were hoping to see is there, and to let the
          shepherd know if anything is amiss.
          
          Alex C: Just one clarification: when is it sent to the IESG? Once
          the shepherd review is complete? Is it serialized like that?
          
          Kent W: Yes, for the most part. The shepherd does the write-up
          (the shepherd isn't necessarily always a chair), and then the
          chairs discuss whether it's appropriate to submit for publication,
          which is synonymous with going to the IESG; it almost happens at
          the same time. What that really means is it goes to the AD, Ignas
          in this case, who does an AD write-up and schedules it for a
          telechat, which he mentioned a moment ago. It could be a month
          out, but you have to get on the calendar. Once that occurs, the
          various IESG members will have comments, and all of them will
          ballot on your drafts. There will be discuss items, and you will
          need to resolve all of them. This isn't normally in view of the
          working group; it happens off the working group list, with the
          chairs and the shepherds involved in that process. That can take
          as long as it takes: I've seen it go quickly, a couple of weeks,
          and other times months, depending on how it goes. When that
          concludes, it goes to the RFC Editor, who will look into other
          issues, and there will be more back and forth.
          
          Alex C: I was mostly concerned about getting it in front of the
          IESG. Okay, thank you.
          
             Tianran Zhou (10 min)
             UDP based Publication Channel for Streaming Telemetry
             https://tools.ietf.org/html/draft-ietf-netconf-udp-pub-channel-04
          
          Kent W: As a contributor, first just a general clarifying
          statement: what this draft is presenting is a notif model. We have
          NETCONF and RESTCONF notif models; this would be a UDP-based
          transport notif.
          
          Tianran Z: Yes
          
          Kent W: How might we characterize this draft relative to all the
          other notif drafts? The current notif drafts really just provide
          dynamic-subscription-only support, because we never really got
          around to thinking that we would want configured NETCONF and
          RESTCONF subscriptions. But we do want configured UDP-based
          subscriptions, and that's how this draft came to be adopted as a
          working-group-supported item; and I think having dynamic
          subscriptions makes sense as well. So that's the clarification.
          Secondly, I think we need to be clear that what we're describing
          here is a new protocol. This would be a new binary protocol, and
          we're calling it UDP.
          
          Tianran Z: The name is "UDP-based publication channel".
          
          Kent W: We do have a name for it, but it is a UDP-based protocol:
          it defines its own message header and can contain different
          encodings. Does defining a new UDP-based protocol make sense
          relative to making use of an existing UDP-based protocol? I know
          you discussed IPFIX and CoAP. With IPFIX, one of the things you
          mentioned was that it doesn't support different encodings, but
          maybe that's okay; maybe a single encoding would be okay. For
          CoAP, you mentioned that the message ID space is only 65 thousand.
          I think the concern there is that IoT devices transmit at very low
          rates, but a high-end router could send 65,000 messages within a
          single second, which would lead to many issues. But if that's the
          only concern, there is an opportunity for this working group to
          approach the CoAP working group and ask whether there might be a
          possibility to extend that message header. I'm just exploring
          other ways we might be able to solve the general problem the
          working group wants to solve, which is a UDP-based notification
          message for subscribed notifications. I think we should still
          consider the solution space some more.
          
          Henk B: We are working on the concise YANG telemetry draft, and we
          are using CoAP and the message IDs for detecting duplicates. If
          you expect to have a duplicate in a 16-bit space, then you need a
          bigger message ID, but I don't think that is a concern; it can
          wrap around in that scope if you don't expect duplicates in that
          dimension. There is also the association between the requests: the
          subscription to the stream is the CoAP token, and that's eight
          bytes, so I don't think that's the problem. Maybe the message ID
          is a misinterpreted idea, I think; I don't see that it is a
          problem. On the other hand, you want to have this inside a system,
          like between line cards, where it is not leaving the datastore
          system component, I have the feeling; so maybe then having a
          listening server, like RESTCONF or a CoAP server, is a little bit
          too much, and you can just establish a UDP stream. It depends on
          the application. If it has to go through the Internet, CoAP is
          probably a good idea, but if it is just for high volume inside a
          system, you can basically strip away all of the overhead tuned for
          the Internet.
          
          ???: I was just explaining that IPFIX does leave the box: it's
          generated on the line card itself and goes out of the box to a
          collector.
          
          Henk B: That is of course correct, but I thought IPFIX was used in
          a different place, and for this purpose it therefore has a
          different scope of application. I think because of the encoding.
          
          Kent W: Just to add to that conversation: I've also done UDP-based
          logging where the log receiver was on the same subnet as the line
          cards; it would receive all the logs, do aggregation, compression,
          and deduplication, and then send them over the LAN. I think that's
          your point.
          
          Henk B: Yeah that's my point.
          
          Kent W: Over the LAN you don't really have to tag it, and besides,
          if you miss one, what would you do about it anyway?
          
          Henk B: My last comment is that anything but a binary
          representation doesn't make much sense inside the system. If
          you're talking about being burdened by TCP state, I think being
          burdened by something non-binary is even worse. I think it's
          rather obvious not to use human-readable clear-text formats like
          JSON or XML; that would defeat the initial purpose.
          
          Tianran Z: I should say those requirements come from our
          customers, so we designed for them.
          
          Rob S: (Google) I find the whole section of this draft to do with
          any kind of reliable delivery, and the discussion of how you
          should only deploy this over reliable networks, to be
          under-specified. Our operational experience of trying to put
          UDP-based streaming into production is that there are no reliable
          delivery channels. There are parts of your network where you can't
          possibly assume that all packets will get through, or that there's
          no congestion, because the amount of bandwidth you can buy is not
          sufficient. The cost of having to assume that the channel is
          unreliable is the need for periodic replication of the data so you
          can deal with retransmission. I would go so far as to say that as
          soon as you have to deal with retransmission, you might as well
          use TCP anyway. With TCP you also get the advantage of knowing
          reliably that when you sent an event it got to the other end, so
          you can reduce the number of times you need to stream data. We
          don't think it's actually possible to do event-based updates over
          an unreliable channel, because any system then can't really rely
          on it, with any kind of latency. I think you should probably add
          some discussion to your draft of what the cost of doing this over
          UDP is, and really try to figure out how retransmission works in
          this model, especially if it actually goes to a line card, which
          is kind of the motivation here: you're assuming that there's a
          cache on the line card so that any packet within some known window
          can be requested. My suspicion, and operational experience of
          having a few thousand devices running telemetry at this point,
          across a number of vendors, is that you will just go back to TCP
          as soon as you have to deal with these problems, which are the
          operational realities. I don't really think we should be pushing
          the industry in a direction that doesn't really work.
          
          Tianran Z: The reliability part is not real reliability as in TCP.
          It's a kind of partial reliability, a trade-off between
          reliability and UDP.
          
          Rob S: Right, but the problem is: how do I build any kind of
          system that relies on the data being there? If I'm trying to do
          anything with interface statistics and I know there might be
          fidelity loss because of lost packets, I can't rely on it. I can't
          do anything event-based, because an interface goes down in my
          network and then you don't have any way to react to it; you don't
          know that the state is there. The natural requirement then is that
          you end up building a polling system to make sure you have a
          current enough view to reconcile, and our scaling analysis shows
          that as soon as you do that, you end up with significantly more
          data than you would via TCP. So the scalability argument falls
          down. We've been pushing this entirely TCP-based.
          
          Kent W: As a contributor: what is the motivation for your
          UDP-based draft? Is it the reliability? I don't think it was, so
          much as the desire to enable the line cards to send UDP packets
          having the same source IP. For the other draft that you are about
          to present, the multiple-stream-originators one, the desire is to
          enable that distributed source.
          
          Rob S: So we've looked at this. I think there's a model whereby
          you have a distributed system with different components that can
          each have TCP, and I think you're going to end up going that way
          if you ever care about reliability. If you say this is a hundred
          percent unreliable, then I think you can talk yourself into this
          UDP model. But if you say "I want distribution because it gives me
          more scalability" (point to be proven as to whether that's really
          required), then you can still do TCP, or a lightweight TCP-based
          protocol to the line cards; you're kind of inventing a new
          protocol here, as you pointed out. I would also suggest that for
          debuggability it's a challenge if you have N producers all sending
          with the same source IP. We have challenges around knowing whether
          you're actually in synchronization with that system if you've got
          N different producers and one line card stops producing data: you
          don't really know; you've got no metric to alert on to say this
          source isn't sending data anymore.
          
          Kent W: I think we're jumping into the next draft, but I think the
          idea with that draft is that the configuration model would let you
          configure the UDP transport at the system level, and then the
          system implicitly distributes that to the line cards and tells
          each of them. But if you were to do TCP, you'd have to be
          explicit: the configuration model would actually have to configure
          the IP address for each line card.
          
          Rob S: Yeah, I'm suggesting a bit of configuration pain is better
          for the overall system.
          
          Henk B: Again, if you are expecting congestion, you will have UDP
          datagram loss. That is a fundamental decision you have to make: do
          you expect congestion with your streams or not? If you have that
          expectation, which I think is likely, then you have to deal with
          retransmits, and for that you should not reinvent yet another
          (reliable) transmission mechanism for UDP in every draft in the
          IETF. There is a good template for that in CoAP, where a reliable
          message is sent every thousand messages; you can then see how many
          messages you lost, and that window can be retransmitted. It's a
          little like TCP, but lightweight; I call it a reliable CoAP. That
          is an alternative. My suggestion is to approach the research group
          that is meeting here: there are two drafts in development which
          talk about how to associate data items that are in series, and the
          problem of retransmits is discussed there, if you have a problem
          that is not solved in general and you want to solve it in your
          draft.
          
          Tianran Z: I know of CoAP used for IoT. Do you have an example of
          a CoAP application that is used for routers?
          
          ???: In the DDoS protection working group we use it as a kind of
          basic transport protocol for that scenario, if you want to look.
          
          Tianran Z: I would like to see.
          
          Henk B: Just because something was initially intended to be used
          in constrained environments doesn't make it unfeasible for the
          rest of the Internet.
          
          Kent W: As a contributor, just a quick follow-up on the discussion
          about retransmissions. When I first saw the message ID in the UDP
          draft, I never thought it would be for the purpose of knowing when
          to request a retransmission. I only thought it would be used for
          ordering of the packets received by the receiver, because UDP
          doesn't guarantee ordered delivery, and for detecting gaps when a
          message was dropped: not to request it again, but to know that you
          lost a message. I never thought there would be a desire to try to
          build reliability on top of a UDP-based protocol.
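          Kent's reading of the message ID (ordering and gap detection, not
          retransmission) can be sketched as follows. This is an
          illustrative fragment, not anything from the draft; the 16-bit
          modulus is just CoAP's Message ID space used as an example of
          wraparound:

```python
def missing_count(prev_id, cur_id, modulus=1 << 16):
    """Number of messages lost between two consecutively received IDs,
    assuming IDs increment by one and wrap at `modulus` (e.g. CoAP's
    16-bit Message ID space). This only detects gaps; it does not
    request resends."""
    return (cur_id - prev_id - 1) % modulus

print(missing_count(100, 101))   # adjacent IDs: nothing lost
print(missing_count(100, 105))   # four messages lost in between
print(missing_count(65534, 1))   # wraparound: 65535 and 0 were lost
```

          Note what this cannot do, per Reshad's later point: if the sender
          simply goes silent, no gap is ever observed, which is why a
          keepalive or heartbeat is still needed.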
          
          Rob S: I think that's an interesting mode of operation. Like I
          said before with SNMP, I can poll the device and know I'll get
          some stats back; maybe there's some loss in them, but I know at
          what interval I'm polling. In this (UDP) mode where there's no
          reliability, if the device just shuts up you can't tell, because
          you didn't get a sequence number to tell you. It becomes quite
          operationally difficult to not assume reliability when there isn't
          a guarantee that the thing at the other end is sending data. I
          mean, we tried this; we looked at it as the preferred way to start
          with, and this is a long story with internal collector deployment:
          how you know when it can reconnect, how you deal with redundancy
          between collectors, those kinds of things. I think it just makes
          for more and more challenges. That is why I think the draft could
          do with some discussion of how you actually operate a system like
          this.
          
          Jeff T: Just to interject a comment: I work for a company where we
          use streaming extensively from thousands of devices. We spent
          quite some time looking at UDP versus TCP, and I also discussed it
          with potential customers. UDP was a no-no: it has to be reliable,
          otherwise you need to build an additional layer to ensure
          transmission reliability.
          
          Benoit C: It's interesting, because we are discussing the same
          thing we discussed for IPFIX for 10 years. The message ID in IPFIX
          was just to know about the order, and to know that you've been
          losing flow records. The point is that for IPFIX it works fine
          because it's accounting: if you lose one packet, big deal. By the
          way, you can't expect a router to keep the information records
          around. I think the key point is that if you rely on this
          mechanism for an event, like Rob was mentioning, it must be
          reliable. If you're just sending monitoring information, you can
          use UDP.
          
          Rob S: Just to add to Benoit's point: I think it's fine,
          fundamentally, for sFlow or IPFIX to have a different nature,
          because we know that of the N packets going through the device we
          only expect a sample, therefore losing one... I don't know of any
          system built with sFlow or IPFIX that guarantees it will get every
          flow sample. Whereas with telemetry data, if we're building
          systems that split the control plane across the device and off the
          device, then we need it to be reliable, just as you would need
          some of this data internally to the system, like links going up
          and down, to be reliable for routing protocols.
          
          Benoit C: Again, it depends on what you call telemetry. If
          telemetry means pushing all information at high frequency, it's
          like IPFIX: you're sending flow records, even if they're not
          flows. But if you condense everything in your telemetry so it
          becomes an event, you can't afford to miss it.
          
          Alex C: Some applications may require reliability, while for
          others, as you say, losing one record is not a big deal. Another
          question is how it is being used: if you use this for periodic
          updates, you know you are expecting an update every period
          already, so if a period is missing you can basically infer some of
          those things. I do agree that we need to have the discussion of
          these operational things and the trade-offs. At the same time, I
          think nobody is saying this is the be-all and end-all transport
          for all use cases. This is one use case, for certain scenarios
          where the operational conditions you described would be
          applicable.
          
          Rob S: Again, just a response to that. I think there are a few
          challenges with those assumptions. As soon as you say "I'll stream
          everything periodically", you're going to significantly increase
          the data that comes from the device, by hundreds and hundreds of
          times; it actually makes the system scalability problem worse. You
          want to send things only when they change. That gives you a
          significant advantage for large data sets; it also gives you a
          significant advantage for interfaces that are down, on systems
          with a radix of a thousand or so, which is common in today's
          networks. As soon as you say "I'll send things periodically",
          you're going to end up with these scalability concerns, and you're
          probably now dealing with worse scale on the device for your
          periodic UDP than the cost of doing TCP reliably. The other
          problem with periodic is that you don't actually know what they
          collect, or what you were meant to receive: if a whole line card
          stops sending, did it get removed from the system? It's actually
          hugely difficult, without lots and lots of other accounting, to
          know what you should have been sent during that period. The third
          thing I should say is: we're inventing a new protocol here to send
          telemetry data, so let's not invent one that we know is flawed and
          only works in a small number of cases, because that is just going
          to complicate things.
          
          Alex C: When you subscribe, the subscriptions can support either
          use case. A user will decide whether they are happy with periodic
          or whether on-change is more applicable for their particular
          application, for instance if you want continuous telemetry to do
          some kind of statistics trend-line analysis. Not every use case
          requires on-change.
          
          Rob S: That's true. But I guess the point is that there's some
          data, from underlying hardware, that you do want to sample. This
          doesn't mean that the system can't support sending data
          periodically. There is a significant amount of data that won't
          need to be sent. What I would encourage people to go and look at
          is the data that is being pulled from devices. This is what we've
          done: look at what proportion of it is event based versus
          periodic, and do some calculation as to what the data volume is.
          We did this scaling analysis for the gRPC-based telemetry, where
          we can show significant reductions based on this, even though some
          data is being sampled and sent periodically because it needs to be
          sampled like that from underlying hardware sources.
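
          Rob's suggested exercise, comparing the volume of periodic
          sampling against on-change for mostly-static data, is a
          back-of-envelope calculation along these lines (all numbers below
          are hypothetical, not from the presentation):

```python
# Hypothetical back-of-envelope comparison of periodic vs. on-change
# telemetry volume for mostly-static data (e.g. oper-status on a
# chassis with ~1000 interfaces, many of them down).
interfaces = 1000
bytes_per_update = 200        # assumed encoded size of one update
period_s = 10                 # periodic sampling interval
window_s = 3600               # observe for one hour

# Periodic: every interface reports every period, changed or not.
periodic_bytes = interfaces * bytes_per_update * (window_s // period_s)

# On-change: only actual state transitions are sent.
changes_per_hour = 50         # assumed; most interfaces never change
on_change_bytes = changes_per_hour * bytes_per_update

print(periodic_bytes // on_change_bytes)  # thousands of times more data
```

          With these assumed numbers the periodic stream carries several
          thousand times the data of the on-change stream, which is the
          shape of result Rob is asking people to check against their own
          pulled data.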
          
          Reshad R: Related question. I remember there's a message ID, and
          that's how the receiver knows that so many messages have been
          lost. But if you're not receiving any messages, how do you know
          that there's message loss? Are we going to do a UDP keepalive
          draft?
          
          Tianran Z: We need this keepalive information. I don't know; this
          is something we need to consider. We also have, in the other
          draft, some mechanism to solve this problem.
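
          The loss-detection mechanism Reshad refers to, a receiver
          inferring gaps from per-message sequence numbers, can be sketched
          as follows (a minimal illustration; the function name is
          hypothetical, and reordering and counter wraparound are ignored):

```python
def count_lost(received_ids):
    """Count messages that appear lost, given the message IDs seen so
    far in arrival order, assuming the sender increments the ID by one
    per message.  Reordering and wraparound are ignored for brevity."""
    lost = 0
    expected = None
    for msg_id in received_ids:
        if expected is not None and msg_id > expected:
            lost += msg_id - expected  # these IDs never arrived
        expected = msg_id + 1
    return lost
```

          For example, `count_lost([1, 2, 5, 6])` reports two lost
          messages, but `count_lost([])` reports zero even if the sender has
          died, which is exactly the gap in the mechanism that Reshad's
          keepalive question points at.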
          
          Benoit C: In IPFIX we solved that with SCTP and got a perfect
          solution, where you would have a stream which is reliable,
          unreliable, or partially reliable, depending on what you're
          sending. If it is monitoring, it would be unreliable: you miss a
          couple of pieces of information, fine, no big deal. You might be
          partially reliable, where you do your best, or reliable if it's
          event based. That's how we solved it in IPFIX.  However, SCTP
          didn't pick up, and there's an issue with line cards. It is an
          operational issue, and I think Rob mentioned it: how do you
          identify your device? Is a router one IP address, or is it a sum
          of IP addresses, one per line card?
          
          Tianran Z: Sorry I do not understand your question.
          
          Benoit C: It's not a question; it's an observation that we've been
          looking at these issues for ten years. It becomes more of an
          operational issue. Decide what you want to solve, and then you
          will have the solution for your protocol.
          
          Mahesh M: Speaking as a contributor. I think the message that I'm
          getting from the working group is that you probably need to look
          at the data set to decide if you need a UDP-based channel or if
          you need reliability. If you are going to build reliability into
          this with the sequence number, why not just use TCP? As Benoit
          mentioned, if it's monitoring data that you're looking at, losing
          a few packets is not a big deal. But if you're looking at events,
          you can't afford to lose them. What is the cost of doing that?
          
          Kent W:  One other comment, as a contributor. I heard last night
          that, I guess in the CoAP working group, there's an effort that's
          been going on for a couple of years now to do a CoAP-based broker
          pub/sub mechanism. We should learn more about it and see how it
          might be usable in this space as well.
          
          Non-Chartered items:
             Tianran Zhou (10 min)
             Subscription to Multiple Stream Originators
             https://tools.ietf.org/html/draft-zhou-netconf-multi-stream-originators-03
          
          (Tianran presenting)
          
          Kent W: Kent, as a contributor. Can you go back to your previous
          slide, the one that had the diagram with the red box that said
          "out-of-scope"?  Yes. Why is it out of scope?
          
          Tianran Z: As I mentioned, in some instances, like carrier
          routers, this is kind of an internal implementation, so I think it
          must stay between the mainboard and the line cards.
          
          Kent W: Right, okay. I think that this being out-of-scope is
          dependent on the conclusion of some of the operational
          requirements that we were discussing a moment ago.  Going back to
          Rob's comment from before, a little more configuration complexity
          may be warranted if we were, for instance, needing to configure
          TCP instead of UDP, in which case it would have to be in scope,
          because you'd have to be configuring, at least, what TCP interface
          each line card should use. So I think what you're saying is that
          it's out-of-scope here because the expectation is that, by using
          UDP, the routing engine can internally communicate with the line
          cards.
          
          Tianran Z: yes, okay
          
          Kent W: So there's that assumption, which I think is still TBD, is
          what I'm thinking.
          
          Tianran Z: Okay, but my concern is that this part is a little bit
          complex, and may vary across implementations, so I'm not sure it
          can converge in this draft. That's my concern.
          
          Mahesh J: Mahesh, as a contributor. Okay, if you could go down a
          couple of slides, to where you talked about being able to reliably
          indicate a change in the subscription [the Subscription State
          Change Notifications slide], it says "all the subscription state
          change notifications MUST be delivered".  Now, when you say
          "must", does that mean you're thinking about a reliable channel
          here for delivering that change notification?
          
          Tianran Z: That's an interesting question. From the message layer,
          no, we do not consider that it must be a reliable channel, so
          maybe in the UDP case, maybe we need to consider that.
          
          Alex C: This is Alex.  Just to add on, or respond to that: I would
          not mix this with the earlier transport discussion.  The goal for
          the subscriptions, per se, has always been to basically make the
          fundamental mechanism reliable, so that you can avoid having to
          poll things.  Now obviously, in this case, if you have a new
          component subscription that was added, or something that was
          removed, that is an event that you would need to know, certainly
          as a collector; therefore, basically, there's something that needs
          to be notified.  We've had some internal discussion on whether it
          should be subscription-modified or whether there should be another
          type of notification, but either way, it is an event, and I
          suppose it should be foreseen as part of the control channel.
          Now, for the actual telemetry stream, whether that is reliable or
          not is a separate issue; I would separate those discussions, but
          this one would be needed for a reliable control channel, so to
          speak.
          
          Kent W: Kent, as a contributor, maybe as a chair, I don't know.
          The motivation for the working group, particularly when adopting
          the previous draft (this draft is not yet adopted, but the idea it
          discusses was presented at the time we adopted the previous
          draft), was the goal of supporting line cards being able to send
          messages themselves directly, as opposed to trying to forward them
          to the routing engine in order for the routing engine to send
          them, because, from experience, we know that the internal
          backplane switching fabric does not have enough bandwidth to
          transmit that much data. It's just not possible; the line cards
          have to be able to send directly themselves.  And, in fact, things
          like encryption are probably problematic, and this goes to the
          operational requirements: are we actually thinking that, for these
          very high logging scenarios, the destination would be an internal
          receiver, something that is on the LAN, something that itself
          would collect the logs in unencrypted form, do deduplication and
          analysis and compression and, perhaps, even itself convert them to
          binary?  We don't really need binary on the LAN; we need binary on
          the WAN.  When we talked to customers, their cost for bandwidth
          over the WAN is expensive; that's when they care. They don't care
          about the bandwidth on the LAN.   I think we need to understand
          what the operational requirements are, and what problem we're
          trying to solve; maybe then some of this would become more clear.
          I still strongly support the ability to send logs out of the line
          cards directly, as that's an important problem to solve, but for
          the motivation for it being binary, and the motivation for it
          being UDP even, I think we should go back to asking whether that's
          really important to solving the problem here.
          
          Mikael A.: Mikael Abrahamsson.  It struck me, the whole thing
          about line cards and it being on-box: I have the use case where I
          might have a Wi-Fi access point that basically doesn't have an IP
          address, so I'm speaking some kind of protocol to it, or I don't
          want it to send anything; it's going through me, but it's a
          different device, and I want to expose this in YANG, so that for
          the Wi-Fi it's like I'm the NETCONF server, but I'm configuring
          the guy over there, and I still want it exposed to my NMS that
          these are two different devices.  Isn't that kind of the same
          problem?  Don't you want a more generic approach to how to expose
          this in YANG and NETCONF?  Because isn't this the same thing,
          whether it's a line card that is like its own computer sitting in
          the chassis, or something else where you're acting on behalf of
          that guy?  I mean, I've seen many different scenarios where you
          need the same concept, so can we make it more general?
          
          Tianran Z: Yeah, I think you provide an interesting use case,
          maybe similar to the IoT use case, and actually I think this
          framework is a generic one.
          
          Mikael A: My problem is that we're talking about what's in the UDP
          packets, that is, streaming telemetry.
          
          Tianran Z: that's another draft
          
          Mikael A: Yes, I know, but this here is talking about
          subscription; isn't this just configuration?  It's not for me
          specifically; I'm doing this for another guy that is near me, and
          I'm controlling him, so it's not me.  Don't we need a more generic
          approach?  And don't we need to talk about what the configuration
          is instead?  Isn't it about how we do this generically?  Isn't
          that what we should be discussing?  We're talking about
          subscriptions here, and how to talk to that guy, or what to
          actually configure, but don't we actually need just a best common
          practice for exposing this entire concept of different devices
          managed through one NETCONF server?
          
          Kent W: I have a response for this, but I think Rob does as well.
          Is it for this question, Rob?  Yes, please go ahead.
          
          Rob S:  Rob. I completely agree. In the gRPC space, for both
          telemetry and configuration, we added a generic way to make a path
          addressable to a certain target, and the managing entity deals
          with it; it's used for both telemetry and for configuration.  I
          think having case-by-case solutions is not optimal. I think having
          a single way to say there is this management agent that is
          responsible for this other domain is super useful for many, many
          cases.
          
          Kent W: Okay, so up-leveling the problem space, great!  My closing
          thought is not really on this draft, but kind of on the other one
          as well.  Currently, the YANG Push and friends drafts are almost
          out of Last Call and into Shepherd write-up.  As a working group,
          we've only defined support for dynamic subscriptions; we do not
          yet have any support for configured subscriptions.  This draft is
          on the path towards enabling support for configured subscriptions,
          but I think that there may be other paths we could explore that
          would get us there faster; specifically, an HTTP-based push
          mechanism is something along these lines. I don't know, maybe
          somebody would be interested in putting together an ID to propose
          another notif draft.  The nice thing about the way that we've
          constructed, or deconstructed, the YANG Push notification drafts
          is that we do have all these notif mechanisms, so it's like a
          Swiss Army knife.  It's great that we have a UDP-based mechanism
          available; certain deployments will use it for their use cases,
          others won't because it doesn't match their use cases.  So I think
          we should also consider other notif mechanisms that would enable
          us to have configured subscriptions.
          
          Tianran Z: I think the idea has become mature, and the solution
          and the scope are kind of clear, so I'm wondering if I can ask the
          working group to adopt this document, but...
          
          Mahesh J: I think before we get to the point of adoption, we
          probably need to address the question that Mikael raised, which is
          whether we need to up-level this problem definition, before we get
          to the question of adoption.  Maybe you should consider all that
          first.
          
          Tianran Z: okay, thanks.
          
             Qin Wu (10 min)
             Inline Action Capability for NETCONF
             https://tools.ietf.org/html/draft-zheng-netconf-inline-action-capability-01
          
          (Qin presenting)
          
          Mikael A: Mikael Abrahamsson.  I don't know, I haven't read the
          draft. If I understood correctly, is this only for when you change
          the configuration but the resulting operational state does not
          change?  So you're splitting the range into two, but the effective
          configuration on the line card, or whatever, never changes?  Is it
          only for that type of configuration, or is it also like you split
          it and then you delete one, and you do that in one operation?
          
          Qin W: uh
          
          Mikael A: Okay, so you had the example of one-to-five and
          six-to-ten; you can merge those into one record.
          
          Qin W:  yeah
          
          Mikael A: If this changes, nothing in real life changes; I mean,
          the line card's hardware doesn't get reprogrammed by that
          operation. You're changing the configuration but the state of the
          device doesn't change. Is it only for that type of operation, or
          is it also for deleting one of the VLANs in the middle (i.e., is
          it for both)?
          
          Qin W: Yeah, I think right now we really support both, actually.
          The motivation is that merging several tags into one allows you to
          do a better NETCONF query. But in some cases you may also need to
          delete some of the values from the VLAN tag ranges, so we provide
          such a capability. In some cases we actually need to support both,
          and then we can optimize by merging several ranges into one range.
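
          The merge Qin describes, collapsing the 1-5 and 6-10 records into
          a single 1-10 range on the server side, amounts to something like
          the following sketch (illustrative only, not taken from the
          draft):

```python
def merge_vlan_ranges(ranges):
    """Collapse overlapping or adjacent (lo, hi) VLAN tag ranges into
    the minimal set of ranges, e.g. (1, 5) and (6, 10) -> (1, 10)."""
    merged = []
    for lo, hi in sorted(ranges):
        if merged and lo <= merged[-1][1] + 1:   # touches/overlaps last
            merged[-1] = (merged[-1][0], max(merged[-1][1], hi))
        else:
            merged.append((lo, hi))
    return merged
```

          The delete case Qin mentions is the inverse operation: removing
          6-10 from a merged 1-10 record splits it back into disjoint
          ranges, which is where the list-key manipulation gets awkward in
          plain edit-config.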
          
          Mikael A: So, do you see this as an optimization in the number of
          transactions, or in processing power on the server, or on the
          client?
          
          Qin W: We don't want to add overhead to the client; actually,
          maybe you just need one transaction. You send a request to the
          server, and we use some existing config template to merge the
          ranges into one. It all happens on the server side, so you reduce
          overhead on the client side.
          
          Rob W: Rob Wilton, Cisco.  So I'm still not convinced that this
          particular use case is actually a problem. I'm not convinced
          there's a scale issue in terms of configuration here.  Even if you
          split out the number of VLANs over hundreds of interfaces, I still
          think the amount of config is going to be 10k or 20k, something
          that would be small in terms of the cost of consuming it. So I'm
          not convinced by that aspect, that there is a problem here to be
          solved.  In terms of wanting to do more advanced VLAN operations,
          i.e., breaking tags or inserting tags into particular strings, for
          example, then yes, that's okay, but I think those could maybe just
          be RPCs, potentially on a VLAN model; that's how I would implement
          them. Then, coming back to the general inline actions, I'm still
          sort of conflicted as to whether this is a good thing to do. I
          think this is more generally about transactions: saying I want to
          give a sequence of operations to the server as one transaction,
          have it perform all of these things, and either succeed or fail.
          That's the angle I'd look at for this problem, rather than just
          adding actions into configuration requests. But even with that, I
          still question whether it's a useful thing or not; I'm not
          convinced this is a problem to be solved at this stage.
          
          Qin W: But you say you don't know where the problem comes from.
          Actually, we use edit-config as an example: if you modify the VLAN
          tag range with a merge or a split, you may run into trouble
          because you may operate on some list where the key index cannot be
          deleted, so you have several disjoint ranges; you need to delete
          the several disjoint ranges first and then create a new range with
          a larger range. That's the difficulty with NETCONF that we want to
          address.
          
          Rob W: Okay, so that's potentially a different problem to solve,
          and I think, again, we need to look at the data model that you're
          talking about. The IETF one I've been through doesn't have that
          issue, as the VLANs are just information on a subinterface, so
          it's not actually something where you have this concern; it's just
          manipulating that string, and the ability of a client to mangle
          VLAN IDs into a string is probably bordering on trivial. I mean,
          it's that easy a thing to solve. So it might be that your data
          model is different, and hence there's a different requirement
          coming from that, so I'm happy to look at what your specific data
          model is to see what the changes are, what's different.
          
          Qin W: Yeah, we do have such a model; we can show you offline.
          
          Rob W: okay.
          
          Mahesh J:  Mahesh, as a contributor. Adding to Robert's concern
          about why we might need this: is the problem specifically for the
          case where we're talking about a range, where we're trying to
          specify whether to expand it or break it up? Is that the specific
          use case that we're looking for a solution for?
          
          Qin W: Yeah, the case we give is maybe kind of limited, but we
          really want to generalize this idea.  The general idea is that we
          can provide the operation for the NETCONF protocol so you can
          actually improve NETCONF efficiency. Here we gave the example of
          the VLAN tag range, where the value is an interval type; maybe
          there are other cases where the value is actually a string type,
          and you would also do this merge operation.
          
          Mahesh J: what other use case would you have?
          
          Qin W: In some cases we haven't brought up yet: for example, you
          may transpose some learned configuration into static configuration
          or dynamic configuration data.  By the way, we only talked about
          these cases, but we have some other cases we haven't brought up.
          
          Mahesh J: Yeah, okay. I think if you bring those cases, it might
          help the working group appreciate and understand the problem a
          little better.
          
          Qin W: we can do that
          
          Kent W: Kent, as a contributor. I agree with Rob; I don't
          understand the motivation for wanting to solve this. I guess
          scalability and efficiency, but does it really rise to a level of
          concern that we need to solve the problem?  And the solution seems
          like a point solution, and the fact that it's NETCONF-only is
          concerning; I'd request that we have a solution that works for
          both NETCONF and RESTCONF.  If it is truly a transaction-like
          mechanism, which I think is what Rob was saying, then maybe
          enabling YANG Patch (note: Kent accidentally said "push") to be
          used by NETCONF would be another way of enabling something like
          this.  Then, going to Mahesh's comment right here, if it's truly
          just for ranges, then it seems like maybe we'd want to have a
          datatype, a "typedef range", and then this operation would be
          available whenever that typedef was in play.  It's just unclear at
          the moment. I guess, going to Mahesh's last point, more examples
          and data analysis are needed; it's currently unclear why we would
          want to pursue this.
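
          The YANG Patch alternative Kent mentions (RFC 8072) already
          provides an ordered, all-or-nothing list of edits over RESTCONF,
          so a split-then-merge could be expressed as one patch. The sketch
          below builds a hypothetical patch body as a Python dict; the
          target paths and list keys are made up for illustration and do not
          come from any draft:

```python
# Hypothetical YANG Patch (RFC 8072) body that replaces two VLAN
# ranges with one merged range as a single ordered transaction.
# Target paths and list keys are illustrative only.
yang_patch = {
    "ietf-yang-patch:yang-patch": {
        "patch-id": "merge-vlan-ranges",
        "edit": [
            {"edit-id": "del-1-5", "operation": "delete",
             "target": "/vlan-ranges/range=1-5"},
            {"edit-id": "del-6-10", "operation": "delete",
             "target": "/vlan-ranges/range=6-10"},
            {"edit-id": "add-1-10", "operation": "create",
             "target": "/vlan-ranges/range=1-10",
             "value": {"range": [{"id": "1-10"}]}},
        ],
    }
}
```

          Per RFC 8072, the edits are applied in order and the whole patch
          either succeeds or fails, which is the transaction behavior Rob
          described; what remains is making such a mechanism usable from
          NETCONF, which is the path Kent suggests exploring.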
          
          Qin W: Yeah, I think the intention is that we provide such a kind
          of solution; hopefully we can generalize this so it can apply not
          only to YANG Push, but also to the existing NETCONF operations,
          not limited to the existing NETCONF protocol operations. We
          haven't investigated how this can be applied to YANG Push either;
          if that's the case, we think maybe we need to clarify the problem
          space first. I think we should look further at applying it to
          YANG Push.
          
          Kent W: you're saying "YANG Push", but you mean to say "YANG Patch",
          right?
          
          Qin W: YANG Push, not YANG Patch
          
          Kent W: how is it related to YANG Push?
          
          Qin W: oh, you meant, you mention...
          
          Kent W: YANG Patch
          
          Qin W: Patch, right, patch. Oh sorry, I missed that.
          
          Kent W: no worries
          
          



Generated from PyHt script /wg/netconf/minutes.pyht Latest update: 24 Oct 2012 16:51 GMT -