One protocol? #34
I've been thinking about this over the weekend, and there are a lot of reasons why we ended up with RTMP, HLS, and DASH being used for both input and output, but really they just boil down to patterns where systems appear simpler to reason about if there's only one protocol in use. In a way it feels inevitable that if we focus on creating a protocol that only facilitates either input or output, someone will end up using it for the inverse. What differences in design do you imagine might happen if we split and develop them separately, or only focus on one part?
I'm happy to propose text on this, as soon as I have some idea what it should say 🧙 ...
I would say they're addressed by the same protocol because, for the aspects of the protocol that are technically interesting, the direction of transfer does not particularly matter; at the end of the day, it is a media transfer from one endpoint to another. The latency target, by contrast, would lead us to different design choices.
Hi, @vasilvv, that makes sense. So I wonder - we've been talking in various places in and outside of the IETF about "low latency" for various values of the word "low", and someone somewhere finally said "actually, no one wants HIGH latency". 😉 Tagging @gloinul, but I don't have a GitHub account name for Yang Yin - I'll point her to this issue, to keep her in the loop. So my question to our happy group is: do people think it would be OK to say that we're shooting for one protocol, because we share your opinion, unless differences in latency targets would prevent us from using the same protocol? If so, I know what to say in a PR, and it won't take long for me to propose text.
I largely agree with @vasilvv here, with one exception: the handling of reliability and resiliency is the one real difference. In certain ingest scenarios, having multiple separate streams (sometimes over separate physical connections) is required. This isn't really needed in playback: if a playback connection fails, one viewer goes dark; if the input fails, everyone goes dark. Provided we ensure this detail isn't overlooked, I think we can have one protocol.
So, still catching up on this draft, so speaking from a point of ignorance about what is actually written in the document. I think @vasilvv has a point in that if the problems look similar enough, one wants one solution. At the same time, I think it might be best not to jump to a conclusion just yet. Build up the arguments for what is the same and what differs in usage or requirements. Then, if it makes sense to combine them into one protocol, do it. If they turn out to be hard to combine when actually doing protocol design, then it might be multiple protocols.
Adding @ushadow - thanks, @fiestajetsam. I would never have found that! |
@afrind and @gloinul - after looking more closely at the current editor's version, I'm not seeing text that sets an expectation that there will be One Protocol To Rule All The Use Cases. Given that this might be an important consideration for the requirements section, which we haven't started to update yet, I suspect it's better for you two to say something about that in the chair slides - basically, that we haven't drilled down far enough to know whether One Protocol is a realistic requirement or not.
So I think there are a few missing things that really make the case for a single protocol. We need to be thinking about a video pipeline from broadcaster to viewer.
@kixelated in television, the broadcast chain is highly heterogeneous with regard to protocols, sources, and movements at the borders. Once a media stream is in the chain past a boundary (typically MCR), it's normalised down into uncompressed form in a limited number of muxings, but then compressed and packaged in a variety of forms for syndication to all outputs, both for distribution and for monitoring. Thus, it's infeasible to consider "glass to glass" except in incredibly simple cases, and transmuxing in these chains is also unavoidable, so there will never be "one to rule them all". Realistically, the work we do here should be complementary to mixed deployments in the right ways.
That is the natural and right way of doing this. Most believe latency is a tradeoff against quality, and that is often true. With relaxed latency requirements, more tools are available to improve quality (such as more efficient encoding, retransmissions, FEC, multi-path transport, or interleaving). With a different latency target, one should be able to enable or disable the tools being used. So, by definition, this could be one giant protocol with all the bells and whistles that can be enabled on demand. Or it could be two or more smaller protocols for target use cases. No need to make that decision up front.
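The "enable tools on demand based on the latency target" idea above could be sketched as a simple feature-negotiation rule. This is purely illustrative: the type names, function, and threshold values below are hypothetical and not part of any MOQ draft, but they show how one protocol could degrade gracefully from high-quality/high-latency to low-latency operation.

```python
from dataclasses import dataclass

# Hypothetical sketch: one protocol whose resiliency/quality tools are
# toggled per session from the negotiated latency budget. All names and
# thresholds here are illustrative assumptions, not from any spec.

@dataclass
class SessionFeatures:
    retransmissions: bool  # retries cost at least one extra round trip
    fec: bool              # FEC adds encode/decode and buffering delay
    multipath: bool        # path scheduling adds reordering jitter
    interleaving: bool     # interleaving buffers data before send

def features_for_latency(target_ms: int) -> SessionFeatures:
    """Enable more quality-improving tools as the latency budget grows."""
    return SessionFeatures(
        retransmissions=target_ms >= 200,
        fec=target_ms >= 100,
        multipath=target_ms >= 500,
        interleaving=target_ms >= 1000,
    )

# A 150 ms conferencing-style budget leaves room for FEC but not
# retransmission, multipath, or interleaving under these thresholds.
print(features_for_latency(150))
```

The point of the sketch is the comment's closing observation: whether these toggles live in one protocol or split across several is a decision that can be deferred until the requirements are clearer.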
I'm leaving this discussion for reference during work on requirements, now that MOQ has been chartered. |
I tried to tackle this in the motivation section of the latest warp draft. |
@SpencerDawkins observes that if he's current on where we are now, we can simplify the description of Live Media in subsection 3.3. |
@SpencerDawkins and @fiestajetsam think there's enough text in this discussion to produce a PR, after IETF 117. |
@SpencerDawkins officially notices that it's "after IETF 117", so he should be producing a PR. 😜 |
@SpencerDawkins will produce a PR for the next draft revision. |
One thing that is not mentioned in this document is why there should be one protocol to cover the 3 selected use-cases. It's just listed as a requirement that the protocol can work in any direction.
I think we need to address why there are protocols exclusively for distribution (e.g., HLS, DASH) and contribution (e.g., RTMP, SRT). The simplest reason is that distribution needs HTTP CDN support, but that's a loaded requirement that rules out a huge number of designs. While I certainly want one protocol for all use cases (as WebRTC, and even RTMP back in the day, aimed to be), there needs to be more careful consideration of how that would work.