One protocol? #34

Open
kixelated opened this issue Feb 17, 2022 · 17 comments
Labels
IETF 118 (Target PR text for IETF 118), requirement (Impacts Requirements section)

Comments

@kixelated

kixelated commented Feb 17, 2022

One thing that is not mentioned in this document is why there should be one protocol to cover the 3 selected use-cases. It's just listed as a requirement that the protocol can work in any direction.

I think we need to address why there are protocols exclusively for distribution (e.g. HLS, DASH) and contribution (e.g. RTMP, SRT). The simplest reason is that distribution needs HTTP CDN support, but that's a loaded requirement that rules out a huge number of designs. While I certainly want one protocol for all use-cases (e.g. WebRTC, and even RTMP back in the day), there needs to be more careful consideration of how that would work.

@fiestajetsam
Collaborator

I've been thinking about this over the weekend, and there are a lot of reasons why we ended up with RTMP, HLS, and DASH being used both for input and output, but really they boil down to a pattern where systems appear simpler to reason about if there's only one protocol in use. In a way it feels inevitable that if we focus on creating a protocol that only facilitates either input or output, someone will end up using it for the inverse.

What differences in design do you imagine might emerge if we split them up and developed them separately, or only focused on one part?

@SpencerDawkins
Collaborator

I'm happy to propose text on this, as soon as I have some idea what it should say 🧙 ...

@vasilvv

vasilvv commented Mar 1, 2022

I would say they're addressed by the same protocol because, for the aspects of the protocol that are technically interesting, the direction of transfer does not particularly matter; at the end of the day, it is a media transfer from one endpoint to another. The latency target, by contrast, would lead us to different design choices.

@SpencerDawkins
Collaborator

Hi, @vasilvv, that makes sense. So I wonder - we've been talking in various places inside and outside the IETF about "low latency" for various values of the word "low", and someone somewhere finally said "actually, no one wants HIGH latency". 😉

Tagging @gloinul, but I don't have a Github account name for Yang Yin - I'll point her to this issue, to keep her in the loop.

So my question to our happy group is, do people think that it would be OK to say that we're shooting for one protocol, because we share your opinion, unless differences in latency targets would prevent us from using the same protocol?

If so, I know what to say in a PR, and it won't take long for me to propose text.

@fiestajetsam
Collaborator

fiestajetsam commented Mar 1, 2022

I largely agree with @vasilvv here, with one exception: the handling of reliability and resiliency is the one real difference. In certain ingest scenarios, having multiple separate streams (sometimes over separate physical connections) is required. That isn't really needed in playback: if a playback connection fails, one viewer goes dark; if the input fails, everyone goes dark. Provided we ensure this detail isn't overlooked, I think we can have one protocol.

@gloinul

gloinul commented Mar 1, 2022

So, I'm still catching up on this draft, so I'm speaking from a point of ignorance about what is actually written in the document. I think @vasilvv has a point: if the problems look similar enough, one wants one solution. At the same time, I think it might be best not to jump to a conclusion just yet. Build up the arguments for what is the same and what differs in usage or requirements. Then, if they make sense to combine in one protocol, do it. If they turn out to be hard to combine when actually doing protocol design, then it might be multiple protocols.

@SpencerDawkins
Collaborator

Adding @ushadow - thanks, @fiestajetsam. I would never have found that!

@SpencerDawkins
Collaborator

@afrind and @gloinul - after looking more closely at the current editor's version, I'm not seeing text that sets an expectation that there will be One Protocol To Rule All The Use Cases. Given that this might be an important consideration for the requirements section, which we haven't started to update yet, I suspect it's better for you two to say something about that in the chair slides - basically, that we haven't drilled down far enough to know whether One Protocol is a realistic requirement or not.

@kixelated
Author

kixelated commented Mar 3, 2022

So I think there are a few missing pieces that really make the case for a single protocol. We need to be thinking about the video pipeline from broadcaster to viewer.

  1. The latency approach needs to be glass-to-glass. Different protocols have different latency strategies and attempting to combine them often results in a worse user experience. For example, converting between RTMP and WebRTC causes problems in both directions because one is lossless while the other is lossy.

  2. Switching between protocols in the middle of a pipeline requires transmuxing and possibly transcoding. We want a single protocol so the data is not modified as it flows from broadcaster to viewer. For example, this is a problem when converting WebRTC ingest to HLS distribution (a rough latency-budget sketch follows after this list).
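
To make point 1 concrete, here is a rough latency-budget sketch in Go. Every stage name and millisecond figure below is an assumption invented for illustration (not a measurement from this thread or any real deployment); the only point is that a protocol conversion mid-pipeline adds a transmux/transcode stage and typically forces a larger playback buffer, all of which counts against the glass-to-glass budget.

```go
// Illustrative only: a toy glass-to-glass latency budget. All stage names
// and millisecond figures are made up for this sketch.
package main

import "fmt"

type stage struct {
	name      string
	latencyMs int
}

// total sums the per-stage latency of a pipeline.
func total(stages []stage) int {
	sum := 0
	for _, s := range stages {
		sum += s.latencyMs
	}
	return sum
}

func main() {
	// A single protocol carries the media unchanged from broadcaster to viewer.
	singleProtocol := []stage{
		{"encode", 50},
		{"ingest (one protocol)", 100},
		{"relay", 100},
		{"playback buffer", 250},
		{"decode", 50},
	}

	// Switching protocols mid-pipeline adds a transmux/transcode hop, and a
	// segment-based distribution protocol typically implies a larger buffer.
	converted := []stage{
		{"encode", 50},
		{"ingest (protocol A)", 100},
		{"transmux/transcode A->B", 500},
		{"distribution (protocol B)", 100},
		{"playback buffer", 2000},
		{"decode", 50},
	}

	fmt.Printf("single protocol: %d ms glass-to-glass\n", total(singleProtocol))
	fmt.Printf("converted path:  %d ms glass-to-glass\n", total(converted))
}
```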

@fiestajetsam
Collaborator

@kixelated in television, the broadcast chain is highly heterogeneous with regard to protocols, sources, and movements at the borders. Once a media stream is in the chain past a boundary (typically MCR), it's normalised down into uncompressed form in a limited number of muxings, but then compressed and packaged in a variety of ways for syndication to all outputs, both for distribution and for monitoring. Thus, it's infeasible to consider "glass to glass" except in incredibly simple cases, and transmuxing in these chains is also unavoidable; as such, there will never be "one to rule them all".

Realistically, the work we do here should be complementary to mixed deployments in the right ways.

@acbegen

acbegen commented Mar 12, 2022

So, I'm still catching up on this draft, so I'm speaking from a point of ignorance about what is actually written in the document. I think @vasilvv has a point: if the problems look similar enough, one wants one solution. At the same time, I think it might be best not to jump to a conclusion just yet. Build up the arguments for what is the same and what differs in usage or requirements. Then, if they make sense to combine in one protocol, do it. If they turn out to be hard to combine when actually doing protocol design, then it might be multiple protocols.

That is the natural and right way of doing this. Most people believe latency is a tradeoff against quality, and that is often true. With relaxed latency requirements, there will be more tools available to improve quality (such as more efficient encoding, retransmissions, FEC, multi-path transport, or interleaving). With a different latency target, one should be able to enable/disable the tools in use. So, by definition, this could be one giant protocol that has all the bells and whistles, which can be enabled on demand. Or, it could be two or more smaller protocols for the target use cases. There is no need to make that decision up front.
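
As a minimal sketch of the "enable/disable tools per latency target" idea above, the Go snippet below toggles hypothetical reliability tools based on a latency budget. The profile fields, thresholds, and tool choices are assumptions made for illustration only; nothing here is defined by any MOQ document.

```go
// A sketch of "one protocol, tools enabled on demand". The profile fields,
// latency thresholds, and which tools get enabled are illustrative assumptions.
package main

import "fmt"

type Profile struct {
	TargetLatencyMs int
	Retransmission  bool
	FEC             bool
	Multipath       bool
	Interleaving    bool
}

// profileFor enables optional quality/reliability tools based on how much
// latency budget is available.
func profileFor(targetMs int) Profile {
	p := Profile{TargetLatencyMs: targetMs}
	switch {
	case targetMs < 500: // interactive: no time to wait for a retransmission
		p.FEC = true
	case targetMs < 5000: // low-latency live: a round trip or two is affordable
		p.Retransmission = true
		p.FEC = true
	default: // relaxed live: spend the budget freely on quality
		p.Retransmission = true
		p.Multipath = true
		p.Interleaving = true
	}
	return p
}

func main() {
	for _, t := range []int{200, 2000, 30000} {
		fmt.Printf("%5d ms target: %+v\n", t, profileFor(t))
	}
}
```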

@fiestajetsam added the "requirement (Impacts Requirements section)" label on Apr 8, 2022
@SpencerDawkins
Collaborator

I'm leaving this discussion for reference during work on requirements, now that MOQ has been chartered.

@kixelated
Author

I tried to tackle this in the motivation section of the latest warp draft.

@SpencerDawkins added the "Deferred for now (We need other issues to be resolved first)" label on Jan 27, 2023
@SpencerDawkins
Collaborator

@SpencerDawkins observes that if he's current on where we are now, we can simplify the description of Live Media in subsection 3.3.

@SpencerDawkins added the "AfterNextMeeting (Not ready to propose text until after next meeting)" label and removed the "Deferred for now (We need other issues to be resolved first)" label on Jun 27, 2023
@SpencerDawkins
Collaborator

@SpencerDawkins and @fiestajetsam think there's enough text in this discussion to produce a PR, after IETF 117.

@SpencerDawkins added the "IETF 118 (Target PR text for IETF 118)" label and removed the "AfterNextMeeting (Not ready to propose text until after next meeting)" label on Sep 28, 2023
@SpencerDawkins
Collaborator

@SpencerDawkins officially notices that it's "after IETF 117", so he should be producing a PR. 😜

@SpencerDawkins
Collaborator

@SpencerDawkins will produce a PR for the next draft revision.
