
Synchronize audio/video with data in WebRTC #133

Open

Description

@tidoust

Some potential use cases for WebRTC presented during the last WebRTC WG F2F would require a mechanism to synchronize audio/video with data.

One possible approach would be to use Real-Time Text to carry data timestamped against the audio/video, but the question of how that stream is synchronized with the audio/video streams remains to some extent (depending on the use case and the precision required).

Some questions to address:

  1. What is the browser's role in the synchronization? Should it sync things up on its own? Or is it enough to expose some information and knobs, such as the relationship to the performance.now() clock and the average latency of the processing pipeline, as done in the Web Audio API? (A short sketch of what the Web Audio API exposes follows this list.)
  2. If processing of the data needs to be done by the app, what does it mean to synchronize the streams? Would triggering an event at the right time be enough when high precision is required? Or should the app rather have a way to monitor the information one way or another, e.g. through a worklet, or a processing mechanism close to the rendering pipeline such as requestAnimationFrame? (A rough sketch of such an app-level loop also follows this list.)
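
Regarding the first question, here is a minimal sketch (in TypeScript, not part of the original discussion) of the information the Web Audio API already exposes: AudioContext.getOutputTimestamp() correlates the audio clock with performance.now(), and baseLatency/outputLatency estimate the pipeline delay. Comparable knobs could conceivably be exposed for incoming WebRTC media.

```ts
// Sketch: information the Web Audio API exposes about its clock and latency.
const audioContext = new AudioContext();

function logAudioClockInfo(): void {
  // getOutputTimestamp() maps the audio context clock (in seconds)
  // to the performance.now() clock (in milliseconds).
  const { contextTime, performanceTime } = audioContext.getOutputTimestamp();

  // baseLatency and outputLatency estimate the processing/output delay in seconds.
  console.log(
    `contextTime=${contextTime}s <-> performanceTime=${performanceTime}ms, ` +
      `baseLatency=${audioContext.baseLatency}s, outputLatency=${audioContext.outputLatency}s`
  );
}

logAudioClockInfo();
```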
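
Regarding the second question, the sketch below (hypothetical, not from the issue) illustrates the kind of app-level monitoring loop that requestAnimationFrame would allow. It assumes each data channel message carries a `timestamp` field set by the sender, that messages arrive in timestamp order, and that the app has estimated, by some out-of-band means, the offset mapping that sender clock to the local performance.now() clock.

```ts
// Hypothetical sketch of app-level synchronization via requestAnimationFrame.
interface TimedMessage {
  timestamp: number; // sender capture time, in the sender's clock (assumption)
  payload: string;
}

// Assumed to exist: a negotiated RTCDataChannel carrying timestamped messages.
declare const dataChannel: RTCDataChannel;

const pending: TimedMessage[] = [];
// Offset from the sender's clock to local performance.now(), estimated elsewhere
// (e.g. through an application-level handshake); assumed to be known here.
let senderToLocalOffsetMs = 0;

dataChannel.onmessage = (event: MessageEvent) => {
  pending.push(JSON.parse(event.data) as TimedMessage);
};

function renderLoop(now: DOMHighResTimeStamp): void {
  // Surface every message whose estimated local playout time has been reached.
  while (pending.length > 0 && pending[0].timestamp + senderToLocalOffsetMs <= now) {
    const message = pending.shift()!;
    // Render the message alongside the currently displayed video frame here.
    console.log(`rendering data for t=${message.timestamp}`, message.payload);
  }
  requestAnimationFrame(renderLoop);
}

requestAnimationFrame(renderLoop);
```

Whether releasing data at the "right time" in such a loop is precise enough, or whether a worklet closer to the media pipeline is needed, is exactly the open question of point 2.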

It is interesting to note the relationship between this need and similar needs expressed over the years by media companies. For instance, the Timing on the Web effort (#46), which was not triggered by WebRTC use cases, would help address the first point. The second point was also recently discussed at length in the Media & Entertainment IG (see w3c/media-and-entertainment#4).
