
[wg/media] Media Working Group Charter #504

Open
@tidoust

New charter proposal, reviewers please take note.

Charter Review

This is an existing WG recharter

Communities suggested for outreach

None in particular. Through group participation, agenda topics, and joint meetings, the Media WG has regular exchanges with the WebRTC Working Group, the Timed Text Working Group and the Media & Entertainment Interest Group (which, in turn, exchanges on media topics with a number of external organizations).

Known or potential areas of concern

Where would charter proponents like to see issues raised? On the w3c/charter-media-wg issue tracker

Anything else we should think about as we review?

The draft charter mentions two additional issues in the description of the work planned on the Encrypted Media Extensions deliverable. These are more "maintenance" than "new features", but it seemed worth making them explicit given that the scope of work on EME remains restricted.

The WebCodecs section also describes planned work to give applications the ability to use an HTMLMediaElement and buffering mechanisms defined in Media Source Extensions (MSE) to achieve adaptive playback of media that originates from WebCodecs structures, including creating a way to represent encrypted audio/video chunks within WebCodecs so that playback can leverage EME.

By itself, the ability to represent encrypted audio/video chunks in WebCodecs does not extend EME in any way (and EME will not need to be updated at all for that). In a typical scenario, an encrypted encoded media chunk (EncodedAudioChunk or EncodedVideoChunk) will be passed to MSE and connected to a <video> element. Playback of the encrypted content will then be handled by EME, as for any other encrypted content that reaches a <video> element.
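
For reviewers less familiar with how these pieces fit together, here is a minimal sketch of that typical scenario. The EME and MSE setup uses existing APIs; the `appendEncodedChunks()` step is a hypothetical method name standing in for the planned ability to buffer WebCodecs chunks through MSE, and the key system and codec strings are mere placeholders.

```ts
// Sketch only: buffering WebCodecs chunks in MSE is planned work, not a
// shipped API. The appendEncodedChunks() step below is hypothetical.
const video = document.querySelector('video') as HTMLVideoElement;

// Standard EME setup: the <video> element owns decryption and playback of
// protected content (Clear Key used here as a placeholder key system).
const access = await navigator.requestMediaKeySystemAccess('org.w3.clearkey', [{
  initDataTypes: ['keyids'],
  videoCapabilities: [{ contentType: 'video/mp4; codecs="avc1.42E01E"' }],
}]);
const mediaKeys = await access.createMediaKeys();
await video.setMediaKeys(mediaKeys);

video.addEventListener('encrypted', async (event) => {
  // License exchange proceeds exactly as for any other encrypted content.
  const session = mediaKeys.createSession();
  if (event.initData) {
    await session.generateRequest(event.initDataType, event.initData);
  }
});

// Standard MSE setup.
const mediaSource = new MediaSource();
video.src = URL.createObjectURL(mediaSource);
await new Promise((resolve) =>
  mediaSource.addEventListener('sourceopen', resolve, { once: true }));
const sourceBuffer = mediaSource.addSourceBuffer('video/mp4; codecs="avc1.42E01E"');

// Hypothetical step: hand an encrypted EncodedVideoChunk to MSE for buffering.
// Today MSE only accepts containerized byte streams via appendBuffer().
declare const encryptedChunk: EncodedVideoChunk; // produced by the app
if ('appendEncodedChunks' in sourceBuffer) {
  (sourceBuffer as any).appendEncodedChunks(encryptedChunk);
}
// video.play() once enough data has been buffered.
```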

By definition, encrypted media chunks cannot be decoded as-is into a raw VideoFrame, although the spec does not preclude creating an "opaque" VideoFrame. If implementations supported this, such a VideoFrame could in turn be used in other scenarios. Here is an analysis of these other scenarios to help reviewers grasp the ins and outs of the proposal:

  1. A VideoFrame can be injected into a <video> element through a VideoTrackGenerator. With encrypted chunks, this would achieve the same thing as connecting the content to a <video> element through MSE+EME (apps would have to handle the buffering themselves, but that's orthogonal to encryption). See the sketch after this list.
  2. A VideoFrame can be directly injected into a <canvas> element. Injecting an encrypted VideoFrame into a <canvas> will not work out of the box, since that would de facto reveal the bytes. The concept of a "tainted canvas", used for cross-origin content, could perhaps be extended to encrypted VideoFrames. For that to work, EME would also need to be integrated with <canvas>. If all that happens, this would make it easier to apply content protection to still images, but this is already doable in practice; see the discussion in Support for content protection webcodecs#41 (comment)
  3. A VideoFrame can be imported as an external texture into WebGL and WebGPU pipelines. As with <canvas>, injecting an encrypted VideoFrame will simply not work today because these pipelines have no provisions for encrypted content; adding them would require significant work on the APIs and implementations. Should WebGL and WebGPU decide to add support for content-protected textures, they would likely want to leverage encrypted structures that map to textures more directly than VideoFrame does in any case.
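
To illustrate the mechanics of scenario 1, here is a minimal sketch of the VideoTrackGenerator path, shown with ordinary decoded frames since the "opaque" encrypted VideoFrame discussed above has no API surface today. Note that VideoTrackGenerator (from mediacapture-transform) is only exposed in dedicated workers per the spec, so availability varies across engines; the codec string and chunk source are placeholders.

```ts
// Runs in a dedicated worker per mediacapture-transform; type declarations
// for VideoTrackGenerator may not be available in lib.dom yet.
declare class VideoTrackGenerator {
  readonly track: MediaStreamTrack;
  readonly writable: WritableStream<VideoFrame>;
}

const generator = new VideoTrackGenerator();
const writer = generator.writable.getWriter();

// Decode chunks with WebCodecs and push each VideoFrame into the generator.
const decoder = new VideoDecoder({
  output: (frame) => { void writer.write(frame); }, // frame is handed to the track
  error: (e) => console.error(e),
});
decoder.configure({ codec: 'vp8' }); // placeholder codec

declare const chunks: EncodedVideoChunk[]; // app-provided, demuxed chunks
for (const chunk of chunks) {
  decoder.decode(chunk);
}

// The generator's track is then transferred to the page and attached to a
// <video> element, e.g.:
//   video.srcObject = new MediaStream([generator.track]);
// The app is responsible for buffering/pacing, which is orthogonal to encryption.
```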

Cc @chrisn, @marcoscaceres.
