Conversation

@dvrogozh
Collaborator

@dvrogozh dvrogozh commented Jan 6, 2026

This commit adds support for autoloading out-of-tree plugins using the Python package entry-point specification:

* https://packaging.python.org/en/latest/specifications/entry-points/#entry-points
* https://packaging.python.org/en/latest/guides/creating-and-discovering-plugins/#using-package-metadata

Out-of-tree plugins must register entry points under the `torchcodec.backends` group:

```
[project.entry-points.'torchcodec.backends']
device_backend = 'torchcodec_plugin:load_plugin'
```

TorchCodec will automatically load any such plugins it discovers. Loading can be explicitly suppressed with the `TORCHCODEC_DEVICE_BACKEND_AUTOLOAD=0` environment variable.

The same approach is used to load PyTorch device backends. See:

* pytorch/pytorch#127074

Here is a PR for the XPU plugin which will enable autoloading once this torchcodec PR gets merged:

CC: @scotts @NicolasHug

Signed-off-by: Dmitry Rogozhkin <[email protected]>
@meta-cla bot added the CLA Signed label (managed by the Meta Open Source bot) Jan 6, 2026
dvrogozh added a commit to dvrogozh/torchcodec-xpu that referenced this pull request Jan 6, 2026
@NicolasHug
Contributor

Thanks for the PR @dvrogozh . Just to make sure I understand correctly, the main goal of this PR is to go from:

```
import torchcodec
import some_torchcodec_plugin
```

to:

```
import torchcodec  # automatically loads some_torchcodec_plugin if it was installed
```

Is my understanding correct?

@dvrogozh
Collaborator Author

@NicolasHug, yes, your understanding is correct.

@NicolasHug
Contributor

Thanks for confirming @dvrogozh .

I'm still in the process of forming an opinion on this, so please bear with me. My first reaction was pretty much the same as Ed's (pytorch/pytorch#122468 (comment)): an extra import isn't the end of the world.

I get the argument that some scripts are device agnostic and that the device is passed as a CLI parameter, in which case having to do an extra import is a problem. Whether that's really an issue in practice for torchcodec, I don't know yet.

Quick Qs which will help me form a better opinion:

  1. what's the current way for users to use the Intel TorchCodec extension? They'd have to pass `VideoDecoder(..., device="xpu")` or e.g. `device=torch.xpu.device(0)`, right?
  2. correct me if I'm wrong: if a user is leaving the device to its default, i.e. VideoDecoder(..., device=None), then the existence of the XPU TorchCodec extension has zero effect, right (whether it's imported manually or automatically through the plugin mechanism)?

Basically, I just want to make sure that users will be using a new out-of-tree backend only when they explicitly request it, either by passing device=SomeXPUDevice explicitly, or by passing a CLI arg like device=xpu:0. What I'm trying to avoid are non-explicit behaviors.

@dvrogozh
Copy link
Collaborator Author

> what's the current way for users to use the Intel TorchCodec extension? They'd have to pass `VideoDecoder(..., device="xpu")` or e.g. `device=torch.xpu.device(0)`, right?

Yes, that is right. Users have to pass the device explicitly. For example, that's what Hugging Face Transformers does:
https://github.com/huggingface/transformers/blob/de306e8e14672dd8392b4bd344054a6a18de8613/src/transformers/video_utils.py#L601

> correct me if I'm wrong: if a user is leaving the device to its default, i.e. `VideoDecoder(..., device=None)`, then the existence of the XPU TorchCodec extension has zero effect, right (whether it's imported manually or automatically through the plugin mechanism)?

As of today, TorchCodec handles `device=None` by calling `torch.get_default_device()`, which defaults to `cpu`. Thus, registering `xpu` (whether through an explicit import or implicitly via autoloading) goes unnoticed by default and has zero effect:

```
if device is None:
    device = str(torch.get_default_device())
```

A user can explicitly override the default device by calling `torch.set_default_device("xpu")`. After that, TorchCodec will start using the registered `xpu` device (whether it was registered with TorchCodec implicitly or explicitly).
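The behavior described above can be sketched with a tiny stand-alone registry (names are illustrative, not the torchcodec internals): a backend registered by a plugin, whether imported manually or autoloaded, stays inert until its device is either requested explicitly or made the default.

```python
# Stand-in for torch.get_default_device(); "cpu" unless overridden.
_default_device = "cpu"

# Device type -> backend. The core package provides the CPU backend.
_backends = {"cpu": "core-cpu"}


def register_backend(device_type, backend):
    """What a plugin's load_plugin() hook would do on import/autoload."""
    _backends[device_type] = backend


def pick_backend(device=None):
    """Mirror of the snippet above: fall back to the default device."""
    if device is None:
        device = _default_device
    return _backends[device.split(":")[0]]


register_backend("xpu", "plugin-xpu")  # plugin loaded, manually or automatically
print(pick_backend())                  # -> core-cpu: default device untouched
print(pick_backend("xpu:0"))           # -> plugin-xpu: only when requested
```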

