
Conversation

@crutcher
Contributor

This is a first-pass sketch at threading introspection operations for building a B::Device => capability mapping.

This is mostly trivial, but I don't know what to do about the Router case.

@nathanielsimard
Member

I think it would be better to have a B::is_dtype_supported(device, dtype) -> bool. It avoids an allocation and ties the dtype check to a specific device, since a backend can have multiple devices with different dtype support.
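As a rough sketch of what that could look like (a simplified stand-in for the real backend trait; the method name, the trait shown here, and the use of burn_tensor::DType are assumptions for illustration, not the actual API):

use burn_tensor::DType;

// Simplified stand-in trait to illustrate the shape of the proposed query.
pub trait Backend {
    type Device;

    // Reports whether `dtype` is usable on `device`. Returning a plain bool
    // avoids allocating a list of supported dtypes, and taking a device lets
    // backends whose devices differ in dtype support answer per device.
    fn is_dtype_supported(device: &Self::Device, dtype: DType) -> bool;
}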

@laggui
Member


[cross-posting my earlier Discord comment here]

Hmmm I don't think this should be defined for the FloatTensorOps trait. Also, in line with the original tracking issue, I think we should have something like:

let is_supported = B::supports_dtype(&device, dtype);

And if we really want a matrix of supported dtypes, we would need something like

pub struct BackendDTypes {
    pub float: Vec<DType>,
    pub int: Vec<DType>,
    pub bool: Vec<DType>,
}

fn supported_dtypes(device: &B::Device) -> BackendDTypes;

but not sure if that is desirable.
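If the matrix form were adopted, a caller-side check could look roughly like this (the helper name and the F16 membership test are illustrative; it assumes the BackendDTypes struct above and burn_tensor::DType):

use burn_tensor::DType;

// Hypothetical helper built on the matrix form: given the full table of
// supported dtypes for a device (e.g. the result of a supported_dtypes(device)
// call), test membership instead of asking a boolean question per dtype.
fn can_use_f16(supported: &BackendDTypes) -> bool {
    supported.float.contains(&DType::F16)
}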
