
Any plan to expand device support for bitsandbytes? #961

Closed
@SlightwindSec

Description


Feature request

Thank you for your work on bitsandbytes; it has greatly improved my workflow.
I have noticed that bitsandbytes is tightly coupled to CUDA at both the C++ and Python layers. Do you plan to abstract over the device rather than assuming CUDA, similar to what PyTorch does, so that bitsandbytes can support a broader range of hardware? A rough sketch of what we have in mind follows below.
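
To illustrate what we mean by "abstracting the device", here is a minimal sketch of a backend registry keyed by `torch.device.type`. The names (`register_op`, `dispatch`, `quantize_4bit`) are hypothetical and not part of the current bitsandbytes API; this is only meant to show the dispatch pattern, not a concrete proposal for the implementation.

```python
# Hypothetical sketch of a device-dispatch layer (not existing bitsandbytes API):
# each backend registers its own implementation of an op, and calls are routed
# based on the tensor's device type, so CUDA, NPU, MPS, etc. can coexist.
from typing import Callable, Dict

import torch

_BACKENDS: Dict[str, Dict[str, Callable]] = {}


def register_op(device_type: str, op_name: str):
    """Register an implementation of `op_name` for a given torch device type."""
    def decorator(fn: Callable) -> Callable:
        _BACKENDS.setdefault(device_type, {})[op_name] = fn
        return fn
    return decorator


def dispatch(op_name: str, tensor: torch.Tensor, *args, **kwargs):
    """Route the call to the backend matching tensor.device.type ('cuda', 'npu', ...)."""
    backend = _BACKENDS.get(tensor.device.type)
    if backend is None or op_name not in backend:
        raise NotImplementedError(
            f"{op_name} is not implemented for device '{tensor.device.type}'"
        )
    return backend[op_name](tensor, *args, **kwargs)


@register_op("cuda", "quantize_4bit")
def _quantize_4bit_cuda(tensor: torch.Tensor):
    ...  # would call into the existing CUDA kernels


@register_op("npu", "quantize_4bit")
def _quantize_4bit_npu(tensor: torch.Tensor):
    ...  # Ascend NPU implementation, contributed separately
```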

Motivation

A growing range of devices now accelerate neural network computation, such as Apple silicon, AMD GPUs, and the Ascend NPU, offering options beyond the widely used NVIDIA GPUs.
Although I understand that some NVIDIA-specific optimizations may not deliver equivalent performance on other platforms, making the core features of bitsandbytes available on more of them would make a significant difference.

Contribution

Our team is interested in bringing bitsandbytes support to Ascend NPU devices, and we are happy to develop and submit the necessary code via a PR.
Is there an existing plan for expanding device support that we should be aware of? Thank you!
