
Add library specific handling of tensors in logits processors #1498

Merged

Conversation

RobinPicard
Contributor

@RobinPicard RobinPicard commented Mar 15, 2025

This PR addresses issue #1445

The PR creates TensorHandler classes for the various tensor libraries. The logits processors use these classes instead of manipulating the input_ids/logits tensors directly. This way, users are not required to install every tensor library supported by outlines, and tensors can be handled in their native types without being converted into torch tensors. We also modify the local models to have a tensor_library_name attribute that indicates to the logits processor which TensorHandler implementation to use.

Thanks to these changes, we can remove the mandatory dependencies on torch and numpy.
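To make the design concrete, here is a minimal sketch of the pattern (the class and method names are illustrative, not the actual outlines API): each handler wraps the few tensor operations a logits processor needs, and imports its library lazily so no tensor library is a hard dependency.

from abc import ABC, abstractmethod


class TensorHandler(ABC):
    """Abstracts the tensor operations a logits processor needs,
    so the processor never touches a library-specific type directly."""

    @abstractmethod
    def full_like(self, tensor, fill_value):
        """Return a tensor shaped like `tensor`, filled with `fill_value`."""

    @abstractmethod
    def argsort_descending(self, tensor):
        """Return the indices that sort `tensor` in descending order."""


class NumpyTensorHandler(TensorHandler):
    def full_like(self, tensor, fill_value):
        import numpy as np  # lazy import: numpy only has to be installed if used

        return np.full_like(tensor, fill_value)

    def argsort_descending(self, tensor):
        import numpy as np

        return np.argsort(-tensor)


class TorchTensorHandler(TensorHandler):
    def full_like(self, tensor, fill_value):
        import torch  # lazy import: torch is no longer a mandatory dependency

        return torch.full_like(tensor, fill_value)

    def argsort_descending(self, tensor):
        import torch

        return torch.argsort(tensor, descending=True)


HANDLERS = {"numpy": NumpyTensorHandler, "torch": TorchTensorHandler}


def get_tensor_handler(tensor_library_name: str) -> TensorHandler:
    """Select the handler matching a model's tensor_library_name attribute."""
    return HANDLERS[tensor_library_name]()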

@RobinPicard RobinPicard force-pushed the remove_mandatory_dependency_torch branch 2 times, most recently from 6304811 to 6217d08 Compare March 16, 2025 14:36
@RobinPicard RobinPicard marked this pull request as ready for review March 16, 2025 15:02
@RobinPicard RobinPicard requested a review from rlouf March 16, 2025 15:03
@RobinPicard RobinPicard self-assigned this Mar 16, 2025
@RobinPicard RobinPicard added this to the 1.0 milestone Mar 16, 2025
@RobinPicard RobinPicard linked an issue Mar 16, 2025 that may be closed by this pull request
@rlouf
Member

rlouf commented Mar 17, 2025

I was originally skeptical of the tensor handler approach, but it's growing on me. I would use the name tensor_adapter though.

Thanks to these changes, we can remove the mandatory dependencies on torch and numpy.

🎉

@RobinPicard RobinPicard force-pushed the remove_mandatory_dependency_torch branch from 6217d08 to 8762761 Compare March 18, 2025 08:17
@RobinPicard
Contributor Author

I changed the name from tensor_handler to tensor_adapter @rlouf

@rlouf rlouf force-pushed the remove_mandatory_dependency_torch branch from 8762761 to 733eeed Compare March 18, 2025 13:19
@RobinPicard RobinPicard force-pushed the remove_mandatory_dependency_torch branch 5 times, most recently from 71dc231 to 6b250b4 Compare March 19, 2025 20:56

-    def __init__(self, model: "Llama"):
+    def __init__(self, model: "Llama", tensor_library_name: Optional[str] = None):
Member

Isn't tensor_library_name for llama-cpp-python necessarily numpy?


     def __init__(
         self,
         model: "nn.Module",
         tokenizer: "PreTrainedTokenizer",
+        tensor_library_name: Optional[str] = None,
Member

Isn't it necessarily mlx?

@@ -163,11 +163,13 @@ def format_output_type(self, output_type):

 class Transformers(Model):
Member

We may be able to do this automatically. It looks like all models implemented with JAX inherit from FlaxPreTrainedModel.

@@ -139,8 +144,8 @@ def generate_stream(self, model_input, output_type, **inference_kwargs):
     )


-def from_vllm(model: "LLM") -> VLLM:
-    return VLLM(model)
+def from_vllm(model: "LLM", tensor_library_name: Optional[str] = None) -> VLLM:
Member

Isn't it necessarily torch? I don't think vLLM supports any other backend.

@@ -40,7 +39,6 @@ dependencies = [
     "typing_extensions",
     "iso3166",
     "airportsdata",
-    "torch",
Member

🥳

Member

@rlouf rlouf left a comment

Looks good overall. My main comment is that I don't think any library other than transformers supports more than one backend, and even for transformers there may be a way to determine the backend automatically.

@RobinPicard RobinPicard force-pushed the remove_mandatory_dependency_torch branch from 6b250b4 to 407eddf Compare March 23, 2025 22:22
@RobinPicard
Contributor Author

Yes, you're right. I completely removed the option of specifying the tensor library when initializing a model, as there's no case in which it's necessary. As a result, I could simplify the logic around the tensor_library_name attribute (it's just a plain attribute without a distinct default value now).
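Concretely (an illustrative sketch, not the exact code in the PR), each local model now hardcodes the attribute, since the backend is determined by the inference library itself:

class LlamaCpp(Model):
    tensor_library_name = "numpy"  # llama-cpp-python exposes logits as numpy arrays


class MLXLM(Model):
    tensor_library_name = "mlx"


class VLLM(Model):
    tensor_library_name = "torch"  # vLLM only supports a torch backend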

@RobinPicard RobinPicard force-pushed the remove_mandatory_dependency_torch branch from 407eddf to 2aaa146 Compare March 23, 2025 22:34
@@ -185,11 +185,18 @@ def __init__(
         the `transformers` API for tokenizers.

         """
+        from transformers import FlaxPreTrainedModel
Member

It looks like this requires flax to be installed :/ Let's see if we can do this another way.

Contributor Author

@RobinPicard RobinPicard Mar 23, 2025

Can't we just add flax to the test dependencies? We already have jax there.

Member

It will still fail whenever users try to initialise a transformers model. We could use try/except for the checks (also add TFPreTrainedModel).
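For illustration, such guarded checks might look like this (FlaxPreTrainedModel and TFPreTrainedModel are real transformers classes, but the exact detection code in the PR may differ):

def infer_tensor_library_name(model) -> str:
    """Guess the transformers backend from the model's base class,
    guarding each import so the optional libraries stay optional."""
    try:
        from transformers import FlaxPreTrainedModel

        if isinstance(model, FlaxPreTrainedModel):
            return "jax"
    except ImportError:  # flax is not installed
        pass
    try:
        from transformers import TFPreTrainedModel

        if isinstance(model, TFPreTrainedModel):
            return "tensorflow"
    except ImportError:  # tensorflow is not installed
        pass
    return "torch"  # the default transformers backend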

Contributor Author

Ah yes, right

Contributor Author

I've added a TensorAdapter for tensorflow

@RobinPicard RobinPicard force-pushed the remove_mandatory_dependency_torch branch from 2aaa146 to 1d00c9a Compare March 23, 2025 22:46
@RobinPicard RobinPicard force-pushed the remove_mandatory_dependency_torch branch 2 times, most recently from 3318a42 to 0951993 Compare March 24, 2025 09:45
@RobinPicard RobinPicard force-pushed the remove_mandatory_dependency_torch branch 2 times, most recently from 60f96bb to ae46cff Compare March 24, 2025 14:30
@RobinPicard RobinPicard requested a review from rlouf March 24, 2025 14:51
@rlouf
Member

rlouf commented Mar 25, 2025

Do we have tests that initialize a Transformers instance for each of the backends transformers supports? If not, that would be why coverage is failing.

@RobinPicard RobinPicard force-pushed the remove_mandatory_dependency_torch branch from ae46cff to ed9b572 Compare March 25, 2025 08:07
@RobinPicard
Contributor Author

RobinPicard commented Mar 25, 2025

Yes. The issue was that there are lines in the Transformers model that are executed only if some libraries are not available. I added a coverage exclusion rule for them.
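For reference, a rule of that kind is typically configured through coverage.py (a sketch using the exclude_also option, available since coverage 7.2; the exact rule added in the PR may differ):

# pyproject.toml
[tool.coverage.report]
exclude_also = [
    # branches only taken when an optional tensor library is not installed
    "except ImportError:",
]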

@RobinPicard RobinPicard merged commit 246dd80 into dottxt-ai:v1.0 Mar 25, 2025
5 checks passed
Successfully merging this pull request may close these issues:

Create library-specific versions of logits processors