
[KNOWN BUG] Broken Support for TextOnly Models from torchtune  #1430

Open
@Jack-Khuu

Description

🐛 Describe the bug

Originally added in #1123, leveraging torchtune model definitions (rather than hosting model definitions locally) is a direction torchchat is gradually moving towards, but support has broken over time through pin bumps and inactivity.

For example, a command like `python3 torchchat.py generate llama3.1-tune --prompt "write me a story about a boy and his bear"` should load the model definition via torchtune and hand it back to torchchat for inference, but it currently errors out during model construction due to outdated function signatures.
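For context, a minimal sketch of that intended flow, assuming torchtune's zero-arg `llama3_1_8b` builder; the checkpoint path and the hand-off into torchchat's generation loop are placeholders, not torchchat's actual wiring:

```python
# Sketch of the torchtune -> torchchat hand-off (illustrative, not torchchat code):
# torchtune owns the model definition; torchchat only loads weights and runs inference.
import torch
from torchtune.models.llama3_1 import llama3_1_8b  # torchtune component builder

model = llama3_1_8b()  # torchtune's TransformerDecoder for Llama 3.1 8B

# Placeholder path; torchchat resolves the real checkpoint from the model alias.
# Depending on the checkpoint format, a key conversion (e.g. torchtune's
# convert_weights helpers) may be needed before loading.
state_dict = torch.load("/path/to/model.pth", mmap=True, weights_only=True)
model.load_state_dict(state_dict, assign=True)
model.eval()  # torchchat would then drive its generate loop with `model`
```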


Task: re-enable the ability to perform inference with `python3 torchchat.py generate llama3.1-tune --prompt "write me a story about a boy and his bear"`.

I expect the fix to be iterative: trace the signature changes in both torchchat and torchtune since this path last worked, and update the call sites to match.
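One low-tech way to do that tracing, sketched with Python's `inspect` module (the builder below is just an example; any torchtune symbol torchchat calls during construction can be checked the same way):

```python
# Compare what the pinned torchtune actually exposes against the (possibly
# outdated) call sites in torchchat's model construction code.
import inspect
from torchtune.models.llama3_1 import llama3_1_8b

print(inspect.signature(llama3_1_8b))
# Repeat for each torchtune function torchchat calls, then update torchchat's
# call sites to match the current signatures.
```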


A good gauge of this being fixed is that a change like `69da96c` should be sufficient to support a new torchtune model in torchchat.

Versions

Current main hash: `90749d2`


Metadata


    Labels

Known Gaps: These are known Gaps/Issues/Bug items in torchchat
bug: Something isn't working
torchtune: Issue/PR related to torchtune components
triaged: This issue has been looked at by a team member, and triaged and prioritized into an appropriate module
