[KNOWN BUG] Broken Support for TextOnly Models from torchtune #1430
Description
🐛 Describe the bug
Originally added in #1123, support for leveraging torchtune model definitions (rather than hosting model definitions locally) is a direction torchchat is gradually moving towards, but it has been lost to pin bumps and inactivity.
For example, a command like:

```
python3 torchchat.py generate llama3.1-tune --prompt "write me a story about a boy and his bear"
```

should load the model definition using torchtune and then pass it back to torchchat for inference, but it currently errors out during model construction due to outdated function signatures.
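A minimal sketch of the failure mode, assuming torchtune's configurable `llama3_1` component builder; the keyword arguments and values below are illustrative assumptions, not torchchat's actual construction code:

```python
# Sketch only: assumes torchtune exposes the llama3_1 component builder;
# the kwargs below are illustrative, not the ones torchchat actually passes.
from torchtune.models.llama3_1 import llama3_1

# If a pin bump renamed or removed any of these parameters, construction
# fails immediately with something like:
#   TypeError: llama3_1() got an unexpected keyword argument '...'
model = llama3_1(
    vocab_size=128_256,
    num_layers=32,
    num_heads=32,
    num_kv_heads=8,
    embed_dim=4096,
    max_seq_len=8192,
)
```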
Task: Re-enable the ability to perform inference with:

```
python3 torchchat.py generate llama3.1-tune --prompt "write me a story about a boy and his bear"
```
I imagine the process being iterative, tracing signature changes across both torchchat and torchtune; one way to make that tracing mechanical is sketched below.
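A small stdlib-only helper for that loop (the name `find_rejected_kwargs` is mine, not torchchat's): compare the kwargs torchchat passes against the current signature of whichever torchtune builder it calls.

```python
import inspect

def find_rejected_kwargs(fn, kwargs):
    """Return the keyword arguments that `fn` no longer accepts.

    Run this against the torchtune builder torchchat calls, with the
    kwargs torchchat passes, after each pin bump to spot renamed or
    removed parameters before they surface as TypeErrors.
    """
    params = inspect.signature(fn).parameters
    # A **kwargs parameter swallows everything, so nothing is rejected.
    if any(p.kind is inspect.Parameter.VAR_KEYWORD for p in params.values()):
        return []
    return [name for name in kwargs if name not in params]
```

For example, `find_rejected_kwargs(llama3_1, {"embed_dim": 4096, "dim": 4096})` would flag whichever spelling the pinned torchtune release no longer takes.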
A good gauge of this being fixed is that a change like 69da96c should be sufficient to support a new torchtune model in torchchat.
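In spirit, hooking up a new torchtune model should then reduce to something like the following sketch; the registry name `TUNE_MODEL_BUILDERS` and the helper are hypothetical, not torchchat's actual API:

```python
# Hypothetical sketch of the one-commit hookup the gauge imagines:
# map a torchchat alias to a torchtune builder. `TUNE_MODEL_BUILDERS`
# is an illustrative name, not a real torchchat registry.
from torchtune.models.llama3_1 import llama3_1_8b

TUNE_MODEL_BUILDERS = {
    "llama3.1-tune": llama3_1_8b,  # builder returns a torchtune TransformerDecoder
}

def build_tune_model(alias: str):
    # Construct the torchtune model, ready to hand back to torchchat.
    return TUNE_MODEL_BUILDERS[alias]()
```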
Versions
Current Main Hash: 90749d2