Feature request

Add support for LlamaGen, an autoregressive image generation model, to the Transformers library. LlamaGen applies the next-token prediction paradigm of large language models to visual generation.

Paper: https://arxiv.org/abs/2406.06525
Code: https://github.com/FoundationVision/LlamaGen

Key components to implement:
Autoregressive image generation model (based on Llama architecture)
Class-conditional and text-conditional image generation
Classifier-free guidance for sampling (a minimal sketch follows this list)
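To make the guidance component concrete, here is a minimal sketch of what classifier-free guidance sampling could look like for this model. The function and argument names are mine, for illustration only; this is not the upstream implementation, which also batches the two branches and uses a KV cache.

```python
import torch

def sample_with_cfg(model, cond_ids, uncond_ids, num_image_tokens, cfg_scale=4.0):
    # Assumes `model` is a Hugging Face causal LM whose vocabulary covers the
    # VQ image-token codebook; `cond_ids` / `uncond_ids` hold the conditional
    # prompt (class label or text tokens) and the unconditional/null prompt.
    generated = []
    for _ in range(num_image_tokens):
        # One forward pass per branch (a real implementation would batch
        # the two branches together and cache past key/values).
        cond_logits = model(cond_ids).logits[:, -1, :]
        uncond_logits = model(uncond_ids).logits[:, -1, :]
        # Classifier-free guidance: push the conditional distribution
        # away from the unconditional one.
        logits = uncond_logits + cfg_scale * (cond_logits - uncond_logits)
        next_token = torch.multinomial(torch.softmax(logits, dim=-1), num_samples=1)
        generated.append(next_token)
        # The sampled token is fed back into both branches.
        cond_ids = torch.cat([cond_ids, next_token], dim=1)
        uncond_ids = torch.cat([uncond_ids, next_token], dim=1)
    # The resulting token grid is decoded to pixels by the VQ-VAE decoder.
    return torch.cat(generated, dim=1)
```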
Motivation
LlamaGen demonstrates that vanilla autoregressive models without vision-specific inductive biases can achieve state-of-the-art image generation performance. Implementing it in Transformers would enable easier experimentation and integration with existing language models.
Your contribution
I can help by contributing this model, and I can provide examples and detailed explanations of the model architecture and training process if needed.
Very interesting! As far as I know, we don't have image-generation models in transformers yet, or am I missing one? So I'm wondering which is the better place for such a model: transformers or diffusers (it's not a diffusion model, though).
cc @sayakpaul maybe
Hey! Just saw this issue. I've been working on and reviewing some VLM models that can generate images or text from image+text input. TBH we only have ImageGPT, a very old architecture for image generation that, if I understand correctly, is quite similar to LlamaGen. Two more PRs with image-generating VLMs are open: Chameleon's decoder VQ-VAE support, which went stale because the contributor got busy, and Emu3, which I can hopefully work on in the coming weeks.
I like LlamaGen and I think it would be a nice addition. From what I see, the model doesn't take an image as input, so no inpainting or other image-editing tasks, only generation from text (or a class label). It shouldn't be hard to fit into the general model API. Do we need any controlled, structured generation, for example limiting the generated tokens to a specific subset and length? It would be super nice if that kind of control could be done with the existing LogitsProcessors; adding new processors would add more maintenance burden for us.
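For what it's worth, the token-subset constraint should already fit the existing API: a custom LogitsProcessor passed to generate() via logits_processor can mask the vocabulary, and the fixed output length maps onto max_new_tokens / min_new_tokens. A rough sketch, where the class and the image_token_ids argument are hypothetical and only show the shape of the solution:

```python
import torch
from transformers import LogitsProcessor

class RestrictToImageTokens(LogitsProcessor):
    # Masks every id outside the VQ codebook range so that generation can
    # only produce image tokens. `image_token_ids` is hypothetical here:
    # the actual id range depends on how the checkpoint's vocabulary is
    # laid out after conversion.
    def __init__(self, image_token_ids):
        self.image_token_ids = torch.tensor(image_token_ids)

    def __call__(self, input_ids, scores):
        mask = torch.full_like(scores, float("-inf"))
        # Keep the original scores only for the allowed (image) token ids.
        mask[:, self.image_token_ids] = scores[:, self.image_token_ids]
        return mask
```

Length control would then just be setting min_new_tokens equal to max_new_tokens (the number of latent-grid tokens) in the GenerationConfig, so no new processors should be needed.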