[Feat] support TeaCache for FLUX1.dev #1243
RuixiangMa wants to merge 1 commit into vllm-project:main
Conversation
Signed-off-by: Lancer <maruixiang6688@gmail.com>
Force-pushed from 6102c32 to 8025273
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 6102c32feb
    def postprocess(h):
        h = module.norm_out(h, temb)
        h = module.proj_out(h)
        return Transformer2DModelOutput(sample=h)
Preserve Flux forward return type in TeaCache postprocess
The Flux extractor ignores the caller's return_dict flag and always returns Transformer2DModelOutput, which changes the public contract of FluxTransformer2DModel.forward when TeaCache is enabled. In this repo, pipeline_flux.diffuse explicitly calls self.transformer(..., return_dict=False) and indexes [0], but other call sites (or downstream users) may legitimately rely on receiving a tuple when return_dict=False, so the mismatch can cause behavioral regressions that surface only with TeaCache enabled.
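A minimal, self-contained sketch of the suggested fix follows. It uses stand-in classes (DummyModule, and a simplified Transformer2DModelOutput) rather than the repo's actual Flux transformer, and assumes postprocess can close over the caller's return_dict flag; the point is only to show the postprocess returning a tuple when return_dict=False and the dataclass otherwise.

```python
from dataclasses import dataclass


@dataclass
class Transformer2DModelOutput:
    """Minimal stand-in for diffusers' Transformer2DModelOutput."""
    sample: object


class DummyModule:
    """Stand-in for the Flux transformer's output layers (illustrative only)."""
    def norm_out(self, h, temb):
        return h + temb

    def proj_out(self, h):
        return h * 2


def make_postprocess(module, temb, return_dict):
    """Build a postprocess closure that preserves the caller's return_dict flag."""
    def postprocess(h):
        h = module.norm_out(h, temb)
        h = module.proj_out(h)
        if not return_dict:
            # Callers such as pipeline_flux.diffuse(..., return_dict=False)
            # expect a tuple and index [0]; honor that contract.
            return (h,)
        return Transformer2DModelOutput(sample=h)
    return postprocess


module = DummyModule()
as_tuple = make_postprocess(module, temb=1, return_dict=False)(3)
as_output = make_postprocess(module, temb=1, return_dict=True)(3)
```

With this shape, enabling TeaCache leaves FluxTransformer2DModel.forward's return type unchanged for both return_dict values.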
Purpose
Support TeaCache for FLUX1.dev.
Test Plan
Test Result
To be refactored once PR #1234 is merged.