
Commit 3b273df

pre-commit error
Signed-off-by: Didan Deng <33117903+wtomin@users.noreply.github.com>
1 parent cc1aae0 commit 3b273df

5 files changed: +5 −2 lines changed

docs/contributing/features/cache_dit.md

Lines changed: 0 additions & 1 deletion

````diff
@@ -239,7 +239,6 @@ images = omni.generate(
 
 **Solution:** Verify `pipeline.__class__.__name__` matches the registry key and add your enabler to `CUSTOM_DIT_ENABLERS`.
 
-
 ### Issue: Quality degradation
 
 **Symptoms:** Generated images have artifacts or lower quality compared to non-cached inference.
````
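
The hunk above sits in a troubleshooting entry about registering a custom DiT enabler. As a rough, hypothetical illustration of the suggested check (every name below is a placeholder, not vLLM-Omni's actual registry or enabler):

```python
# Hypothetical sketch of the suggested fix; all names are placeholders.
CUSTOM_DIT_ENABLERS = {}  # stand-in for the project's registry dict


class MyPipeline:
    """Stand-in for a diffusion pipeline class."""


def enable_my_pipeline_cache(pipeline):
    """Placeholder enabler that would wire the cache into the pipeline."""
    print(f"enabling cache for {pipeline.__class__.__name__}")


pipeline = MyPipeline()

# 1. The registry key must match pipeline.__class__.__name__ exactly.
key = pipeline.__class__.__name__  # "MyPipeline"

# 2. Register the enabler under that key so the lookup succeeds.
CUSTOM_DIT_ENABLERS[key] = enable_my_pipeline_cache
CUSTOM_DIT_ENABLERS[key](pipeline)
```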

docs/contributing/features/cfg_parallel.md

Lines changed: 1 addition & 0 deletions

````diff
@@ -153,6 +153,7 @@ class Wan22Pipeline(nn.Module, CFGParallelMixin):
         return current_model(**kwargs)[0]
 ```
 
+
 ### Override `cfg_normalize_function()` for Custom Normalization
 
 Some models have their own normalization function. Taking LongCat Image model as an example:
````
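
The added blank line precedes the section on overriding `cfg_normalize_function()`. The LongCat example itself is outside this hunk; purely as a sketch of what "custom normalization" can mean here (the standalone signature and the rescaling rule below are assumptions, not the project's API), one common CFG-rescale variant looks like:

```python
import torch


def cfg_normalize_function(noise_pred_guided: torch.Tensor,
                           noise_pred_cond: torch.Tensor) -> torch.Tensor:
    """Hypothetical normalization: rescale the CFG-guided prediction so its
    per-sample std matches the conditional prediction's std."""
    dims = tuple(range(1, noise_pred_guided.dim()))
    std_guided = noise_pred_guided.std(dim=dims, keepdim=True)
    std_cond = noise_pred_cond.std(dim=dims, keepdim=True)
    return noise_pred_guided * (std_cond / (std_guided + 1e-6))
```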

docs/contributing/features/sequence_parallel.md

Lines changed: 1 addition & 0 deletions

````diff
@@ -18,6 +18,7 @@ This section describes how to add Sequence Parallel (SP) to a diffusion transfor
 
 ## Overview
 
+
 ### What is Sequence Parallel?
 
 **Terminology Note:** Our "Sequence Parallelism" (SP) corresponds to "Context Parallelism" (CP) in the [diffusers library](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/_modeling_parallel.py). We use "Sequence Parallelism" to align with vLLM-Omni's terminology.
````

docs/contributing/features/teacache.md

Lines changed: 2 additions & 1 deletion

````diff
@@ -156,7 +156,7 @@ Create a callable that executes all transformer blocks. This encapsulates the ma
 
 **Key Points:**
 
-- Return format:
+- Return format:
     - For single-stream models: return `(hidden_states,)`
     - For dual-stream models: return `(hidden_states, encoder_hidden_states)`
 
@@ -239,6 +239,7 @@ _MODEL_COEFFICIENTS = {
 }
 ```
 
+
 **Initial approach:** Start with coefficients from a similar model architecture, then tune empirically following [Customization](#customization) section.
 
 ---
````
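
The first hunk documents the return convention for the transformer-blocks callable. Purely as an illustration of that convention (the argument names and block interface below are assumptions, not vLLM-Omni's actual API):

```python
from typing import Optional, Tuple

import torch


def run_transformer_blocks(
    blocks,  # assumed: an iterable of transformer blocks
    hidden_states: torch.Tensor,
    encoder_hidden_states: Optional[torch.Tensor] = None,
    **kwargs,
) -> Tuple[torch.Tensor, ...]:
    """Execute all blocks and return outputs in the documented shape."""
    for block in blocks:
        if encoder_hidden_states is None:
            # Single-stream model: each block transforms hidden_states only.
            hidden_states = block(hidden_states, **kwargs)
        else:
            # Dual-stream model: each block updates both streams.
            hidden_states, encoder_hidden_states = block(
                hidden_states, encoder_hidden_states, **kwargs
            )

    # Return format from the hunk above:
    #   single-stream -> (hidden_states,)
    #   dual-stream   -> (hidden_states, encoder_hidden_states)
    if encoder_hidden_states is None:
        return (hidden_states,)
    return (hidden_states, encoder_hidden_states)
```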

docs/contributing/features/tensor_parallel.md

Lines changed: 1 addition & 0 deletions

````diff
@@ -42,6 +42,7 @@ The Tensor Parallel implementation relies vLLM's Parallel Layers:
 
 ## Step-by-Step Implementation
 
+
 ### Step 1: Identify Linear Layers
 
 Find all `nn.Linear` layers in your transformer that need to be sharded.
````
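
The hunk context is Step 1 of the guide: locating the `nn.Linear` layers to shard. A minimal, generic way to enumerate them with plain PyTorch (the toy model below is a placeholder, not vLLM-Omni code):

```python
import torch.nn as nn


def find_linear_layers(model: nn.Module) -> dict:
    """Collect every nn.Linear in the model, keyed by its qualified name."""
    return {
        name: module
        for name, module in model.named_modules()
        if isinstance(module, nn.Linear)
    }


# Toy stand-in for a transformer block, just to show the output.
block = nn.Sequential(nn.Linear(64, 256), nn.GELU(), nn.Linear(256, 64))
for name, layer in find_linear_layers(block).items():
    print(f"{name}: in={layer.in_features}, out={layer.out_features}")
```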
