[WIP] Skip gradient collection for documents with fewer than two tokens#188

Open
luciaquirke wants to merge 3 commits into main from fix/skip-short-documents
Conversation

@luciaquirke
Collaborator

@luciaquirke luciaquirke commented Mar 10, 2026

Documents with fewer than 2 tokens cannot produce valid next-token labels, and length-0 documents create [N, 0] tensors that hang the model forward pass. In multi-GPU settings, the bin-packing allocator assigned all zero-length documents to a single rank (cost = 0), causing that rank to stall while others completed their NCCL all-reduces.

Fix by filtering <2-token documents from batch allocation in _allocate_batches_world. Their gradient index entries remain at the pre-initialized zero value, preserving the dataset-to-score index mapping.
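The filtering-plus-packing step can be sketched roughly as below. The name `_allocate_batches_world` comes from this PR, but the signature, the greedy longest-first packing, and the document representation are all illustrative assumptions, not the repository's actual code.

```python
def allocate_batches_world(doc_lengths, world_size):
    """Bin-pack documents across ranks, skipping docs with fewer than 2 tokens.

    Hypothetical sketch: docs with < 2 tokens cannot produce a next-token
    label, and zero-length docs all carry cost 0, so the allocator would
    otherwise dump them on a single rank and stall its all-reduce.
    """
    # Keep only documents that can yield at least one (input, label) pair.
    eligible = [(i, n) for i, n in enumerate(doc_lengths) if n >= 2]

    ranks = [[] for _ in range(world_size)]
    costs = [0] * world_size
    # Longest-first greedy assignment to the currently cheapest rank.
    for i, n in sorted(eligible, key=lambda t: -t[1]):
        r = costs.index(min(costs))
        ranks[r].append(i)
        costs[r] += n
    return ranks
```

Skipped documents never enter any rank's batch list, so their gradient index entries are simply left at the zero value they were initialized with.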

TODO:

We probably want to update the "is_written" column in scores at these positions to reflect that the zero gradient is intentional. We may also want to raise an error here unless a --skip_empty_docs flag is given.

luciaquirke and others added 3 commits March 10, 2026 05:23
Documents with fewer than 2 tokens cannot produce valid next-token
labels, and length-0 documents create [N, 0] tensors that hang the
model forward pass. In multi-GPU settings, the bin-packing allocator
assigned all zero-length documents to a single rank (cost = 0), causing
that rank to stall while others completed their NCCL all-reduces.

Fix by filtering <2-token documents from batch allocation in
_allocate_batches_world. Their gradient index entries remain at the
pre-initialized zero value, preserving the dataset-to-score index
mapping.

Also chunk the normalizer backward hook to process the outer-product
matmul in groups of 32 documents, preventing OOM when many short
documents pack into a single batch (P tensor scales as N * O * I).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Documents with fewer than 2 tokens cannot produce valid next-token
labels. Length-0 documents also create [N, 0] tensors that hang the
model forward pass. The bin-packing cost function (max_len * batch_size)
gives these documents cost=0, assigning all of them to a single rank
and causing NCCL timeouts in multi-GPU runs.

Skip them in _allocate_batches_world; their gradient index entries
remain at the pre-initialized zero value.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@LouisYRYJ
Contributor

I wonder if a print statement is sufficient here? It might silently drown under all the other stuff we log

@luciaquirke
Collaborator Author

> I wonder if a print statement is sufficient here? It might silently drown under all the other stuff we log

What do you suggest?
