
Remove constraint on max sequence length#526

Open
yawenzzzz wants to merge 2 commits into main from yawenz/20260406_remove_max_sequence_length

Conversation

@yawenzzzz
Collaborator

When rslearn aligns timestamps across multiple modalities, the resulting temporal sequence can exceed 12 timesteps. Previously, CompositeEncodings pre-allocated a fixed-size position encoding table based on max_sequence_length (default 12), and raised an error when the input had more timesteps than the table could hold. This fix computes the temporal position encoding on the fly from the actual number of input timesteps, removing the fixed limit. It is also backward compatible, since the pos_embed parameter is dropped directly from old checkpoints when they are loaded.
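A minimal sketch of the idea described above, not the actual rslearn implementation: the encoding table is generated from the real sequence length at forward time instead of being pre-allocated at a fixed max_sequence_length, and a legacy `pos_embed` entry is stripped from old checkpoints before loading. The function and key names here are assumptions for illustration; it uses a standard sinusoidal encoding via NumPy to keep the example self-contained.

```python
import numpy as np

def temporal_position_encoding(num_timesteps: int, dim: int) -> np.ndarray:
    """Sinusoidal position encoding computed on the fly for any number of
    timesteps, so no fixed-size max_sequence_length table is needed."""
    positions = np.arange(num_timesteps)[:, None]                   # (T, 1)
    freqs = np.exp(-np.log(10000.0) * np.arange(0, dim, 2) / dim)   # (dim/2,)
    angles = positions * freqs[None, :]                             # (T, dim/2)
    enc = np.zeros((num_timesteps, dim))
    enc[:, 0::2] = np.sin(angles)  # even channels: sine
    enc[:, 1::2] = np.cos(angles)  # odd channels: cosine
    return enc

def strip_legacy_pos_embed(state_dict: dict) -> dict:
    """Backward compatibility: old checkpoints carry a fixed-size pos_embed
    table; dropping it keeps them loadable once encodings are computed
    on the fly. The "pos_embed" key name is a hypothetical example."""
    return {k: v for k, v in state_dict.items() if not k.endswith("pos_embed")}

# Sequences longer than the old 12-step limit now work:
pe = temporal_position_encoding(20, 64)
assert pe.shape == (20, 64)
```

Computing the encoding per batch trades a negligible amount of compute for the removal of the hard length limit, and deleting the stale table entry avoids a shape-mismatch error when loading older weights.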

@github-actions github-actions Bot added the size/s label Apr 6, 2026
Collaborator

@favyen2 left a comment


thanks for updating this!

@robmarkcole
Copy link
Copy Markdown
Contributor

I'm quite keen to test this, @yawenzzzz do you have any results to share?



3 participants