⛔ Add EOS token to processed input in SFT #3091
Conversation
Is this for the Gemma 3 generation issue?
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
@qgallouedec I've faced this multiple times. I think it's just a consequence of the (not-so-good) practice, found in examples everywhere, of setting the pad token to the EOS token. The SFT preprocessing then masks everything that is a pad token (= the EOS token), including the real EOS token added by the chat template.
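To make the failure mode concrete, here is a minimal sketch (not TRL's actual preprocessing; the model name and the naive collator logic are illustrative assumptions) of how `pad_token = eos_token` ends up masking the real EOS in the labels:

```python
# Sketch of the pad == eos masking problem. Model name is illustrative.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct")
tokenizer.pad_token = tokenizer.eos_token  # the common but problematic shortcut

text = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Hi"}, {"role": "assistant", "content": "Hello!"}],
    tokenize=False,
)
enc = tokenizer(text, padding="max_length", max_length=32)

# A naive collator that ignores every pad token in the labels also ignores the
# genuine EOS at the end of the assistant turn, because both share the same id:
labels = [
    -100 if tok == tokenizer.pad_token_id else tok
    for tok in enc["input_ids"]
]
# The EOS positions now carry -100, so the model is never trained to emit EOS
# and tends to keep generating without stopping.
```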
Personally, I don't think these forced patches are a good design. I understand that you want the Trainers to work out of the box, but users should still make sure they have a chat template that adds an EOS token properly. And if someone doesn't want an EOS token, they can no longer opt out.
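For reference, one way to avoid the masking conflict at the tokenizer level, instead of relying on the trainer to append EOS, is to give the tokenizer a dedicated pad token. A minimal sketch, where the model name and the pad token string are assumptions:

```python
# Sketch: use a dedicated pad token rather than reusing EOS, so masking pads
# never hides the real EOS. Model name and pad string are illustrative.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct")
if tokenizer.pad_token is None:
    tokenizer.add_special_tokens({"pad_token": "<|pad|>"})
    # If a brand-new token was added, resize the model embeddings accordingly:
    # model.resize_token_embeddings(len(tokenizer))
```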
I got this `TypeError: 'Qwen2TokenizerFast' object is not subscriptable` after changing this code.
So that the model learns to generate EOS.