Fix accuracy reward for math #566

Merged · 7 commits · Apr 1, 2025

Conversation

lewtun (Member) commented on Mar 31, 2025:

This PR adds an important fix to the accuracy_reward() function to:

  • parse non-LaTeX gold solutions like "6" more robustly
  • fix the order of arguments in verify()
  • replace the reward for failed parses with None instead of 1.0, which excludes corrupted samples from the loss => more stable training

The latter is particularly important to avoid spurious reward curves where the baseline accuracy looks high simply because the gold answers could not be parsed.
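
In rough terms, the change amounts to something like this (a sketch only, based on the math_verify parse/verify helpers; see the diff for the exact code):

from math_verify import LatexExtractionConfig, parse, verify

def accuracy_reward(completions, solution, **kwargs):
    """Reward 1.0 if the completion matches the gold answer, 0.0 otherwise,
    and None when the gold answer cannot be parsed (the sample is skipped)."""
    contents = [completion[0]["content"] for completion in completions]
    rewards = []
    for content, sol in zip(contents, solution):
        # Use the default extraction config so plain gold answers like "6" are parsed too.
        gold_parsed = parse(sol, extraction_mode="first_match")
        if len(gold_parsed) == 0:
            # Gold answer could not be parsed: return None so the sample is
            # excluded from the loss instead of receiving a spurious 1.0.
            rewards.append(None)
            continue
        answer_parsed = parse(
            content,
            extraction_config=[LatexExtractionConfig()],
            extraction_mode="first_match",
        )
        # Pass the gold answer first, matching verify(gold, target).
        rewards.append(float(verify(gold_parsed, answer_parsed)))
    return rewards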

Here are some WandB logs to show the effect, but you can already see it in this screenshot (red = main, green = this PR):

[Screenshot 2025-03-31: training reward curves, red = main, green = this PR]

I am not sure why we decided to set the default reward to 1 for parsing errors, but it seems counterintuitive to me.

Fixes #557


def test_accuracy_reward_wrong_answer_no_latex(self):

lewtun (Member, Author) commented:

This test fails on main because we defaulted to reward = 1 when the gold answer could not be parsed.
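
For illustration, such a test can look roughly like this (a sketch inside the existing unittest.TestCase, assuming accuracy_reward is imported as above; the actual test body in the PR may differ):

def test_accuracy_reward_wrong_answer_no_latex(self):
    """Wrong completion with a plain, non-LaTeX gold answer."""
    completions = [[{"content": "The answer is 5"}]]
    solution = ["6"]
    rewards = accuracy_reward(completions, solution)
    # With the fix, the gold "6" parses and the wrong answer scores 0.0;
    # on main, the gold failed to parse and the reward defaulted to 1.
    self.assertEqual(rewards[0], 0.0)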

"""Reward function that checks if the completion is the same as the ground truth."""
contents = [completion[0]["content"] for completion in completions]
rewards = []
for content, sol in zip(contents, solution):
gold_parsed = parse(
sol,
extraction_mode="first_match",
extraction_config=[LatexExtractionConfig()],

lewtun (Member, Author) commented:

It is better to use the default extraction config to allow one to also parse pure numbers like 6 in the answer
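
For example (a rough illustration; the exact extraction behaviour depends on the math_verify version):

from math_verify import LatexExtractionConfig, parse

# With only the LaTeX extractor, a bare "6" is typically not extracted.
parse("6", extraction_config=[LatexExtractionConfig()], extraction_mode="first_match")  # often []

# With the default extraction config, plain numeric answers are recovered as well.
parse("6", extraction_mode="first_match")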

qgallouedec (Member) commented:

@lewtun do you mind also fixing #566 in this PR?

edbeeching (Collaborator) left a comment:

LGTM, is it worth filtering the datasets we are working with to ensure all gold answers can be parsed?
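
Such a filter could be as simple as something like this (a sketch, assuming a Hugging Face datasets object with a "solution" column; not part of this PR):

from math_verify import parse

def has_parsable_gold(example):
    # Keep only rows whose gold solution can be parsed at all.
    return len(parse(example["solution"], extraction_mode="first_match")) > 0

# dataset: a datasets.Dataset with a "solution" column (assumed)
dataset = dataset.filter(has_parsable_gold)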

lewtun merged commit 4f5b21e into main on Apr 1, 2025 (1 check passed).
lewtun deleted the fix-acc-reward branch on April 1, 2025 at 10:04.

Successfully merging this pull request may close these issues.

accuracy_reward: difference in ordering of arguments in verify?