ℹ️ This is the 4-shot variant!
Measuring Mathematical Problem Solving With the MATH Dataset https://arxiv.org/abs/2103.03874
Many intellectual endeavors require mathematical problem solving, but this skill remains beyond the capabilities of computers. To measure this ability in machine learning models, we introduce MATH, a new dataset of 12,500 challenging competition mathematics problems. Each problem in MATH has a full step-by-step solution which can be used to teach models to generate answer derivations and explanations.
NOTE: The few-shot prompting and the generated-answer extraction follow the Minerva paper, and exact-match equivalence is checked using the sympy library. This requires additional dependencies, which can be installed via the lm-eval[math] extra.
Homepage: https://github.com/hendrycks/math
@article{hendrycksmath2021,
    title={Measuring Mathematical Problem Solving With the MATH Dataset},
    author={Dan Hendrycks and Collin Burns and Saurav Kadavath and Akul Arora and Steven Basart and Eric Tang and Dawn Song and Jacob Steinhardt},
    journal={Advances in neural information processing systems},
    year={2021}
}

@article{lewkowycz2022solving,
    title={Solving quantitative reasoning problems with language models},
    author={Lewkowycz, Aitor and Andreassen, Anders and Dohan, David and Dyer, Ethan and Michalewski, Henryk and Ramasesh, Vinay and Slone, Ambrose and Anil, Cem and Schlag, Imanol and Gutman-Solo, Theo and others},
    journal={Advances in neural information processing systems},
    volume={35},
    pages={3843--3857},
    year={2022}
}

@misc{kydlicek2025fixing,
    title={Fixing open llm leaderboard with math-verify},
    author={Kydlicek, Hynek and Lozovskaya, Alina and Habib, Nathan and Fourrier, Cl{\'e}mentine},
    year={2025}
}
Groups:
- minerva_math

Tasks:
- minerva_math_algebra
- minerva_math_counting_and_prob
- minerva_math_geometry
- minerva_math_intermediate_algebra
- minerva_math_num_theory
- minerva_math_prealgebra
- minerva_math_precalc
- minerva_math500
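The group and its per-subject tasks can be run through the harness's CLI; a usage sketch (the model name is a placeholder, and exact flags may vary between harness versions):

```shell
# Install the extra dependencies needed for sympy-based answer checking,
# then evaluate the full 4-shot MATH group on a Hugging Face model.
pip install "lm-eval[math]"
lm_eval --model hf \
    --model_args pretrained=meta-llama/Llama-2-7b-hf \
    --tasks minerva_math
```

Individual subjects can be selected instead, e.g. `--tasks minerva_math_algebra`.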
The checklist is the following:
For adding novel benchmarks/datasets to the library:
- Is the task an existing benchmark in the literature?
- Have you referenced the original paper that introduced the task?
- If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?
- The implementation in the original paper first fine-tunes the model on the data. The paper does include a few-shot evaluation for GPT-3; however, the few-shot context used here is sourced from Lewkowycz et al. The accuracy achieved with Llama-2 models is comparable to, though not identical to, that reported in the paper.
If other tasks on this dataset are already supported:
- Is the "Main" variant of this task clearly denoted?
- Have you provided a short sentence in a README on what each new variant adds / evaluates?
- Have you noted which, if any, published evaluation setups are matched by this variant?
Variant wishlist:
- zero-shot variant
Changelog:
- version 2.0 (21-Feb-2025): added the math_verify (extraction) metric. For details see
- version 3.0 (21-Aug-2025): pass the full solution and model generation to math_verify's parse.