📣 TASK: Add evaluation results to model cards across the Hub. Together, we're building a distributed leaderboard of open source model performance.
Note
Bonus XP for contributing to the leaderboard application. Open a PR on the hub or on GitHub to get your (bonus) XP.
Model cards without evaluation data are hard to compare. Adding structured eval results to the metadata makes models easier to review side by side, powers leaderboards, and helps the community find the best models for their needs. Doing this in a distributed way also means our evaluation results are shared with everyone.
- Add eval scores to the 100 trending models on the Hub
- Include scores for AIME 2025, BigBenchHard, LiveCodeBench, MMLU, and ARC on trending models.
- It is ok to include a subset of the benchmarks available for the model.
- Build a leaderboard application that shows the evaluation results for the trending models.
Taking part is simple: we need model authors' evaluation results to appear in their model cards. This is a clean-up job!
| Tier | XP | Description | What Counts |
|---|---|---|---|
| 🐢 Contributor | 1 XP | Extract evaluation results from one benchmark and update its model card. | Any PR on the repo with evaluation data. |
| 🐕 Evaluator | 5 XP | Import scores from third-party benchmarks like Artificial Analysis. | Benchmark scores not yet on the model card, plus a merged PR. |
| 🦁 Advanced | 20 XP | Run your own evaluation with inspect-ai and publish results. | Original eval run and merged PR. |
| 🐉 Bonus | 20 XP | Contribute to the leaderboard application. | Any Merged PR on the hub or GitHub. |
| 🤢 Slop | -20 XP | Opening unhelpful PRs. | Duplicate PRs, incorrect eval scores, incorrect benchmark scores |
Warning
This hackathon is about advancing the state of open source AI. We want useful PRs that help everyone out, not just metrics.
Use the `hugging-face-evaluation/` skill for this quest. Key capabilities:
- Extract evaluation tables from existing README content posted by model authors.
- Import benchmark scores from Artificial Analysis.
- Run your own evals with inspect-ai on HF Jobs.
- Update model-index metadata in the model card.
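The first capability, extracting tables from README content, boils down to parsing a markdown benchmark table into structured scores. A minimal, hypothetical sketch of the idea (this is not the actual `evaluation_manager.py` implementation):

```python
import re

def parse_eval_table(markdown: str) -> dict[str, float]:
    """Parse a simple two-column markdown benchmark table into {benchmark: score}.

    Assumes one header row, a |---|---| separator row, and numeric scores
    (optionally suffixed with %).
    """
    scores = {}
    for line in markdown.strip().splitlines():
        cells = [c.strip() for c in line.strip().strip("|").split("|")]
        if len(cells) != 2:
            continue
        name, value = cells
        # Skip the header row and the |---|---| separator row
        if set(value) <= set("-: ") or not re.match(r"^\d+(\.\d+)?%?$", value):
            continue
        scores[name] = float(value.rstrip("%"))
    return scores

table = """
| Benchmark | Score |
|---|---|
| MMLU | 68.2 |
| GSM8K | 54.1% |
"""
print(parse_eval_table(table))  # {'MMLU': 68.2, 'GSM8K': 54.1}
```

Real README tables are messier (multi-column, footnotes, transposed layouts), which is why the script offers a `--dry-run` preview.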
Note
Take a look at `SKILL.md` for more details.
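For reference, the `model-index` block these tools write into the model card's YAML front matter looks roughly like this (model name and values below are placeholders):

```yaml
model-index:
- name: model-name
  results:
  - task:
      type: text-generation
    dataset:
      name: MMLU
      type: mmlu
    metrics:
    - type: accuracy
      value: 68.2
      name: MMLU (5-shot)
```

The Hub renders this block as an evaluation table on the model page, which is what makes the scores machine-readable for leaderboards.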
- Pick a trending model on the Hub that has no evaluation data
- Use the skill to extract or add a benchmark score
- Create a PR (or push directly if you own the model)
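If you prefer scripting the final step yourself, `huggingface_hub` exposes `metadata_update`, which can open a PR on the target repo for you. A sketch under placeholder values (the repo id and score below are not real results):

```python
# Build a model-index payload by hand; all values here are placeholders.
metadata = {
    "model-index": [{
        "name": "model-name",
        "results": [{
            "task": {"type": "text-generation"},
            "dataset": {"name": "MMLU", "type": "mmlu"},
            "metrics": [{"type": "accuracy", "value": 68.2, "name": "MMLU (5-shot)"}],
        }],
    }]
}

# Requires `pip install huggingface_hub` and a write token.
# create_pr=True opens a PR instead of pushing directly, which is the
# polite option for models you don't own:
# from huggingface_hub import metadata_update
# metadata_update("model-author/model-name", metadata, create_pr=True)

print(metadata["model-index"][0]["results"][0]["metrics"][0]["value"])  # 68.2
```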
The agent will use this script to extract evaluation tables from the model's README.
```bash
python skills/hugging-face-evaluation/scripts/evaluation_manager.py extract-readme \
  --repo-id "model-author/model-name" --dry-run
```

- Find a model with benchmark data on external sites
- Use `import-aa` to fetch scores from the Artificial Analysis API
- Create a PR with properly attributed evaluation data
The agent will use this script to fetch scores from Artificial Analysis API and add them to the model card.
```bash
python skills/hugging-face-evaluation/scripts/evaluation_manager.py import-aa \
  --creator-slug "anthropic" --model-name "claude-sonnet-4" \
  --repo-id "target/model" --create-pr
```

- Choose an eval task (MMLU, GSM8K, HumanEval, etc.)
- Run the evaluation on HF Jobs infrastructure
- Update the model card with your results and methodology
The agent will use this script to run the evaluation on HF Jobs infrastructure and update the model card with the results.
```bash
HF_TOKEN=$HF_TOKEN hf jobs uv run skills/hugging-face-evaluation/scripts/inspect_eval_uv.py \
  --flavor a10g-small --secret HF_TOKEN=$HF_TOKEN \
  -- --model "meta-llama/Llama-2-7b-hf" --task "mmlu"
```

- Always use `--dry-run` first to preview changes before pushing
- Check for transposed tables where models are rows and benchmarks are columns
- Be careful with PRs for models you don't own: most maintainers appreciate eval contributions, but stay respectful.
- Manually validate the extracted scores and close PRs if needed.
- SKILL.md — Full skill documentation
- Example Usage — Worked examples
- Metric Mapping — Standard metric types