From a33b33e8d0ee394a809abc0ad72e43183b29d06b Mon Sep 17 00:00:00 2001
From: Nuanced <185998268+nuance-dev@users.noreply.github.com>
Date: Sat, 21 Feb 2026 13:46:03 -0300
Subject: [PATCH] Add RIVAL to LLM Leaderboard section

---
 README.md | 1 +
 1 file changed, 1 insertion(+)

diff --git a/README.md b/README.md
index 083ff999..c29093d3 100644
--- a/README.md
+++ b/README.md
@@ -153,6 +153,7 @@
 - [LiveBench](https://livebench.ai/#/) - A Challenging, Contamination-Free LLM Benchmark.
 - [Open LLM Leaderboard](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) - aims to track, rank, and evaluate LLMs and chatbots as they are released.
 - [AlpacaEval](https://tatsu-lab.github.io/alpaca_eval/) - An Automatic Evaluator for Instruction-following Language Models using Nous benchmark suite.
+- [Rival](https://rival.tips) - AI model comparison platform with blind preference voting across 200+ models, community-driven rankings, open datasets, and a multi-model Prompt Lab.