README.md: 24 additions & 5 deletions
@@ -14,7 +14,8 @@ Welcome to `rerankers`! Our goal is to provide users with a simple API to use an

 ## Updates

-- v0.2.0: 🆕 [FlashRank](https://github.com/PrithivirajDamodaran/FlashRank) rerankers, Basic async support thanks to [@tarunamasa](https://github.com/tarunamasa), MixedBread.ai reranking API
+- v0.3.0: 🆕 Many changes! Experimental support for RankLLM, directly backed by the [rank-llm library](https://github.com/castorini/rank_llm). A new `Document` object, courtesy of joint work by [@bclavie](https://github.com/bclavie) and [@Anmol6](https://github.com/Anmol6). This object is transparent, but now offers support for `metadata` stored alongside each document. Many small QoL changes (`RankedResults` can be iterated over directly...)
+- v0.2.0: [FlashRank](https://github.com/PrithivirajDamodaran/FlashRank) rerankers, basic async support thanks to [@tarunamasa](https://github.com/tarunamasa), MixedBread.ai reranking API
 - v0.1.2: Voyage reranking API
 - v0.1.1: Langchain integration fixed!
 - v0.1.0: Initial release
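The `Document` object with `metadata` and the directly iterable `RankedResults` described in the v0.3.0 entry can be pictured with a minimal, self-contained sketch. These dataclasses are illustrative stand-ins only; the field names and class layout are assumptions, not the library's actual API:

```python
from dataclasses import dataclass, field

# Illustrative stand-ins for rerankers' Document / RankedResults.
# Field names here are assumptions, not the library's real classes.
@dataclass
class Document:
    text: str
    doc_id: int
    metadata: dict = field(default_factory=dict)  # stored alongside each document

@dataclass
class Result:
    document: Document
    score: float

@dataclass
class RankedResults:
    results: list

    def __iter__(self):
        # QoL change: a RankedResults object can be iterated over directly
        return iter(self.results)

docs = [
    Document("Paris is the capital of France.", 0, {"source": "wiki"}),
    Document("Berlin is the capital of Germany.", 1, {"source": "wiki"}),
]
ranked = RankedResults([Result(docs[0], 0.92), Result(docs[1], 0.17)])

for result in ranked:  # direct iteration, no .results needed
    print(result.document.metadata["source"], result.score)
```

The point of the "transparent" design is that the object adds metadata without changing how you consume results: iteration and indexing keep working as before.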
@@ -59,6 +60,9 @@ pip install "rerankers[api]"

 # FlashRank rerankers (ONNX-optimised, very fast on CPU)
 pip install "rerankers[fastrank]"
+
+# RankLLM rerankers (better RankGPT + support for local models such as RankZephyr and RankVicuna)
+pip install "rerankers[rankllm]"
_Rerankers will always try to infer the model you're trying to use based on its name, but it's always safer to pass a `model_type` argument to it if you can!_
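The inference-from-name behaviour mentioned above can be illustrated with a small heuristic. This sketch is hypothetical — the keywords and fallback are made up for illustration and are not rerankers' actual resolution logic — but it shows why passing `model_type` explicitly is safer than relying on name matching:

```python
# Hypothetical sketch of name-based model-type inference.
# The keyword checks below are illustrative assumptions, not
# the rerankers library's real implementation.
def infer_model_type(model_name: str) -> str:
    name = model_name.lower()
    if "colbert" in name:
        return "colbert"
    if "flashrank" in name:
        return "flashrank"
    if "rankgpt" in name:
        return "rankgpt"
    if name.startswith(("cohere", "jina", "voyage", "mixedbread")):
        return "api"
    # Ambiguous names fall through to a default guess -- exactly the
    # case where passing an explicit model_type avoids surprises.
    return "cross-encoder"

print(infer_model_type("colbert-ir/colbertv2.0"))  # -> colbert
```

Any model whose name doesn't contain a recognisable keyword lands in the fallback branch, so an explicit `model_type` argument removes the guesswork entirely.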
@@ -180,18 +199,18 @@ Legend:

 Models:
 - ✅ Any standard SentenceTransformer or Transformers cross-encoder
-- 🟠 RankGPT (Implemented using the original repo, but missing the rank-llm repo's improvements)
+- ✅ RankGPT (Available both via the original RankGPT implementation and the improved RankLLM one)
 - ✅ Cohere, Jina, Voyage and MixedBread API rerankers
 - ✅ [FlashRank](https://github.com/PrithivirajDamodaran/FlashRank) rerankers (ONNX-optimised models, very fast on CPU)
 - 🟠 ColBERT-based reranker - not a model initially designed for reranking, but quite strong (third-party implementation that could be further optimised)
-- 📍 MixedBread API (Reranking API not yet released)
-- 📍⭐ RankLLM/RankZephyr (Proper RankLLM implementation will replace the RankGPT one, and introduce RankZephyr support)
+- 🟠⭐ RankLLM/RankZephyr: supported by wrapping the [rank-llm](https://github.com/castorini/rank_llm) library! Support for RankZephyr/RankVicuna is untested, but RankLLM + GPT models fully work!
 - 📍 LiT5

 Features:
+- ✅ Metadata!
 - ✅ Reranking
 - ✅ Consistency notebooks to ensure performance on `scifact` matches the literature for any given model implementation (except RankGPT, where results are harder to reproduce)
+- ✅ ONNX runtime support --> offered through [FlashRank](https://github.com/PrithivirajDamodaran/FlashRank) -- in line with the philosophy of the lib, we won't reinvent the wheel when @PrithivirajDamodaran is doing amazing work!
 - 📍 Training on Python >=3.10 (via interfacing with other libraries)
-- 📍 ONNX runtime support --> Unlikely to be immediate
"The key 'gpt' currently defaults to the rough rankGPT implementation. From version 0.0.5 onwards, 'gpt' will default to RankLLM instead. Please specify the 'rankgpt' `model_type` if you want to keep the current behaviour",
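The deprecation warning string above implies that the `'gpt'` key is mapped to a default implementation at lookup time. A minimal sketch of how such a default-plus-warning switch could work — the function and mapping names here are hypothetical, not the library's internals:

```python
import warnings

# Hypothetical sketch of a default model_type switch with a deprecation
# warning. DEFAULTS and resolve_model_type are illustrative names only.
DEFAULTS = {"gpt": "rankgpt"}  # current default; slated to become RankLLM

def resolve_model_type(key: str) -> str:
    if key == "gpt":
        warnings.warn(
            "The key 'gpt' currently defaults to the rough rankGPT "
            "implementation. From version 0.0.5 onwards, 'gpt' will default "
            "to RankLLM instead. Please specify the 'rankgpt' model_type "
            "if you want to keep the current behaviour",
            DeprecationWarning,
        )
    return DEFAULTS.get(key, key)
```

Users who pass `model_type="rankgpt"` explicitly would bypass the ambiguous `'gpt'` key and keep the current behaviour across the version change.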