# Anserini Regressions: TREC 2022 DL Track on V2.1 Corpus
**Model**: [SPLADE-v3](https://arxiv.org/abs/2403.06789) (using cached queries)
This page describes experiments, integrated into Anserini's regression testing framework, on the [TREC 2022 Deep Learning Track document ranking task](https://trec.nist.gov/data/deep2022.html) using the MS MARCO V2.1 _segmented_ document corpus, which was derived from the MS MARCO V2 segmented document corpus and prepared for the TREC 2024 RAG Track.
Note that the NIST relevance judgments provide far more relevant documents per topic than the "sparse" judgments provided by Microsoft; to emphasize this contrast, the NIST judgments are sometimes called "dense" judgments.
An important caveat is that these document judgments were inferred from the passages.
That is, if a passage is relevant, the document containing it is considered relevant.
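As a rough sketch of this inference rule (not the official NIST tooling), the projection can be expressed over qrels in the standard `topic Q0 docid rel` format, assuming segment IDs extend document IDs with a `#<segment>` suffix as in the V2.1 segmented corpus:

```bash
# Hypothetical sketch: project segment-level judgments to document level,
# keeping the maximum relevance grade over a document's segments.
awk '{ split($3, id, "#"); key = $1 SUBSEP id[1];
       if (!(key in rel) || $4 + 0 > rel[key] + 0) rel[key] = $4 }
     END { for (k in rel) { split(k, p, SUBSEP);
           print p[1], "Q0", p[2], rel[k] } }' \
  qrels.segments.txt > qrels.docs.txt
```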
The model itself can be downloaded [here](https://huggingface.co/naver/splade-v3).
See the [official SPLADE repo](https://github.com/naver/splade) and the following paper for more details:
> Carlos Lassance, Hervé Déjean, Thibault Formal, and Stéphane Clinchant. [SPLADE-v3: New baselines for SPLADE.](https://arxiv.org/abs/2403.06789) _arXiv:2403.06789_.
In these experiments, we are using cached queries (i.e., cached results of query encoding).
The exact configurations for these regressions are stored in [this YAML file](${yaml}).
Note that this page is automatically generated from [this template](${template}) as part of Anserini's regression pipeline, so do not modify this page directly; modify the template instead.
From one of our Waterloo servers (e.g., `orca`), the following command will perform the complete regression, end to end:
```bash
bin/run.sh io.anserini.reproduce.ReproduceFromDocumentCollection --index --verify --search --config ${test_name}
```
We make available a version of the MS MARCO V2.1 segmented document corpus that has already been encoded with SPLADE-v3.
From any machine, the following command will download the corpus and perform the complete regression, end to end:
```bash
bin/run.sh io.anserini.reproduce.ReproduceFromDocumentCollection --download --index --verify --search --config ${test_name}
```
The above commands automate the following steps, but if you want to perform each step manually, simply copy/paste from the commands below and you'll obtain the same regression results.
## Corpus Download
Download the corpus and unpack into `collections/`:
```bash
wget ${download_url} -P collections/
tar xvf collections/${download_corpus}.tar -C collections/
```
To confirm, `${download_corpus}.tar` is 125 GB and has MD5 checksum `${download_checksum}`.
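To verify the download, you can compare the checksum directly (a quick check; `md5sum` on Linux, `md5 -q` on macOS):

```bash
md5sum collections/${download_corpus}.tar
# The printed digest should match ${download_checksum}.
```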
With the corpus downloaded, the following command will perform the remaining steps below:
```bash
bin/run.sh io.anserini.reproduce.ReproduceFromDocumentCollection --index --verify --search --config ${test_name} \
--corpus-path collections/${download_corpus}
```
## Indexing
Typical indexing command:
```bash
${index_cmds}
```
The setting of `-input` should be a directory containing the compressed `jsonl` files that comprise the corpus.
The important indexing options to note here are `-impact -pretokenized`: the first tells Anserini not to encode BM25 document lengths into Lucene's norms (which is the default behavior), and the second tells it not to apply any additional tokenization to the pre-encoded tokens.
For additional details, see explanation of [common indexing options](${root_path}/docs/common-indexing-options.md).
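If you want to peek at the pre-encoded input before indexing, the following sketch prints the beginning of the first record of one shard (it assumes the corpus unpacks into gzipped JSONL shards, with the quantized SPLADE term weights stored inside each JSON document):

```bash
# Print the first 300 characters of the first record of one shard:
zcat "$(find collections/${download_corpus} -type f -name '*.gz' | head -1)" \
  | head -1 | cut -c1-300
```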
## Retrieval
Topics and qrels are stored in [anserini-tools](https://github.com/castorini/anserini-tools/tree/master/topics-and-qrels), which is linked to the Anserini repo as a submodule.
The regression experiments here evaluate on the 76 topics for which NIST has provided _inferred_ judgments as part of the [TREC 2022 Deep Learning Track](https://trec.nist.gov/data/deep2022.html), but projected over to the V2.1 version of the corpus.
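As a quick sanity check, you can count the judged topics in the qrels file (the filename below is an assumption; adjust it to the actual file under `tools/topics-and-qrels/`):

```bash
# Should print 76 (qrels filename assumed):
cut -d' ' -f1 tools/topics-and-qrels/qrels.dl22-doc-msmarco-v2.1.txt | sort -u | wc -l
```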
After indexing has completed, you should be able to perform retrieval as follows:
```bash
${ranking_cmds}
```
Evaluation can be performed using `trec_eval`:
```bash
${eval_cmds}
```
## Effectiveness
With the above commands, you should be able to reproduce the following results:
${effectiveness}