# Anserini Regressions: BEIR (v1.0.0) — Signal-1M
**Model**: [BGE-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) with quantized flat indexes (using cached queries)
This page describes regression experiments, integrated into Anserini's regression testing framework, using the [BGE-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) model on [BEIR (v1.0.0) — Signal-1M](http://beir.ai/), as described in the following paper:
> Shitao Xiao, Zheng Liu, Peitian Zhang, and Niklas Muennighoff. [C-Pack: Packaged Resources To Advance General Chinese Embedding.](https://arxiv.org/abs/2309.07597) _arXiv:2309.07597_, 2023.
In these experiments, we are using cached queries (i.e., cached results of query encoding).
The exact configurations for these regressions are stored in [this YAML file](${yaml}).
Note that this page is automatically generated from [this template](${template}) as part of Anserini's regression pipeline, so do not modify this page directly; modify the template instead and then run `bin/build.sh` to rebuild the documentation.
From one of our Waterloo servers (e.g., `orca`), the following command will perform the complete regression, end to end:
```
python src/main/python/run_regression.py --index --verify --search --regression ${test_name}
```
All the BEIR corpora, encoded by the BGE-base-en-v1.5 model and stored in Parquet format, are available for download:
```bash
wget https://rgw.cs.uwaterloo.ca/pyserini/data/beir-v1.0.0-bge-base-en-v1.5.parquet.tar -P collections/
tar xvf collections/beir-v1.0.0-bge-base-en-v1.5.parquet.tar -C collections/
```
The tarball is 194 GB and has MD5 checksum `c279f9fc2464574b482ec53efcc1c487`.
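Before unpacking, it may be worth verifying the download against the checksum above, e.g.:
```bash
# Verify the tarball against the published MD5 checksum before unpacking.
md5sum collections/beir-v1.0.0-bge-base-en-v1.5.parquet.tar
# Expected: c279f9fc2464574b482ec53efcc1c487
```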
After downloading and unpacking the corpora, the `run_regression.py` command above should work without any issue.
## Indexing
Sample indexing command, building quantized flat indexes:
```
${index_cmds}
```
The path `/path/to/${corpus}/` should point to the corpus downloaded above.
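For orientation, the generated indexing commands above follow roughly the shape sketched below; the class, generator, and flag names here are assumptions based on Anserini's dense-vector indexing conventions, so treat the generated commands as authoritative:
```bash
# Illustrative sketch only: class, generator, and flag names are assumptions,
# not the generated regression command.
bin/run.sh io.anserini.index.IndexFlatDenseVectors \
  -threads 16 \
  -collection ParquetDenseVectorCollection \
  -input /path/to/beir-v1.0.0-signal1m.bge-base-en-v1.5.parquet \
  -generator ParquetDenseVectorDocumentGenerator \
  -index indexes/lucene-flat-int8.beir-v1.0.0-signal1m.bge-base-en-v1.5/ \
  -quantize.int8
```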
## Retrieval
Topics and qrels are stored in [anserini-tools](https://github.com/castorini/anserini-tools/tree/master/topics-and-qrels), which is linked to the Anserini repo as a submodule.
After indexing has completed, you should be able to perform retrieval as follows:
```
${ranking_cmds}
```
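As with indexing, the generated retrieval commands above are authoritative; a rough sketch of their general shape, with assumed class, topic-reader, and file names, looks like:
```bash
# Illustrative sketch only: class, topic reader, and paths are assumptions.
bin/run.sh io.anserini.search.SearchFlatDenseVectors \
  -index indexes/lucene-flat-int8.beir-v1.0.0-signal1m.bge-base-en-v1.5/ \
  -topics tools/topics-and-qrels/topics.beir-v1.0.0-signal1m.test.bge-base-en-v1.5.jsonl.gz \
  -topicReader JsonStringVector \
  -output runs/run.beir-v1.0.0-signal1m.bge-flat-int8-cached.txt \
  -hits 1000 -removeQuery -threads 16
```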
Evaluation can be performed using `trec_eval`:
```
${eval_cmds}
```
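For example, nDCG@10 for this collection can be computed along the following lines; the qrels path assumes the anserini-tools layout and the run file name matches the retrieval sketch above, so both are illustrative:
```bash
# Illustrative only: paths are assumptions; use the generated evaluation commands above.
bin/trec_eval -c -m ndcg_cut.10 \
  tools/topics-and-qrels/qrels.beir-v1.0.0-signal1m.test.txt \
  runs/run.beir-v1.0.0-signal1m.bge-flat-int8-cached.txt
```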
## Effectiveness
With the above commands, you should be able to reproduce the following results:
${effectiveness}
The above figures are from running brute-force search with cached queries on non-quantized flat indexes.
With cached queries on quantized flat indexes, observed results may differ slightly (typically lower), but scores should generally be within 0.004 of the results reported above (with some outliers).
Note that quantization is non-deterministic due to sampling (i.e., results may differ slightly between trials).