Correct dataset definition
README.md
---
language:
- en
---

# Model Card for `passage-ranker-v1-XS-en`

This model is a passage ranker developed by Sinequa. It produces a relevance score given a query-passage pair and is
used to order search results.

Model name: `passage-ranker-v1-XS-en`
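
Since the ranker is a cross-encoder, the query and a candidate passage are scored together as a single input sequence.
Below is a minimal sketch of that scoring loop using the Hugging Face `transformers` API. The model identifier is
illustrative and the checkpoint is assumed to be transformers-compatible; inside Sinequa the model is served through
the product's own ONNX pipeline.

```python
# Minimal sketch: ranking passages with a cross-encoder (illustrative
# identifier; inside Sinequa the model runs in the product's ONNX pipeline).
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "sinequa/passage-ranker-v1-XS-en"  # assumed, for illustration
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id).eval()

query = "how are search results ordered?"
passages = [
    "A passage ranker scores each query-passage pair and sorts the results.",
    "The Eiffel Tower is 330 metres tall.",
]

# Query and passage are encoded together as one sequence pair per candidate.
batch = tokenizer([query] * len(passages), passages,
                  padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**batch).logits

# Depending on the classification head, the relevance score is either a single
# logit or the probability of the "relevant" class.
scores = logits.squeeze(-1) if logits.shape[-1] == 1 else logits.softmax(-1)[:, -1]
for passage, score in sorted(zip(passages, scores.tolist()),
                             key=lambda p: p[1], reverse=True):
    print(f"{score:.3f}  {passage}")
```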

## Supported Languages

The model was trained and tested in the following languages:

- English

## Scores

| Metric              | Value |
|:--------------------|------:|
| Relevance (NDCG@10) | 0.438 |

Note that the relevance score is computed as an average over 14 retrieval datasets (see
[details below](#evaluation-metrics)).

## Inference Times

| GPU        | Batch size 32 |
|:-----------|--------------:|
| NVIDIA A10 |          8 ms |
| NVIDIA T4  |         20 ms |

The inference times measure only the time the model takes to process a single batch; they do not include pre- or
post-processing steps such as tokenization.
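
Per-batch latencies like these can be reproduced with a simple warm-up-then-time loop around the forward pass alone.
The sketch below assumes an ONNX export of the ranker; the file name, input names, and sequence length are
illustrative, not the shipped artifact.

```python
# Minimal sketch for measuring per-batch latency of an ONNX cross-encoder
# (file name, input names, and sequence length are assumptions).
import time
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("model.onnx", providers=["CUDAExecutionProvider"])

batch_size, seq_len = 32, 256
inputs = {
    "input_ids": np.ones((batch_size, seq_len), dtype=np.int64),
    "attention_mask": np.ones((batch_size, seq_len), dtype=np.int64),
    "token_type_ids": np.zeros((batch_size, seq_len), dtype=np.int64),
}

# Warm up, then time the forward pass only (no tokenization, no post-processing).
for _ in range(10):
    session.run(None, inputs)
runs = 100
start = time.perf_counter()
for _ in range(runs):
    session.run(None, inputs)
print(f"{(time.perf_counter() - start) / runs * 1000:.1f} ms per batch")
```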

## Requirements

- Minimal Sinequa version: 11.10.0
- GPU memory usage: 170 MiB

Note that the GPU memory usage figure only covers how much GPU memory the model itself consumes on an NVIDIA T4 GPU
with a batch size of 32. It does not include the fixed amount of memory that the ONNX Runtime consumes upon
initialization, which can be around 0.5 to 1 GiB depending on the GPU used.
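
One way to separate the model's own footprint from the runtime's fixed overhead is to read the device's used memory
before and after session creation, and again after the first batch. The sketch below uses NVML via `pynvml`; the file
name and input shapes are assumptions, and the numbers will vary by GPU and driver.

```python
# Minimal sketch: before/after NVML measurements to attribute GPU memory
# (file name and input shapes are assumptions; results vary by GPU/driver).
import numpy as np
import onnxruntime as ort
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)

def used_mib() -> float:
    return pynvml.nvmlDeviceGetMemoryInfo(handle).used / 2**20

baseline = used_mib()
session = ort.InferenceSession("model.onnx", providers=["CUDAExecutionProvider"])
after_init = used_mib()  # includes the runtime's fixed initialization cost

# Run one batch of 32 so weights, activations, and workspace are all allocated.
batch = {arg.name: np.ones((32, 256), dtype=np.int64) for arg in session.get_inputs()}
session.run(None, batch)
print(f"runtime + model after init: {after_init - baseline:.0f} MiB")
print(f"total after one batch:      {used_mib() - baseline:.0f} MiB")
```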

## Model Details

### Overview

- Number of parameters: 11 million
- Base language model: [English BERT-Mini](https://huggingface.co/google/bert_uncased_L-4_H-256_A-4)
- Insensitive to casing and accents
- Training procedure: [MonoBERT](https://arxiv.org/abs/1901.04085), sketched below
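
MonoBERT casts ranking as pointwise binary classification: each query-passage pair is encoded as one sequence, and a
small classification head on top of the encoder is trained with cross-entropy against relevant/non-relevant labels.
Below is a minimal sketch of one such training step on the BERT-Mini backbone; the examples and hyperparameters are
illustrative, not Sinequa's actual training recipe.

```python
# Minimal sketch of a MonoBERT-style pointwise training step (illustrative
# data and hyperparameters; not Sinequa's actual recipe).
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

backbone = "google/bert_uncased_L-4_H-256_A-4"  # English BERT-Mini
tokenizer = AutoTokenizer.from_pretrained(backbone)
model = AutoModelForSequenceClassification.from_pretrained(backbone, num_labels=2)
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)

# One relevant and one non-relevant query-passage pair.
queries = ["what is the capital of france?"] * 2
passages = ["Paris is the capital of France.", "Bananas are rich in potassium."]
labels = torch.tensor([1, 0])  # 1 = relevant, 0 = non-relevant

batch = tokenizer(queries, passages, padding=True, truncation=True,
                  return_tensors="pt")
loss = model(**batch, labels=labels).loss  # cross-entropy over the two classes
loss.backward()
optimizer.step()
optimizer.zero_grad()
```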

### Training Data

- Probably-Asked Questions
  ([Paper](https://arxiv.org/abs/2102.07033),
  [Official Page](https://github.com/facebookresearch/PAQ))

### Evaluation Metrics

To determine the relevance score, we averaged the results that we obtained when evaluating on the datasets of the
[BEIR benchmark](https://github.com/beir-cellar/beir). Note that all these datasets are in English.

| Dataset           | NDCG@10 |
|:------------------|--------:|
| Average           |   0.438 |
|                   |         |
| Arguana           |   0.524 |
| CLIMATE-FEVER     |   0.150 |
| DBPedia Entity    |   0.338 |
| FEVER             |   0.706 |
| FiQA-2018         |   0.269 |
| HotpotQA          |   0.630 |
| MS MARCO          |   0.328 |
| NFCorpus          |   0.340 |
| NQ                |   0.429 |
| Quora             |   0.722 |
| SCIDOCS           |   0.141 |
| SciFact           |   0.627 |
| TREC-COVID        |   0.628 |
| Webis-Touche-2020 |   0.306 |
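
For reference, the 14 per-dataset values above average to 0.438, the relevance score reported earlier. BEIR computes
NDCG@10 via `pytrec_eval`; the sketch below is an illustrative standalone version of the metric, with the simplifying
assumption that the ideal ranking can be derived from the retrieved list's own relevance grades.

```python
# Minimal sketch of NDCG@10 with graded gains (illustrative; BEIR reports the
# metric via pytrec_eval, and the ideal ranking is simplified here).
import math

def dcg_at_k(relevances: list[float], k: int = 10) -> float:
    return sum((2**rel - 1) / math.log2(i + 2) for i, rel in enumerate(relevances[:k]))

def ndcg_at_k(relevances: list[float], k: int = 10) -> float:
    ideal = dcg_at_k(sorted(relevances, reverse=True), k)
    return dcg_at_k(relevances, k) / ideal if ideal > 0 else 0.0

# Relevance grades of the top-ranked passages, in ranked order.
print(ndcg_at_k([3, 2, 0, 1, 0, 0, 0, 0, 0, 0]))  # 1.0 only for a perfect ordering
```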