skirres committed on
Commit
d858640
1 Parent(s): 64bde91

Model sources and card

Files changed (4)
  1. README.md +86 -0
  2. config.json +24 -0
  3. pytorch_model.bin +3 -0
  4. tokenizer.json +0 -0
README.md ADDED
@@ -0,0 +1,86 @@
+ ---
+ language:
+ - en
+ ---
+
+ # Model Card for `passage-ranker.chocolate`
+
+ This model is a passage ranker developed by Sinequa. It produces a relevance score given a query-passage pair and is
+ used to order search results.
+
+ Model name: `passage-ranker.chocolate`
+
+ ## Supported Languages
+
+ The model was trained and tested in the following languages:
+
+ - English
+
+ ## Scores
+
+ | Metric              | Value |
+ |:--------------------|------:|
+ | Relevance (NDCG@10) | 0.484 |
+
+ Note that the relevance score is computed as an average over 14 retrieval datasets (see
+ [details below](#evaluation-metrics)).
+
+ ## Inference Times
+
+ | GPU        | Batch size 32 |
+ |:-----------|--------------:|
+ | NVIDIA A10 | 22 ms         |
+ | NVIDIA T4  | 64 ms         |
+
+ The inference times only measure the time the model takes to process a single batch; they do not include pre- or
+ post-processing steps such as tokenization.
+
+ ## Requirements
+
+ - Minimum Sinequa version: 11.10.0
+ - GPU memory usage: 550 MiB
+
+ Note that the GPU memory usage only covers how much GPU memory the actual model consumes on an NVIDIA T4 GPU with a
+ batch size of 32. It does not include the fixed amount of memory that the ONNX Runtime consumes upon initialization,
+ which can be around 0.5 to 1 GiB depending on the GPU used.
+
+ ## Model Details
+
+ ### Overview
+
+ - Number of parameters: 23 million
+ - Base language model: [MiniLM-L6-H384-uncased](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased)
+   ([Paper](https://arxiv.org/abs/2002.10957), [GitHub](https://github.com/microsoft/unilm/tree/master/minilm))
+ - Insensitive to casing and accents
+ - Training procedure: [MonoBERT](https://arxiv.org/abs/1901.04085)
+
+ ### Training Data
+
+ - MS MARCO Passage Ranking
+   ([Paper](https://arxiv.org/abs/1611.09268),
+   [Official Page](https://microsoft.github.io/msmarco/),
+   [dataset on HF hub](https://huggingface.co/datasets/unicamp-dl/mmarco))
+
+ ### Evaluation Metrics
+
+ To determine the relevance score, we averaged the results that we obtained when evaluating on the datasets of the
+ [BEIR benchmark](https://github.com/beir-cellar/beir). Note that all these datasets are in English.
+
+ | Dataset           | NDCG@10 |
+ |:------------------|--------:|
+ | Average           |   0.486 |
+ |                   |         |
+ | Arguana           |   0.554 |
+ | CLIMATE-FEVER     |   0.209 |
+ | DBPedia Entity    |   0.367 |
+ | FEVER             |   0.744 |
+ | FiQA-2018         |   0.339 |
+ | HotpotQA          |   0.685 |
+ | MS MARCO          |   0.412 |
+ | NFCorpus          |   0.352 |
+ | NQ                |   0.454 |
+ | Quora             |   0.818 |
+ | SCIDOCS           |   0.158 |
+ | SciFact           |   0.658 |
+ | TREC-COVID        |   0.674 |
+ | Webis-Touche-2020 |   0.345 |
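
The NDCG@10 metric reported in this card can be computed with a short helper. A minimal sketch of the standard definition (the graded-relevance list below is an illustrative toy example, not BEIR data):

```python
import math

def dcg_at_k(relevances, k=10):
    # Discounted cumulative gain over the top-k ranked results:
    # each graded relevance is discounted by log2 of its rank (+2 offset).
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances[:k]))

def ndcg_at_k(relevances, k=10):
    # Normalize by the DCG of the ideal (descending-relevance) ordering.
    ideal_dcg = dcg_at_k(sorted(relevances, reverse=True), k)
    return dcg_at_k(relevances, k) / ideal_dcg if ideal_dcg > 0 else 0.0

# Toy example: graded relevance of results in the order the ranker returned them.
ranked = [3, 2, 3, 0, 1, 2]
print(round(ndcg_at_k(ranked, k=10), 3))  # → 0.961
```

The card's relevance score is then the plain average of this per-dataset NDCG@10 over the listed BEIR datasets.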
config.json ADDED
@@ -0,0 +1,24 @@
+ {
+   "architectures": [
+     "BertForSequenceClassification"
+   ],
+   "attention_probs_dropout_prob": 0.1,
+   "classifier_dropout": null,
+   "gradient_checkpointing": false,
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 384,
+   "initializer_range": 0.02,
+   "intermediate_size": 1536,
+   "layer_norm_eps": 1e-12,
+   "max_position_embeddings": 512,
+   "model_type": "bert",
+   "num_attention_heads": 12,
+   "num_hidden_layers": 6,
+   "pad_token_id": 0,
+   "position_embedding_type": "absolute",
+   "transformers_version": "4.25.1",
+   "type_vocab_size": 2,
+   "use_cache": true,
+   "vocab_size": 30522
+ }
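
The card's "23 million parameters" figure can be sanity-checked from this config. A rough back-of-the-envelope estimate for the BERT encoder (ignoring the pooler and classification head, which add comparatively little):

```python
# Hyperparameters copied from the config.json above.
vocab_size, hidden, layers = 30522, 384, 6
max_pos, type_vocab, inter = 512, 2, 1536

# Embedding tables (word, position, token-type) plus their LayerNorm.
embeddings = (vocab_size + max_pos + type_vocab) * hidden + 2 * hidden

# Per encoder layer: Q/K/V/output projections, the feed-forward
# sublayer, and two LayerNorms (weights and biases throughout).
attention = 4 * (hidden * hidden + hidden)
ffn = hidden * inter + inter + inter * hidden + hidden
layer = attention + ffn + 2 * 2 * hidden

total = embeddings + layers * layer
print(f"{total / 1e6:.1f}M")  # → 22.6M, consistent with "23 million"
```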
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:86393158c095eeeb286403c07797c391bed615a4cb70983fbb3640199f377c2e
+ size 90893101
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff