Model sources and card
- 1_Pooling/config.json +7 -0
- README.md +84 -0
- config.json +24 -0
- modules.json +20 -0
- pytorch_model.bin +3 -0
- reduction_layer.bin +3 -0
- tokenizer.json +0 -0
1_Pooling/config.json
ADDED
@@ -0,0 +1,7 @@
```json
{
    "word_embedding_dimension": 512,
    "pooling_mode_cls_token": false,
    "pooling_mode_mean_tokens": true,
    "pooling_mode_max_tokens": false,
    "pooling_mode_mean_sqrt_len_tokens": false
}
```
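This pooling configuration enables mean pooling only: the final text embedding is the average of the 512-dimensional token embeddings over non-padding positions. Below is a minimal PyTorch sketch of that operation (tensor shapes as produced by Hugging Face `transformers`; this is an illustration, not Sinequa's internal code):

```python
import torch

def mean_pool(token_embeddings: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    """Average the token embeddings over non-padding positions.

    token_embeddings: (batch, seq_len, 512); attention_mask: (batch, seq_len).
    """
    mask = attention_mask.unsqueeze(-1).float()    # (batch, seq_len, 1)
    summed = (token_embeddings * mask).sum(dim=1)  # zero out padding, then sum
    counts = mask.sum(dim=1).clamp(min=1e-9)       # avoid division by zero
    return summed / counts                         # (batch, 512)
```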
README.md
ADDED
@@ -0,0 +1,84 @@
```yaml
---
language:
- en
---
```

# Model Card for `vectorizer-v1-S-en`
This model is a vectorizer developed by Sinequa. It produces an embedding vector given a passage or a query. The passage vectors are stored in our vector index, and the query vector is used at query time to look up relevant passages in the index.

Model name: `vectorizer-v1-S-en`
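For illustration, here is a hedged retrieval sketch using the `sentence-transformers` library, for which this repository ships a `modules.json`. The local path is a placeholder, and note that the 512→256 reduction layer is stored separately in `reduction_layer.bin`, so a plain load like this may not apply it; the supported execution path is inside Sinequa.

```python
from sentence_transformers import SentenceTransformer, util

# Placeholder path to a local checkout of this repository (illustrative only).
model = SentenceTransformer("path/to/vectorizer-v1-S-en")

passages = ["Paris is the capital of France.", "The Nile is a river in Africa."]
query = "What is the capital of France?"

passage_emb = model.encode(passages, convert_to_tensor=True)  # indexed offline
query_emb = model.encode(query, convert_to_tensor=True)       # computed at query time

scores = util.cos_sim(query_emb, passage_emb)  # similarity used to rank passages
print(scores)
```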
## Supported Languages

The model was trained and tested in the following languages:

- English
## Scores

| Metric                  | Value |
|:------------------------|------:|
| Relevance (Recall@100)  | 0.456 |

Note that the relevance score is computed as an average over 14 retrieval datasets (see [details below](#evaluation-metrics)).
28 |
+
|
29 |
+
## Inference Times
|
30 |
+
|
31 |
+
| GPU | Batch size 1 (at query time) | Batch size 32 (at indexing) |
|
32 |
+
|:-----------|-----------------------------:|----------------------------:|
|
33 |
+
| NVIDIA A10 | 2 ms | 14 ms |
|
34 |
+
| NVIDIA T4 | 4 ms | 52 ms |
|
35 |
+
|
36 |
+
The inference times only measure the time the model takes to process a single batch, it does not include pre- or
|
37 |
+
post-processing steps like the tokenization.
|
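As a rough illustration of how such per-batch numbers can be measured (excluding tokenization, as stated above), one might time the bare forward pass. This sketch uses PyTorch rather than the ONNX Runtime setup the card's figures presumably come from, so absolute numbers will differ; paths are placeholders:

```python
import time
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("path/to/vectorizer-v1-S-en")
model = AutoModel.from_pretrained("path/to/vectorizer-v1-S-en").cuda().eval()

batch = tokenizer(["some passage text"] * 32, padding=True, return_tensors="pt").to("cuda")
with torch.inference_mode():
    for _ in range(10):               # warm-up iterations
        model(**batch)
    torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(100):              # timed iterations
        model(**batch)
    torch.cuda.synchronize()
print(f"{(time.perf_counter() - start) / 100 * 1000:.1f} ms per batch")
```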
38 |
+
|
39 |
+
## Requirements
|
40 |
+
|
41 |
+
- Minimal Sinequa version: 11.10.0
|
42 |
+
- GPU memory usage: 330 MiB
|
43 |
+
|
44 |
+
Note that GPU memory usage only includes how much GPU memory the actual model consumes on an NVIDIA T4 GPU with a batch
|
45 |
+
size of 32. It does not include the fix amount of memory that is consumed by the ONNX Runtime upon initialization which
|
46 |
+
can be around 0.5 to 1 GiB depending on the used GPU.
|
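A hedged sketch of how one could approximate the model-only GPU memory figure (again with PyTorch for illustration; the product itself runs the model through ONNX Runtime, so the measured value will not match exactly):

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Peak allocation covers weights plus activations for one batch of 32.
torch.cuda.reset_peak_memory_stats()
tokenizer = AutoTokenizer.from_pretrained("path/to/vectorizer-v1-S-en")
model = AutoModel.from_pretrained("path/to/vectorizer-v1-S-en").cuda().eval()
batch = tokenizer(["some passage"] * 32, padding=True, return_tensors="pt").to("cuda")
with torch.inference_mode():
    model(**batch)
print(f"{torch.cuda.max_memory_allocated() / 2**20:.0f} MiB peak")
```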
## Model Details

### Overview

- Number of parameters: 29 million
- Base language model: [English BERT-Small](https://huggingface.co/google/bert_uncased_L-4_H-512_A-8)
- Insensitive to casing and accents
- Output dimensions: 256 (reduced with an additional dense layer; see the sketch below)
- Training procedure: TBD
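The reduction mentioned above is plausibly the separately shipped `reduction_layer.bin`: its 526 KB size is consistent with a single 512×256 dense layer (512·256·4 bytes of weights plus bias and serialization overhead). A sketch of applying such a layer, assuming the file is a PyTorch state dict for one `nn.Linear`; that assumption, and whether L2 normalization happens before or after the reduction, are not documented here:

```python
import torch

# Assumption: reduction_layer.bin holds a state dict for a 512 -> 256 linear layer.
reduction = torch.nn.Linear(512, 256)
reduction.load_state_dict(torch.load("reduction_layer.bin", map_location="cpu"))

pooled = torch.randn(1, 512)                  # stand-in for a mean-pooled vector
embedding = reduction(pooled)                 # final 256-dim embedding
embedding = torch.nn.functional.normalize(embedding, dim=-1)  # unit length (ordering assumed)
print(embedding.shape)  # torch.Size([1, 256])
```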
### Training Data

TBD
### Evaluation Metrics

To determine the relevance score, we averaged the results that we obtained when evaluating on the datasets of the [BEIR benchmark](https://github.com/beir-cellar/beir). Note that all these datasets are in English.

| Dataset           | Recall@100 |
|:------------------|-----------:|
| Average           |      0.456 |
|                   |            |
| Arguana           |      0.832 |
| CLIMATE-FEVER     |      0.342 |
| DBPedia Entity    |      0.299 |
| FEVER             |      0.660 |
| FiQA-2018         |      0.301 |
| HotpotQA          |      0.434 |
| MS MARCO          |      0.610 |
| NFCorpus          |      0.159 |
| NQ                |      0.671 |
| Quora             |      0.966 |
| SCIDOCS           |      0.194 |
| SciFact           |      0.592 |
| TREC-COVID        |      0.037 |
| Webis-Touche-2020 |      0.285 |
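Recall@100, as used in BEIR, is the fraction of a query's relevant documents that appear among the top 100 retrieved results, averaged over queries (and, for the table's "Average" row, over the 14 datasets). A minimal sketch of the per-query metric follows; for numbers comparable to the table, BEIR's official evaluation tooling should be used:

```python
def recall_at_k(retrieved: list[str], relevant: set[str], k: int = 100) -> float:
    """Fraction of the relevant document ids found in the top-k retrieved ids."""
    if not relevant:
        return 0.0
    hits = len(set(retrieved[:k]) & relevant)
    return hits / len(relevant)

# Averaged over all queries of a dataset, then over the 14 BEIR datasets above.
```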
config.json
ADDED
@@ -0,0 +1,24 @@
```json
{
    "architectures": [
        "BertModel"
    ],
    "attention_probs_dropout_prob": 0.1,
    "classifier_dropout": null,
    "hidden_act": "gelu",
    "hidden_dropout_prob": 0.1,
    "hidden_size": 512,
    "initializer_range": 0.02,
    "intermediate_size": 2048,
    "layer_norm_eps": 1e-12,
    "max_position_embeddings": 512,
    "model_type": "bert",
    "num_attention_heads": 8,
    "num_hidden_layers": 4,
    "pad_token_id": 0,
    "position_embedding_type": "absolute",
    "torch_dtype": "float32",
    "transformers_version": "4.15.0.dev0",
    "type_vocab_size": 2,
    "use_cache": true,
    "vocab_size": 30522
}
```
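As a sanity check, instantiating this architecture (4 layers, hidden size 512, 8 attention heads, 30,522-token vocabulary) and counting parameters lands at roughly the 29 million stated in the model card. A sketch, with a placeholder path:

```python
from transformers import AutoConfig, AutoModel

config = AutoConfig.from_pretrained("path/to/vectorizer-v1-S-en")
model = AutoModel.from_config(config)  # random weights; config only

# Roughly 29M parameters for BERT-Small (L=4, H=512).
print(f"{sum(p.numel() for p in model.parameters()) / 1e6:.1f}M parameters")
```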
modules.json
ADDED
@@ -0,0 +1,20 @@
```json
[
    {
        "idx": 0,
        "name": "0",
        "path": "",
        "type": "sentence_transformers.models.Transformer"
    },
    {
        "idx": 1,
        "name": "1",
        "path": "1_Pooling",
        "type": "sentence_transformers.models.Pooling"
    },
    {
        "idx": 2,
        "name": "2",
        "path": "2_Normalize",
        "type": "sentence_transformers.models.Normalize"
    }
]
```
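This file tells `sentence-transformers` to chain three modules: the BERT encoder, mean pooling (configured in `1_Pooling/config.json`), and L2 normalization. An equivalent explicit construction, sketched with placeholder paths:

```python
from sentence_transformers import SentenceTransformer, models

word = models.Transformer("path/to/vectorizer-v1-S-en", max_seq_length=512)
pool = models.Pooling(word.get_word_embedding_dimension(), pooling_mode="mean")
norm = models.Normalize()
model = SentenceTransformer(modules=[word, pool, norm])

vec = model.encode("a short query")
print(vec.shape)  # (512,) before the externally stored 512 -> 256 reduction layer
```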
pytorch_model.bin
ADDED
@@ -0,0 +1,3 @@
```
version https://git-lfs.github.com/spec/v1
oid sha256:f52fb2264f3f0fcd81ee95ffb0f80eaefb424981157c3e999d19753c73d37933
size 115084877
```
reduction_layer.bin
ADDED
@@ -0,0 +1,3 @@
```
version https://git-lfs.github.com/spec/v1
oid sha256:5307a192d51eb598680ed05795a0836b3108ff596e9c3595a7e39246feada790
size 526311
```
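Both `.bin` entries are Git LFS pointers rather than the weights themselves; after fetching the real files (e.g. with `git lfs pull`), they can be checked against the recorded digests. A small verification sketch:

```python
import hashlib

def sha256_of(path: str) -> str:
    """Stream a file through sha256 in 1 MiB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Expected digests come from the LFS pointers above.
assert sha256_of("pytorch_model.bin").startswith("f52fb2264f3f")
assert sha256_of("reduction_layer.bin").startswith("5307a192d51e")
```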
tokenizer.json
ADDED
The diff for this file is too large to render.