---
pipeline_tag: sentence-similarity
tags:
- feature-extraction
- sentence-similarity
language:
  - en
---

# Model Card for `vectorizer.vanilla`

This model is a vectorizer developed by Sinequa. It produces an embedding vector given a passage or a query. The
passage vectors are stored in our vector index and the query vector is used at query time to look up relevant passages
in the index.
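
The lookup step can be sketched as a cosine-similarity search over stored passage vectors. This is a minimal illustration with random vectors, not the Sinequa API; the function name and toy data are hypothetical.

```python
import numpy as np

def top_k_passages(query_vec, passage_vecs, k=3):
    """Return indices of the k passages most similar to the query.

    Cosine similarity between L2-normalized vectors reduces to a dot product.
    """
    q = query_vec / np.linalg.norm(query_vec)
    p = passage_vecs / np.linalg.norm(passage_vecs, axis=1, keepdims=True)
    scores = p @ q
    return np.argsort(scores)[::-1][:k]

# Toy example with 256-dimensional vectors (the model's output size).
rng = np.random.default_rng(0)
passages = rng.normal(size=(10, 256))
query = passages[4] + 0.01 * rng.normal(size=256)  # a query close to passage 4
print(top_k_passages(query, passages, k=1))
```

In production the passage vectors are precomputed at indexing time, so only the query embedding and the dot products are computed at query time.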

Model name: `vectorizer.vanilla`

## Supported Languages

The model was trained and tested in the following languages:

- English

## Scores

| Metric                 | Value |
|:-----------------------|------:|
| Relevance (Recall@100) | 0.639 |

Note that the relevance score is computed as an average over 14 retrieval datasets (see
[details below](#evaluation-metrics)).

## Inference Times

| GPU        | Batch size 1 (at query time) | Batch size 32 (at indexing) |
|:-----------|-----------------------------:|----------------------------:|
| NVIDIA A10 |                         2 ms |                       19 ms |
| NVIDIA T4  |                         4 ms |                       53 ms |

The inference times measure only the time the model takes to process a single batch; they do not include pre- or
post-processing steps such as tokenization.
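
A per-batch latency of this kind can be measured as sketched below. The warm-up iterations and the stand-in model are illustrative assumptions, not the benchmark setup used for the numbers above.

```python
import time

def time_batch(fn, batch, warmup=3, iters=20):
    """Median wall-clock time (ms) for fn(batch), excluding tokenization.

    Warm-up iterations let caches and (on GPU) kernels settle before measuring.
    """
    for _ in range(warmup):
        fn(batch)
    times = []
    for _ in range(iters):
        start = time.perf_counter()
        fn(batch)
        times.append((time.perf_counter() - start) * 1000.0)
    times.sort()
    return times[len(times) // 2]

# Stand-in "model": a trivial function over a batch of token-id lists.
def fake_model(batch):
    return [len(x) for x in batch]

print(round(time_batch(fake_model, [[1, 2, 3]] * 32), 3))
```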

## Requirements

- Minimal Sinequa version: 11.10.0
- GPU memory usage: 330 MiB

Note that the GPU memory usage covers only what the model itself consumes on an NVIDIA T4 GPU with a batch size of 32.
It does not include the fixed amount of memory that the ONNX Runtime consumes upon initialization, which can be around
0.5 to 1 GiB depending on the GPU used.

## Model Details

### Overview

- Number of parameters: 23 million
- Base language model: [English MiniLM-L6-H384](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased)
- Insensitive to casing and accents
- Output dimensions: 256 (reduced with an additional dense layer)
- Training procedure: query-passage-negative triplets for datasets with mined hard negatives, query-passage pairs for the rest; the number of negatives is further augmented with an in-batch negatives strategy.
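
The in-batch negatives idea can be sketched as an InfoNCE-style loss where, for each query in the batch, its paired passage is the positive and every other passage in the batch serves as a negative. This is a simplified NumPy illustration of the general technique, not the exact training code; the temperature value is an assumption.

```python
import numpy as np

def in_batch_negatives_loss(q, p, temperature=0.05):
    """InfoNCE-style loss: for query i, passage i is the positive and the
    other passages in the batch act as negatives.

    q, p: arrays of shape (batch, dim), assumed L2-normalized.
    """
    scores = (q @ p.T) / temperature  # (batch, batch) similarity matrix
    # Row-wise log-softmax; the correct "class" for row i is column i.
    scores -= scores.max(axis=1, keepdims=True)
    log_probs = scores - np.log(np.exp(scores).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

rng = np.random.default_rng(0)
q = rng.normal(size=(4, 8))
q /= np.linalg.norm(q, axis=1, keepdims=True)
random_p = rng.normal(size=(4, 8))
random_p /= np.linalg.norm(random_p, axis=1, keepdims=True)
# Passages aligned with their queries should give a much lower loss
# than randomly chosen passages.
print(in_batch_negatives_loss(q, q), in_batch_negatives_loss(q, random_p))
```

Reusing the other passages in the batch as negatives multiplies the number of negatives per query without any extra encoding cost, which is why the strategy is common for training retrieval models.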

### Training Data

The model has been trained using all datasets that are cited in the [all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) model card.

### Evaluation Metrics

To determine the relevance score, we averaged the results that we obtained when evaluating on the datasets of the
[BEIR benchmark](https://github.com/beir-cellar/beir). Note that all these datasets are in English.

| Dataset           | Recall@100 |
|:------------------|-----------:|
| Average           |      0.639 |
|                   |            |
| Arguana           |      0.969 |
| CLIMATE-FEVER     |      0.509 |
| DBPedia Entity    |      0.409 |
| FEVER             |      0.839 |
| FiQA-2018         |      0.702 |
| HotpotQA          |      0.609 |
| MS MARCO          |      0.849 |
| NFCorpus          |      0.315 |
| NQ                |      0.786 |
| Quora             |      0.995 |
| SCIDOCS           |      0.497 |
| SciFact           |      0.911 |
| TREC-COVID        |      0.129 |
| Webis-Touche-2020 |      0.427 |
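
For reference, Recall@k measures the fraction of relevant documents that appear among the top-k retrieved results, averaged over queries. The sketch below shows the per-query computation in its simplest form (some BEIR datasets use a capped variant); the document identifiers are made up.

```python
def recall_at_k(retrieved, relevant, k=100):
    """Fraction of relevant documents found in the top-k retrieved list."""
    hits = sum(1 for doc in retrieved[:k] if doc in relevant)
    return hits / len(relevant)

# Toy example: 2 of the 3 relevant docs appear in the top-5 results.
print(recall_at_k(["d1", "d9", "d3", "d7", "d2"], {"d1", "d2", "d4"}, k=5))
```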