---
pipeline_tag: sentence-similarity
tags:
  - feature-extraction
  - sentence-similarity
language:
  - en
---

# Model Card for vectorizer-v1-S-en

This model is a vectorizer developed by Sinequa. It produces an embedding vector given a passage or a query. The passage vectors are stored in our vector index and the query vector is used at query time to look up relevant passages in the index.

**Model name:** vectorizer-v1-S-en
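
As a rough illustration of the indexing / query flow described above, the sketch below embeds two passages and a query and ranks the passages by cosine similarity. The checkpoint id, pooling strategy, and normalization are assumptions (this card does not document the loading API), and the sketch skips the additional dense layer that reduces the output to 256 dimensions.

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Assumed checkpoint id and pooling strategy -- not confirmed by this card.
tokenizer = AutoTokenizer.from_pretrained("sinequa/vectorizer-v1-S-en")
model = AutoModel.from_pretrained("sinequa/vectorizer-v1-S-en")

def embed(texts):
    # Tokenize and run the encoder; mean-pool the last hidden state over
    # non-padding tokens to get one vector per text. Note: this skips the
    # dense layer that projects to 256 dimensions in the deployed model.
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state          # (B, T, H)
    mask = batch["attention_mask"].unsqueeze(-1)           # (B, T, 1)
    vectors = (hidden * mask).sum(1) / mask.sum(1)         # (B, H)
    return torch.nn.functional.normalize(vectors, dim=-1)  # unit length

passages = ["Paris is the capital of France.",
            "The Amazon is the largest rainforest."]
index = embed(passages)                # stored at indexing time
query = embed(["capital of France"])   # computed at query time
scores = query @ index.T               # cosine similarity (unit vectors)
print(passages[scores.argmax().item()])
```

In a Sinequa deployment the passage vectors live in the vector index; here a plain matrix product stands in for the index lookup.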

## Supported Languages

The model was trained and tested in the following languages:

- English

## Scores

| Metric                  | Value |
|-------------------------|-------|
| Relevance (Recall@100)  | 0.456 |

Note that the relevance score is computed as an average over 14 retrieval datasets (see details below).

## Inference Times

| GPU        | Quantization type | Batch size 1 | Batch size 32 |
|------------|-------------------|--------------|---------------|
| NVIDIA A10 | FP16              | 1 ms         | 4 ms          |
| NVIDIA A10 | FP32              | 2 ms         | 13 ms         |
| NVIDIA T4  | FP16              | 1 ms         | 13 ms         |
| NVIDIA T4  | FP32              | 2 ms         | 52 ms         |
| NVIDIA L4  | FP16              | 1 ms         | 5 ms          |
| NVIDIA L4  | FP32              | 2 ms         | 18 ms         |
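
Timings like these can be approximated with a small ONNX Runtime benchmark. The sketch below is one way to do it, assuming an ONNX export of the model; the file name, input names, and sequence length are hypothetical, and real latencies depend on the GPU, driver, and tokenized input lengths.

```python
import time
import numpy as np
import onnxruntime as ort

# Hypothetical ONNX export; file and input names are assumptions.
session = ort.InferenceSession("vectorizer-v1-S-en.onnx",
                               providers=["CUDAExecutionProvider"])

def bench(batch_size, seq_len=128, runs=100):
    # Dummy token ids; some BERT exports also expect "token_type_ids".
    inputs = {
        "input_ids": np.ones((batch_size, seq_len), dtype=np.int64),
        "attention_mask": np.ones((batch_size, seq_len), dtype=np.int64),
    }
    for _ in range(10):                     # warm-up
        session.run(None, inputs)
    start = time.perf_counter()
    for _ in range(runs):
        session.run(None, inputs)
    return (time.perf_counter() - start) / runs * 1000  # ms per batch

for bs in (1, 32):
    print(f"batch size {bs}: {bench(bs):.1f} ms")
```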

## GPU Memory Usage

| Quantization type | Memory  |
|-------------------|---------|
| FP16              | 300 MiB |
| FP32              | 500 MiB |

Note that GPU memory usage only includes the memory consumed by the model itself on an NVIDIA T4 GPU with a batch size of 32. It does not include the fixed amount of memory consumed by the ONNX Runtime upon initialization, which can be around 0.5 to 1 GiB depending on the GPU used.

## Requirements

- Minimal Sinequa version: 11.10.0
- Minimal Sinequa version for using FP16 models and GPUs with CUDA compute capability of 8.9+ (like NVIDIA L4): 11.11.0
- CUDA compute capability: above 5.0 (above 6.0 for FP16 use)

## Model Details

### Overview

- Number of parameters: 29 million
- Base language model: English BERT-Small
- Insensitive to casing and accents
- Output dimensions: 256 (reduced with an additional dense layer)
- Training procedure: A first model was trained on query-passage pairs, using the in-batch negatives strategy with this loss. A second model was then trained on query-passage-negative triplets, with negatives mined from the first model, as in a variant of ANCE but with different hyperparameters (a generic sketch of in-batch negatives follows this list).
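
For readers unfamiliar with the in-batch negatives strategy, it is commonly implemented as a cross-entropy over the query-passage similarity matrix: each query's positive passage sits on the diagonal, and the other passages in the batch act as negatives. The sketch below shows that generic formulation; it is not necessarily the exact loss this model was trained with.

```python
import torch
import torch.nn.functional as F

def in_batch_negatives_loss(q, p, temperature=0.05):
    """Generic in-batch negatives loss (not necessarily the exact one used).

    q: (B, D) query embeddings; p: (B, D) passage embeddings,
    where p[i] is the positive passage for q[i].
    """
    q = F.normalize(q, dim=-1)
    p = F.normalize(p, dim=-1)
    sim = q @ p.T / temperature          # (B, B) similarity matrix
    labels = torch.arange(q.size(0))     # positives are on the diagonal
    return F.cross_entropy(sim, labels)
```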

### Training Data

The model was trained on a Sinequa-curated version of Google's Natural Questions.

### Evaluation Metrics

To determine the relevance score, we averaged the results that we obtained when evaluating on the datasets of the BEIR benchmark. Note that all these datasets are in English.

| Dataset           | Recall@100 |
|-------------------|------------|
| Average           | 0.456      |
| Arguana           | 0.832      |
| CLIMATE-FEVER     | 0.342      |
| DBPedia Entity    | 0.299      |
| FEVER             | 0.660      |
| FiQA-2018         | 0.301      |
| HotpotQA          | 0.434      |
| MS MARCO          | 0.610      |
| NFCorpus          | 0.159      |
| NQ                | 0.671      |
| Quora             | 0.966      |
| SCIDOCS           | 0.194      |
| SciFact           | 0.592      |
| TREC-COVID        | 0.037      |
| Webis-Touche-2020 | 0.285      |
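
For reference, Recall@100 is the fraction of a query's relevant documents that appear among its top 100 retrieved results, averaged over the queries of a dataset; the relevance score above is then the unweighted mean over the 14 datasets. A minimal sketch, with hypothetical data structures:

```python
def recall_at_k(ranked, relevant, k=100):
    """Fraction of relevant doc ids found in the top-k results of one query."""
    return len(set(ranked[:k]) & relevant) / len(relevant)

def dataset_recall(run, qrels, k=100):
    """Average Recall@k over the queries of one dataset.

    run:   {query_id: [doc_id, ...]}   ranked retrieval results
    qrels: {query_id: {doc_id, ...}}   relevant doc ids per query
    """
    return sum(recall_at_k(run[q], qrels[q], k) for q in qrels) / len(qrels)

# The reported relevance score is the unweighted mean of the
# per-dataset Recall@100 values listed in the table above.
```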