# Model Card for answer-finder-v1-L-multilingual
This model is a question answering model developed by Sinequa. It produces two lists of logit scores corresponding to the start token and end token of an answer.
Model name: answer-finder-v1-L-multilingual
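The two logit lists described above can be decoded into an answer span with an argmax over valid (start, end) pairs. The following is a minimal sketch of that decoding step, not Sinequa's actual code; the helper name, length cap, and toy logits are illustrative:

```python
# Hedged sketch: turning start/end logits into an answer span.
# Pick the (start, end) pair with the highest combined logit score,
# subject to start <= end and a maximum span length.
def best_span(start_logits, end_logits, max_answer_len=30):
    best_score, best_start, best_end = float("-inf"), 0, 0
    for s, s_logit in enumerate(start_logits):
        for e in range(s, min(s + max_answer_len, len(end_logits))):
            score = s_logit + end_logits[e]
            if score > best_score:
                best_score, best_start, best_end = score, s, e
    return best_start, best_end

# Toy logits for a 6-token passage where the answer spans tokens 3..5:
start_logits = [0.1, 0.2, 0.0, 5.0, 0.3, 0.1]
end_logits = [0.0, 0.1, 0.2, 0.3, 0.2, 4.8]
print(best_span(start_logits, end_logits))  # -> (3, 5)
```

The length cap and the start <= end constraint keep the search from pairing a high-scoring start with an unrelated end elsewhere in the passage.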
## Supported Languages
The model was trained and tested in the following languages:
- English
- French
- German
- Spanish
## Scores

| Metric | Value |
|---|---|
| F1 Score on SQuAD v2 EN with Hugging Face evaluation pipeline | 75 |
| F1 Score on SQuAD v2 EN with Haystack evaluation pipeline | 75 |
| F1 Score on SQuAD v2 FR with Haystack evaluation pipeline | 73.4 |
| F1 Score on SQuAD v2 DE with Haystack evaluation pipeline | 90.8 |
| F1 Score on SQuAD v2 ES with Haystack evaluation pipeline | 67.1 |
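The F1 values above are the token-overlap F1 standardly used for SQuAD-style question answering evaluation. A simplified sketch of the per-example metric (whitespace tokenization only; the official script also lowercases and strips punctuation and articles):

```python
# Hedged sketch of SQuAD-style token-overlap F1 for a single example.
from collections import Counter

def squad_f1(prediction, ground_truth):
    pred_tokens = prediction.split()
    gold_tokens = ground_truth.split()
    # Count tokens shared between prediction and ground truth.
    common = Counter(pred_tokens) & Counter(gold_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

print(round(squad_f1("in Paris France", "Paris France"), 3))  # -> 0.8
```

The benchmark numbers in the table are this per-example F1 averaged over the evaluation set (times 100).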
## Inference Time

| GPU | Quantization type | Batch size 1 | Batch size 32 |
|---|---|---|---|
| NVIDIA A10 | FP16 | 2 ms | 30 ms |
| NVIDIA A10 | FP32 | 4 ms | 83 ms |
| NVIDIA T4 | FP16 | 3 ms | 65 ms |
| NVIDIA T4 | FP32 | 14 ms | 373 ms |
| NVIDIA L4 | FP16 | 2 ms | 38 ms |
| NVIDIA L4 | FP32 | 5 ms | 124 ms |
Note that the Answer Finder models are only used at query time.
## GPU Memory Usage

| Quantization type | Memory |
|---|---|
| FP16 | 550 MiB |
| FP32 | 1050 MiB |
Note that the GPU memory usage reported above covers only the memory consumed by the model itself on an NVIDIA T4 GPU with a batch size of 32. It does not include the fixed amount of memory consumed by the ONNX Runtime upon initialization, which can be around 0.5 to 1 GiB depending on the GPU used.
## Requirements
- Minimal Sinequa version: 11.10.0
- Minimal Sinequa version for using FP16 models and GPUs with CUDA compute capability of 8.9+ (like NVIDIA L4): 11.11.0
- CUDA compute capability: above 5.0 (above 6.0 for FP16 use)
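The capability thresholds above can be checked programmatically. A minimal sketch mirroring that list ("above 5.0", "above 6.0 for FP16"); the helper is illustrative, and the optional PyTorch lookup at the end only runs if PyTorch with CUDA is installed:

```python
# Hedged sketch: check a GPU's CUDA compute capability against the
# requirements above. capability is a (major, minor) tuple, e.g. (7, 5)
# for an NVIDIA T4.
def meets_capability(capability, fp16=False):
    minimum = (6, 0) if fp16 else (5, 0)
    return capability > minimum  # "above", i.e. strictly greater

print(meets_capability((7, 5)))             # -> True  (T4, FP32)
print(meets_capability((7, 5), fp16=True))  # -> True  (T4, FP16)
print(meets_capability((5, 0)))             # -> False (exactly 5.0 is not above 5.0)

try:
    import torch
    if torch.cuda.is_available():
        print("This GPU:", torch.cuda.get_device_capability())
except ImportError:
    pass
```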
## Model Details

### Overview
- Number of parameters: 110 million
- Base language model: bert-base-multilingual-cased pre-trained by Sinequa in English, French, German and Spanish
- Insensitive to casing and accents
### Training Data
- SQuAD v2
- French-SQuAD + French translation of SQuAD v2 "impossible" query-passage pairs
- GermanQuAD + German translation of SQuAD v2 "impossible" query-passage pairs
- SQuAD-es-v2