---
language:
- en
---

# Model Card for `passage-ranker-v1-XS-en`

This model is a passage ranker developed by Sinequa. It produces a relevance score given a query-passage pair and is used to order search results.

Model name: `passage-ranker-v1-XS-en`

## Supported Languages

The model was trained and tested in the following languages:

- English

## Scores

| Metric              | Value |
|:--------------------|------:|
| Relevance (NDCG@10) | 0.438 |

Note that the relevance score is computed as an average over 14 retrieval datasets (see [details below](#evaluation-metrics)).

## Inference Times

| GPU        | Quantization type | Batch size 1 | Batch size 32 |
|:-----------|:------------------|-------------:|--------------:|
| NVIDIA A10 | FP16              | 1 ms         | 2 ms          |
| NVIDIA A10 | FP32              | 1 ms         | 8 ms          |
| NVIDIA T4  | FP16              | 1 ms         | 6 ms          |
| NVIDIA T4  | FP32              | 3 ms         | 23 ms         |
| NVIDIA L4  | FP16              | 1 ms         | 3 ms          |
| NVIDIA L4  | FP32              | 2 ms         | 8 ms          |

## GPU Memory Usage

| Quantization type | Memory  |
|:------------------|--------:|
| FP16              | 150 MiB |
| FP32              | 200 MiB |

Note that the GPU memory usage covers only the memory consumed by the model itself on an NVIDIA T4 GPU with a batch size of 32. It does not include the fixed amount of memory that the ONNX Runtime consumes upon initialization, which can be around 0.5 to 1 GiB depending on the GPU used.

## Requirements

- Minimal Sinequa version: 11.10.0
- Minimal Sinequa version for using FP16 models and GPUs with CUDA compute capability of 8.9+ (like NVIDIA L4): 11.11.0
- [CUDA compute capability](https://developer.nvidia.com/cuda-gpus): above 5.0 (above 6.0 for FP16 use)

## Model Details

### Overview

- Number of parameters: 11 million
- Base language model: [English BERT-Mini](https://huggingface.co/google/bert_uncased_L-4_H-256_A-4)
- Insensitive to casing and accents
- Training procedure: [MonoBERT](https://arxiv.org/abs/1901.04085)

### Training Data

- Probably-Asked Questions ([Paper](https://arxiv.org/abs/2102.07033), [Official Page](https://github.com/facebookresearch/PAQ))

### Evaluation Metrics

To determine the relevance score, we averaged the results that we obtained when evaluating on the datasets of the [BEIR benchmark](https://github.com/beir-cellar/beir). Note that all these datasets are in English.

| Dataset           | NDCG@10 |
|:------------------|--------:|
| Average           |   0.438 |
|                   |         |
| Arguana           |   0.524 |
| CLIMATE-FEVER     |   0.150 |
| DBPedia Entity    |   0.338 |
| FEVER             |   0.706 |
| FiQA-2018         |   0.269 |
| HotpotQA          |   0.630 |
| MS MARCO          |   0.328 |
| NFCorpus          |   0.340 |
| NQ                |   0.429 |
| Quora             |   0.722 |
| SCIDOCS           |   0.141 |
| SciFact           |   0.627 |
| TREC-COVID        |   0.628 |
| Webis-Touche-2020 |   0.306 |
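As a quick sanity check, the reported average can be reproduced from the per-dataset values above; this small snippet carries no assumptions beyond the numbers in the table:

```python
# Reproduce the reported average NDCG@10 from the per-dataset values above.
ndcg_at_10 = {
    "Arguana": 0.524, "CLIMATE-FEVER": 0.150, "DBPedia Entity": 0.338,
    "FEVER": 0.706, "FiQA-2018": 0.269, "HotpotQA": 0.630,
    "MS MARCO": 0.328, "NFCorpus": 0.340, "NQ": 0.429,
    "Quora": 0.722, "SCIDOCS": 0.141, "SciFact": 0.627,
    "TREC-COVID": 0.628, "Webis-Touche-2020": 0.306,
}
average = sum(ndcg_at_10.values()) / len(ndcg_at_10)
print(f"{average:.3f}")  # 0.438
```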
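For illustration, the MonoBERT training procedure noted in the Overview means the model is a cross-encoder: each query-passage pair is encoded jointly and mapped to a single relevance score, and passages are reordered by descending score. Below is a minimal sketch of that scoring loop with Hugging Face Transformers. It is not how the model is served in Sinequa (which runs it via ONNX Runtime), and the repository identifier, example texts, and single-logit output head are assumptions for illustration only.

```python
# Illustrative MonoBERT-style scoring sketch; the model id below is assumed,
# and in production Sinequa serves this model through ONNX Runtime instead.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "sinequa/passage-ranker-v1-XS-en"  # assumed repository name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

query = "how do passage rankers work?"
passages = [
    "A passage ranker scores query-passage pairs to reorder search results.",
    "The Eiffel Tower is located in Paris.",
]

# Cross-encoder ranking: each query-passage pair is encoded together.
inputs = tokenizer(
    [query] * len(passages), passages,
    padding=True, truncation=True, return_tensors="pt",
)
with torch.no_grad():
    logits = model(**inputs).logits

# Assuming a single relevance logit per pair; higher means more relevant.
scores = logits.squeeze(-1).tolist()
for score, passage in sorted(zip(scores, passages), reverse=True):
    print(f"{score:.3f}  {passage}")
```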