---
pipeline_tag: sentence-similarity
tags:
- feature-extraction
- sentence-similarity
language:
- de
- en
- es
- fr
- it
- nl
- ja
- pt
- zh
- pl
---

# Model Card for `vectorizer.hazelnut`

This model is a vectorizer developed by Sinequa. It produces an embedding vector given a passage or a query. The passage vectors are stored in our vector index, and the query vector is used at query time to look up relevant passages in the index.

Model name: `vectorizer.hazelnut`

## Supported Languages

The model was trained and tested in the following languages:

- English
- French
- German
- Spanish
- Italian
- Dutch
- Japanese
- Portuguese
- Chinese (simplified)
- Polish

Besides these languages, basic support can be expected for the 91 additional languages that were used during the pretraining of the base model (see Appendix A of the XLM-R paper).

## Scores

| Metric                         | Value |
|:-------------------------------|------:|
| English Relevance (Recall@100) | 0.590 |
| Polish Relevance (Recall@100)  | 0.543 |

Note that the relevance scores are computed as an average over several retrieval datasets (see [details below](#evaluation-metrics)).

## Inference Times

| GPU        | Quantization type | Batch size 1 | Batch size 32 |
|:-----------|:------------------|-------------:|--------------:|
| NVIDIA A10 | FP16              | 1 ms         | 5 ms          |
| NVIDIA A10 | FP32              | 2 ms         | 18 ms         |
| NVIDIA T4  | FP16              | 1 ms         | 12 ms         |
| NVIDIA T4  | FP32              | 3 ms         | 52 ms         |
| NVIDIA L4  | FP16              | 2 ms         | 5 ms          |
| NVIDIA L4  | FP32              | 4 ms         | 24 ms         |

## GPU Memory Usage

| Quantization type | Memory   |
|:------------------|---------:|
| FP16              | 550 MiB  |
| FP32              | 1050 MiB |

Note that GPU memory usage only includes how much GPU memory the actual model consumes on an NVIDIA T4 GPU with a batch size of 32. It does not include the fixed amount of memory that is consumed by the ONNX Runtime upon initialization, which can be around 0.5 to 1 GiB depending on the GPU used.

## Requirements

- Minimal Sinequa version: 11.10.0
- Minimal Sinequa version for using FP16 models and GPUs with CUDA compute capability of 8.9+ (like NVIDIA L4): 11.11.0
- [CUDA compute capability](https://developer.nvidia.com/cuda-gpus): above 5.0 (above 6.0 for FP16 use)

## Model Details

### Overview

- Number of parameters: 107 million
- Base language model: [mMiniLMv2-L6-H384-distilled-from-XLMR-Large](https://huggingface.co/nreimers/mMiniLMv2-L6-H384-distilled-from-XLMR-Large) ([Paper](https://arxiv.org/abs/2012.15828), [GitHub](https://github.com/microsoft/unilm/tree/master/minilm))
- Insensitive to casing and accents
- Output dimensions: 256 (reduced with an additional dense layer)
- Training procedure: query-passage-negative triplets for datasets with mined hard negatives, and query-passage pairs for the rest. The number of negatives is augmented with an in-batch negatives strategy (see the sketches below).
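Since the production vectorizer ships as an ONNX model inside the Sinequa platform, the snippet below is only a minimal sketch of how such an embedding is typically produced: it loads the public base model, mean-pools the token embeddings, and projects them to 256 dimensions. The pooling strategy and the randomly initialized projection layer are assumptions for illustration; the trained projection weights are part of the Sinequa distribution and are not reproduced here.

```python
# Minimal sketch of producing a 256-dim embedding (NOT the shipped ONNX model).
# Mean pooling and the randomly initialized projection are assumptions; the
# real vectorizer uses trained weights distributed with Sinequa.
import torch
from transformers import AutoModel, AutoTokenizer

base = "nreimers/mMiniLMv2-L6-H384-distilled-from-XLMR-Large"
tokenizer = AutoTokenizer.from_pretrained(base)
encoder = AutoModel.from_pretrained(base)
projection = torch.nn.Linear(encoder.config.hidden_size, 256)  # hypothetical stand-in

def embed(texts: list[str]) -> torch.Tensor:
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(**batch).last_hidden_state          # (B, T, 384)
    mask = batch["attention_mask"].unsqueeze(-1).float()     # ignore padding tokens
    pooled = (hidden * mask).sum(dim=1) / mask.sum(dim=1)    # mean pooling
    vectors = projection(pooled)                             # reduce 384 -> 256
    return torch.nn.functional.normalize(vectors, dim=-1)    # unit length for cosine

query_vec = embed(["what is a vector index?"])
passage_vec = embed(["A vector index stores passage embeddings for fast lookup."])
print((query_vec @ passage_vec.T).item())  # cosine similarity
```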
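The in-batch negatives strategy mentioned in the training procedure can be illustrated with a generic sketch: within a batch of N query-passage pairs, each query treats the other N - 1 passages as additional negatives at no extra encoding cost. This is a standard InfoNCE-style formulation, not Sinequa's actual training code, and the temperature value is an assumption.

```python
# Generic sketch of a contrastive loss with in-batch negatives (InfoNCE-style).
# Not Sinequa's actual training code; the temperature value is an assumption.
import torch
import torch.nn.functional as F

def in_batch_negatives_loss(
    query_vecs: torch.Tensor,    # (N, D) embeddings of N queries
    passage_vecs: torch.Tensor,  # (N, D) embeddings of their positive passages
    temperature: float = 0.05,   # hypothetical value
) -> torch.Tensor:
    # Score every query against every passage in the batch.
    # In row i, passage i is the positive; the other N-1 are negatives.
    scores = query_vecs @ passage_vecs.T / temperature   # (N, N)
    labels = torch.arange(scores.size(0))                # diagonal = positives
    return F.cross_entropy(scores, labels)

# Toy usage with random unit vectors:
q = F.normalize(torch.randn(32, 256), dim=-1)
p = F.normalize(torch.randn(32, 256), dim=-1)
print(in_batch_negatives_loss(q, p))
```

For datasets with mined hard negatives, the triplet's negative passages can simply be appended as extra columns of the score matrix before taking the cross-entropy.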
### Training Data

The model has been trained using all datasets cited in the [all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) model card. In addition, it has been trained on the datasets cited in [this paper](https://arxiv.org/pdf/2108.13897.pdf), covering the first nine languages listed above, and on [this dataset](https://huggingface.co/datasets/clarin-knext/msmarco-pl) for Polish support.

### Evaluation Metrics

#### English

To determine the relevance score, we averaged the results obtained when evaluating on the datasets of the [BEIR benchmark](https://github.com/beir-cellar/beir). Note that all these datasets are in **English**.

| Dataset           | Recall@100 |
|:------------------|-----------:|
| Average           |      0.590 |
|                   |            |
| Arguana           |      0.961 |
| CLIMATE-FEVER     |      0.432 |
| DBPedia Entity    |      0.371 |
| FEVER             |      0.723 |
| FiQA-2018         |      0.611 |
| HotpotQA          |      0.564 |
| MS MARCO          |      0.825 |
| NFCorpus          |      0.266 |
| NQ                |      0.722 |
| Quora             |      0.991 |
| SCIDOCS           |      0.426 |
| SciFact           |      0.864 |
| TREC-COVID        |      0.092 |
| Webis-Touche-2020 |      0.415 |

#### Polish

The model's Polish capabilities were evaluated on a subset of the [PIRB](https://github.com/sdadas/pirb) benchmark.

| Dataset       | Recall@100 |
|:--------------|-----------:|
| Average       |      0.534 |
|               |            |
| arguana-pl    |      0.909 |
| dbpedia-pl    |      0.282 |
| fiqa-pl       |      0.439 |
| hotpotqa-pl   |      0.530 |
| msmarco-pl    |      0.694 |
| nfcorpus-pl   |      0.218 |
| nq-pl         |      0.697 |
| quora-pl      |      0.949 |
| scidocs-pl    |      0.291 |
| scifact-pl    |      0.805 |
| trec-covid-pl |      0.059 |

#### Other languages

We evaluated the model on the datasets of the [MIRACL benchmark](https://github.com/project-miracl/miracl) to test its multilingual capabilities. Note that not all training languages are covered by the benchmark, so we only report metrics for the languages it includes.

| Language             | Recall@100 |
|:---------------------|-----------:|
| French               |      0.649 |
| German               |      0.598 |
| Spanish              |      0.609 |
| Japanese             |      0.623 |
| Chinese (simplified) |      0.707 |
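For reference, Recall@100 as reported in the tables above is the fraction of relevant passages that appear among the top 100 retrieved results, averaged over queries. A minimal sketch, assuming ranked result lists and binary relevance judgments held in plain Python dictionaries (in practice the BEIR and PIRB toolkits compute this for you):

```python
# Minimal sketch of Recall@k over binary relevance judgments (qrels).
# Retrieval and dataset loading are omitted; names here are illustrative.

def recall_at_k(ranked: dict[str, list[str]],
                qrels: dict[str, set[str]],
                k: int = 100) -> float:
    """ranked: query id -> doc ids sorted by descending score;
    qrels: query id -> set of relevant doc ids."""
    per_query = []
    for qid, relevant in qrels.items():
        if not relevant:
            continue
        hits = len(relevant & set(ranked.get(qid, [])[:k]))
        per_query.append(hits / len(relevant))
    return sum(per_query) / len(per_query)

ranked = {"q1": ["d3", "d7", "d1"]}
qrels = {"q1": {"d1", "d9"}}
print(recall_at_k(ranked, qrels))  # 0.5: one of two relevant docs retrieved
```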