antoinelouis committed
Commit: a8ffc5b
1 Parent(s): ef5311d

Update README.md

Files changed (1):
  1. README.md +11 -26
README.md CHANGED
@@ -44,7 +44,7 @@ model-index:
 
 # biencoder-camembert-base-mmarcoFR
 
-This is a dense single-vector bi-encoder model. It maps sentences and paragraphs to a 768-dimensional dense vector space and should be used for semantic search. The model was trained on the **French** portion of the [mMARCO](https://huggingface.co/datasets/unicamp-dl/mmarco) retrieval dataset.
+This is a dense single-vector bi-encoder model for **French** that can be used for semantic search. The model maps queries and passages to 768-dimensional dense vectors, which are used to compute relevance through cosine similarity.
 
 ## Usage
 
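For orientation while reading the diff: the README's Usage section (outside these hunks) scores queries against passages with a dot product over the encoded vectors, as the next hunk's context line `similarity = q_embeddings @ p_embeddings.T` shows. A minimal sketch of that pattern, assuming the checkpoint loads through the `sentence-transformers` API, with illustrative example texts:

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("antoinelouis/biencoder-camembert-base-mmarcoFR")

queries = ["Quelle est la capitale de la France ?"]
passages = [
    "Paris est la capitale de la France.",
    "Le Mont Blanc est le plus haut sommet des Alpes.",
]

# L2-normalized 768-d vectors, so the dot product below equals cosine similarity.
q_embeddings = model.encode(queries, normalize_embeddings=True)
p_embeddings = model.encode(passages, normalize_embeddings=True)

similarity = q_embeddings @ p_embeddings.T  # shape: (1, 2)
print(similarity)
```
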
@@ -121,24 +121,11 @@ similarity = q_embeddings @ p_embeddings.T
 print(similarity)
 ```
 
-***
-
 ## Evaluation
 
-We evaluate the model on the smaller development set of [mMARCO-fr](https://ir-datasets.com/mmarco.html#mmarco/v2/fr/), which consists of 6,980 queries for a corpus of 8.8M candidate passages. Below, we compare the model's performance with other bi-encoder models fine-tuned on the same dataset. We report the mean reciprocal rank (MRR), normalized discounted cumulative gain (NDCG), mean average precision (MAP), and recall at various cut-offs (R@k).
-
-|    | model | Vocab. | #Param. | Size | MRR@10 | NDCG@10 | MAP@10 | R@10 | R@100 | R@500 |
-|---:|:------|:-------|--------:|-----:|-------:|--------:|-------:|-----:|------:|------:|
-|  1 | **biencoder-camembert-base-mmarcoFR** | 🇫🇷 | 110M | 443MB | 28.53 | 33.72 | 27.93 | 51.46 | 77.82 | 89.13 |
-|  2 | [biencoder-mpnet-base-all-v2-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-mpnet-base-all-v2-mmarcoFR) | 🇬🇧 | 109M | 438MB | 28.04 | 33.28 | 27.50 | 51.07 | 77.68 | 88.67 |
-|  3 | [biencoder-distilcamembert-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-distilcamembert-mmarcoFR) | 🇫🇷 | 68M | 272MB | 26.80 | 31.87 | 26.23 | 49.20 | 76.44 | 87.87 |
-|  4 | [biencoder-MiniLM-L6-all-v2-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-MiniLM-L6-all-v2-mmarcoFR) | 🇬🇧 | 23M | 91MB | 25.49 | 30.39 | 24.99 | 47.10 | 73.48 | 86.09 |
-|  5 | [biencoder-mMiniLMv2-L12-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-mMiniLMv2-L12-mmarcoFR) | 🇫🇷,99+ | 117M | 471MB | 24.74 | 29.41 | 24.23 | 45.40 | 71.52 | 84.42 |
-|  6 | [biencoder-camemberta-base-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-camemberta-base-mmarcoFR) | 🇫🇷 | 112M | 447MB | 24.78 | 29.24 | 24.23 | 44.58 | 69.59 | 82.18 |
-|  7 | [biencoder-electra-base-french-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-electra-base-french-mmarcoFR) | 🇫🇷 | 110M | 440MB | 23.38 | 27.97 | 22.91 | 43.50 | 68.96 | 81.61 |
-|  8 | [biencoder-mMiniLMv2-L6-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-mMiniLMv2-L6-mmarcoFR) | 🇫🇷,99+ | 107M | 428MB | 22.29 | 26.57 | 21.80 | 41.25 | 66.78 | 79.83 |
-
-***
+The model is evaluated on the smaller development set of [mMARCO-fr](https://ir-datasets.com/mmarco.html#mmarco/v2/fr/), which consists of 6,980 queries for a corpus of 8.8M candidate passages. We report the mean reciprocal rank (MRR), normalized discounted cumulative gain (NDCG), mean average precision (MAP), and recall at various cut-offs (R@k).
+To see how it compares to other neural retrievers in French, check out the [*DécouvrIR*](https://huggingface.co/spaces/antoinelouis/decouvrir) leaderboard.
 
 ## Training
 
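The metrics named in the hunk above are standard rank-based measures. A hypothetical helper (not the code behind the reported numbers) showing how MRR@k and R@k are computed for one query; the dataset-level figures average these over all 6,980 dev queries:

```python
def mrr_at_k(ranked_ids, relevant_ids, k=10):
    """Reciprocal rank of the first relevant passage in the top k, else 0."""
    for rank, pid in enumerate(ranked_ids[:k], start=1):
        if pid in relevant_ids:
            return 1.0 / rank
    return 0.0

def recall_at_k(ranked_ids, relevant_ids, k=10):
    """Fraction of the relevant passages retrieved in the top k."""
    hits = set(ranked_ids[:k]) & set(relevant_ids)
    return len(hits) / len(relevant_ids)
```
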
@@ -150,17 +137,15 @@ We use the French training samples from the [mMARCO](https://huggingface.co/data
 
 The model is initialized from the [camembert-base](https://huggingface.co/camembert-base) checkpoint and optimized via the cross-entropy loss (as in [DPR](https://doi.org/10.48550/arXiv.2004.04906)) with a temperature of 0.05. It is fine-tuned on one 32GB NVIDIA V100 GPU for 20 epochs (i.e., 65.7k steps) using the AdamW optimizer with a batch size of 152 and a peak learning rate of 2e-5, warmed up over the first 500 steps and decayed linearly thereafter. We set the maximum sequence length for both questions and passages to 128 tokens, and use cosine similarity to compute relevance scores.
 
-***
-
 ## Citation
 
 ```bibtex
-@online{louis2023,
-  author = 'Antoine Louis',
-  title = 'biencoder-camembert-base-mmarcoFR: A Biencoder Model Trained on French mMARCO',
-  publisher = 'Hugging Face',
-  month = 'may',
-  year = '2023',
-  url = 'https://huggingface.co/antoinelouis/biencoder-camembert-base-mmarcoFR',
+@online{louis2024decouvrir,
+  author = 'Antoine Louis',
+  title = 'DécouvrIR: A Benchmark for Evaluating the Robustness of Information Retrieval Models in French',
+  publisher = 'Hugging Face',
+  month = 'mar',
+  year = '2024',
+  url = 'https://huggingface.co/spaces/antoinelouis/decouvrir',
 }
 ```
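The training paragraph in the hunk above fully specifies the objective: DPR-style cross-entropy over in-batch negatives, with cosine similarities scaled by a temperature of 0.05. A minimal PyTorch sketch of that loss, under the assumption that each query's positive passage sits at the same batch index (tensor names are illustrative, not the actual training code):

```python
import torch
import torch.nn.functional as F

def in_batch_contrastive_loss(q_emb: torch.Tensor, p_emb: torch.Tensor,
                              temperature: float = 0.05) -> torch.Tensor:
    """q_emb, p_emb: (batch, 768); p_emb[i] is the positive for q_emb[i].
    All other passages in the batch act as in-batch negatives."""
    q = F.normalize(q_emb, dim=-1)             # unit vectors, so dot = cosine
    p = F.normalize(p_emb, dim=-1)
    scores = (q @ p.T) / temperature           # (batch, batch) similarity matrix
    labels = torch.arange(q.size(0), device=q.device)  # positives on the diagonal
    return F.cross_entropy(scores, labels)
```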
 