antoinelouis committed on
Commit 389c149
1 Parent(s): 420a6a5

Update README.md

Files changed (1)
  1. README.md +10 -17
README.md CHANGED
@@ -38,8 +38,6 @@ embeddings = model.encode(sentences)
 print(embeddings)
 ```
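
Since the diff shows only the tail of the sentence-transformers snippet, here is a minimal sketch of the full usage for context; the model ID `antoinelouis/biencoder-camembert-base-mmarcoFR` is assumed from the repository name:

```python
# Minimal sketch (assumed model ID); requires `pip install sentence-transformers`.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("antoinelouis/biencoder-camembert-base-mmarcoFR")

sentences = ["Ceci est une phrase d'exemple.", "Chaque phrase est convertie en vecteur."]
embeddings = model.encode(sentences)
print(embeddings)
```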
 
-
-
 #### 🤗 Transformers
 
 Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the appropriate pooling operation on top of the contextualized word embeddings.
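
A sketch of what that pooling step typically looks like (mean pooling over the token embeddings, masked by the attention mask), mirroring the standard sentence-transformers recipe; the actual snippet is elided from this diff, and the model ID is assumed from the repository name:

```python
import torch
from transformers import AutoTokenizer, AutoModel

def mean_pooling(model_output, attention_mask):
    # Average the token embeddings, weighting by the attention mask
    # so that padding tokens do not contribute.
    token_embeddings = model_output[0]  # all contextualized token embeddings
    mask = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * mask, 1) / torch.clamp(mask.sum(1), min=1e-9)

# Model ID assumed from the repository name.
tokenizer = AutoTokenizer.from_pretrained("antoinelouis/biencoder-camembert-base-mmarcoFR")
model = AutoModel.from_pretrained("antoinelouis/biencoder-camembert-base-mmarcoFR")

sentences = ["Ceci est une phrase d'exemple.", "Chaque phrase est convertie en vecteur."]
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    model_output = model(**encoded_input)

sentence_embeddings = mean_pooling(model_output, encoded_input["attention_mask"])
print("Sentence embeddings:")
print(sentence_embeddings)
```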
@@ -77,24 +75,21 @@ print("Sentence embeddings:")
 print(sentence_embeddings)
 ```
 
-
-
 ## Evaluation
 ***
 
 We evaluated our model on the smaller development set of mMARCO-fr, which consists of 6,980 queries for a corpus of 8.8M candidate passages. Below, we compare the model's performance with that of other biencoder models fine-tuned on the same dataset. We report the mean reciprocal rank (MRR), normalized discounted cumulative gain (NDCG), mean average precision (MAP), and recall at various cut-offs (R@k).
 
- | | model | Size | MRR@10 | NDCG@10 | MAP@10 | R@10 | R@100(↑) | R@500 |
- |---:|:---|------:|-------:|--------:|-------:|------:|---------:|------:|
- | 1 | **biencoder-camembert-base-mmarcoFR** | 443MB | 28.53 | 33.72 | 27.93 | 51.46 | 77.82 | 89.13 |
- | 2 | [biencoder-all-mpnet-base-v2-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-all-mpnet-base-v2-mmarcoFR) | 438MB | 28.04 | 33.28 | 27.50 | 51.07 | 77.68 | 88.67 |
- | 3 | [biencoder-sentence-camembert-base-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-sentence-camembert-base-mmarcoFR) | 443MB | 27.63 | 32.70 | 27.01 | 50.10 | 76.85 | 88.73 |
- | 4 | [biencoder-distilcamembert-base-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-distilcamembert-base-mmarcoFR) | 272MB | 26.80 | 31.87 | 26.23 | 49.20 | 76.44 | 87.87 |
- | 5 | [biencoder-mMiniLMv2-L12-H384-distilled-from-XLMR-Large-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-mMiniLMv2-L12-H384-distilled-from-XLMR-Large-mmarcoFR) | 471MB | 24.74 | 29.41 | 24.23 | 45.40 | 71.52 | 84.42 |
- | 6 | [biencoder-camemberta-base-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-camemberta-base-mmarcoFR) | 447MB | 24.78 | 29.24 | 24.23 | 44.58 | 69.59 | 82.18 |
- | 7 | [biencoder-electra-base-french-europeana-cased-discriminator-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-electra-base-french-europeana-cased-discriminator-mmarcoFR) | 440MB | 23.38 | 27.97 | 22.91 | 43.50 | 68.96 | 81.61 |
- | 8 | [biencoder-mMiniLM-L6-v2-mmarco-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-mMiniLM-L6-v2-mmarco-mmarcoFR) | 428MB | 22.87 | 27.26 | 22.37 | 42.30 | 68.78 | 81.39 |
- | 9 | [biencoder-mMiniLMv2-L6-H384-distilled-from-XLMR-Large-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-mMiniLMv2-L6-H384-distilled-from-XLMR-Large-mmarcoFR) | 428MB | 22.29 | 26.57 | 21.80 | 41.25 | 66.78 | 79.83 |
+ | | model | Vocab. | #Param. | Size | MRR@10 | NDCG@10 | MAP@10 | R@10 | R@100(↑) | R@500 |
+ |---:|:---|:-------|--------:|------:|-------:|--------:|-------:|------:|---------:|------:|
+ | 1 | **biencoder-camembert-base-mmarcoFR** | 🇫🇷 | 110M | 443MB | 28.53 | 33.72 | 27.93 | 51.46 | 77.82 | 89.13 |
+ | 2 | [biencoder-mpnet-base-all-v2-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-mpnet-base-all-v2-mmarcoFR) | 🇬🇧 | 109M | 438MB | 28.04 | 33.28 | 27.50 | 51.07 | 77.68 | 88.67 |
+ | 3 | [biencoder-distilcamembert-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-distilcamembert-mmarcoFR) | 🇫🇷 | 68M | 272MB | 26.80 | 31.87 | 26.23 | 49.20 | 76.44 | 87.87 |
+ | 4 | [biencoder-MiniLM-L6-all-v2-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-MiniLM-L6-all-v2-mmarcoFR) | 🇬🇧 | 23M | 91MB | 25.49 | 30.39 | 24.99 | 47.10 | 73.48 | 86.09 |
+ | 5 | [biencoder-mMiniLMv2-L12-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-mMiniLMv2-L12-mmarcoFR) | 🇫🇷,99+ | 117M | 471MB | 24.74 | 29.41 | 24.23 | 45.40 | 71.52 | 84.42 |
+ | 6 | [biencoder-camemberta-base-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-camemberta-base-mmarcoFR) | 🇫🇷 | 112M | 447MB | 24.78 | 29.24 | 24.23 | 44.58 | 69.59 | 82.18 |
+ | 7 | [biencoder-electra-base-french-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-electra-base-french-mmarcoFR) | 🇫🇷 | 110M | 440MB | 23.38 | 27.97 | 22.91 | 43.50 | 68.96 | 81.61 |
+ | 8 | [biencoder-mMiniLMv2-L6-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-mMiniLMv2-L6-mmarcoFR) | 🇫🇷,99+ | 107M | 428MB | 22.29 | 26.57 | 21.80 | 41.25 | 66.78 | 79.83 |
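
As a reading aid for the table, here is a small illustrative sketch of two of the reported metrics, MRR@k and recall@k; these are the textbook definitions, not the exact evaluation code behind the numbers:

```python
def mrr_at_k(ranked_ids, relevant_ids, k=10):
    # Reciprocal rank of the first relevant passage within the top k, else 0.
    for rank, pid in enumerate(ranked_ids[:k], start=1):
        if pid in relevant_ids:
            return 1.0 / rank
    return 0.0

def recall_at_k(ranked_ids, relevant_ids, k=10):
    # Fraction of the relevant passages that appear within the top k.
    hits = sum(1 for pid in ranked_ids[:k] if pid in relevant_ids)
    return hits / len(relevant_ids)

# First relevant hit at rank 3 -> MRR@10 = 0.333...; 1 of 2 relevant found -> R@10 = 0.5.
print(mrr_at_k(["p9", "p4", "p7"], {"p7", "p1"}))
print(recall_at_k(["p9", "p4", "p7"], {"p7", "p1"}))
```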
 
 ## Training
 ***
@@ -116,8 +111,6 @@ We used the French version of the [mMARCO](https://huggingface.co/datasets/unica
 - a smaller dev set of 6,980 queries (which is actually used for evaluation in most published works).
 Link: [https://ir-datasets.com/mmarco.html#mmarco/v2/fr/](https://ir-datasets.com/mmarco.html#mmarco/v2/fr/)
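
For context, the linked dev set can be loaded with the `ir_datasets` package; the `mmarco/v2/fr/dev/small` identifier for the 6,980-query subset is an assumption based on the linked page:

```python
# Sketch: iterate over the mMARCO-fr dev queries via ir_datasets.
# The "mmarco/v2/fr/dev/small" identifier is assumed from the linked page.
import ir_datasets

dataset = ir_datasets.load("mmarco/v2/fr/dev/small")
for query in dataset.queries_iter():
    print(query.query_id, query.text)
    break  # show only the first query
```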
 
-
-
 ## Citation
 
 ```bibtex