antoinelouis committed
Commit d6dbdb4
1 parent: f3a79d2

Update README.md

Files changed (1)
  1. README.md +16 -19
README.md CHANGED
@@ -12,7 +12,7 @@ tags:
  library_name: sentence-transformers
  ---
 
- # biencoder-electra-base-french-europeana-cased-discriminator-mmarcoFR
+ # biencoder-electra-base-french-mmarcoFR
 
  This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences and paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search. The model was trained on the **French** portion of the [mMARCO](https://huggingface.co/datasets/unicamp-dl/mmarco) dataset.
 
@@ -33,7 +33,7 @@ Then you can use the model like this:
  from sentence_transformers import SentenceTransformer
  sentences = ["This is an example sentence", "Each sentence is converted"]
 
- model = SentenceTransformer('antoinelouis/biencoder-electra-base-french-europeana-cased-discriminator-mmarcoFR')
+ model = SentenceTransformer('antoinelouis/biencoder-electra-base-french-mmarcoFR')
  embeddings = model.encode(sentences)
  print(embeddings)
  ```
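The hunk above stops at printing raw embeddings. Since the card positions the model for semantic search, here is a minimal retrieval sketch; it assumes the `util.cos_sim` helper from sentence-transformers, and the query and passages are made up for illustration:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('antoinelouis/biencoder-electra-base-french-mmarcoFR')

# Hypothetical French query and candidate passages (not taken from mMARCO).
query = "Quelle est la capitale de la France ?"
passages = [
    "Paris est la capitale et la plus grande ville de France.",
    "Le mont Blanc est le plus haut sommet des Alpes.",
]

# Encode the query and passages into 768-dimensional vectors.
query_emb = model.encode(query)
passage_embs = model.encode(passages)

# Rank passages by cosine similarity to the query; higher means more relevant.
scores = util.cos_sim(query_emb, passage_embs)
print(scores)
```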
@@ -58,8 +58,8 @@ def mean_pooling(model_output, attention_mask):
  sentences = ['This is an example sentence', 'Each sentence is converted']
 
  # Load model from HuggingFace Hub
- tokenizer = AutoTokenizer.from_pretrained('antoinelouis/biencoder-electra-base-french-europeana-cased-discriminator-mmarcoFR')
- model = AutoModel.from_pretrained('antoinelouis/biencoder-electra-base-french-europeana-cased-discriminator-mmarcoFR')
+ tokenizer = AutoTokenizer.from_pretrained('antoinelouis/biencoder-electra-base-french-mmarcoFR')
+ model = AutoModel.from_pretrained('antoinelouis/biencoder-electra-base-french-mmarcoFR')
 
  # Tokenize sentences
  encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
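The hunk header above references the README's `mean_pooling` helper, whose body lies outside the diff context. Its definition is presumably the standard sentence-transformers template (an assumption, since the diff does not show it):

```python
import torch

def mean_pooling(model_output, attention_mask):
    # First element of model_output contains the token embeddings.
    token_embeddings = model_output[0]
    # Expand the attention mask so padding tokens contribute nothing.
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    # Average over real tokens only; the clamp avoids division by zero.
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
```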
@@ -80,17 +80,16 @@ print(sentence_embeddings)
 
  We evaluated our model on the smaller development set of mMARCO-fr, which consists of 6,980 queries for a corpus of 8.8M candidate passages. Below, we compare its performance with that of other biencoder models fine-tuned on the same dataset. We report the mean reciprocal rank (MRR), normalized discounted cumulative gain (NDCG), mean average precision (MAP), and recall at various cut-offs (R@k).
 
- | | model | Size | MRR@10 | NDCG@10 | MAP@10 | R@10 | R@100(↑) | R@500 |
- |---:|:---|---:|---:|---:|---:|---:|---:|---:|
- | 1 | [biencoder-camembert-base-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-camembert-base-mmarcoFR) | 443MB | 28.53 | 33.72 | 27.93 | 51.46 | 77.82 | 89.13 |
- | 2 | [biencoder-all-mpnet-base-v2-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-all-mpnet-base-v2-mmarcoFR) | 438MB | 28.04 | 33.28 | 27.5 | 51.07 | 77.68 | 88.67 |
- | 3 | [biencoder-sentence-camembert-base-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-sentence-camembert-base-mmarcoFR) | 443MB | 27.63 | 32.7 | 27.01 | 50.10 | 76.85 | 88.73 |
- | 4 | [biencoder-distilcamembert-base-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-distilcamembert-base-mmarcoFR) | 272MB | 26.80 | 31.87 | 26.23 | 49.20 | 76.44 | 87.87 |
- | 5 | [biencoder-mMiniLMv2-L12-H384-distilled-from-XLMR-Large-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-mMiniLMv2-L12-H384-distilled-from-XLMR-Large-mmarcoFR) | 471MB | 24.74 | 29.41 | 24.23 | 45.40 | 71.52 | 84.42 |
- | 6 | [biencoder-camemberta-base-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-camemberta-base-mmarcoFR) | 447MB | 24.78 | 29.24 | 24.23 | 44.58 | 69.59 | 82.18 |
- | 7 | **biencoder-electra-base-french-europeana-cased-discriminator-mmarcoFR** | 440MB | 23.38 | 27.97 | 22.91 | 43.50 | 68.96 | 81.61 |
- | 8 | [biencoder-mMiniLM-L6-v2-mmarco-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-mMiniLM-L6-v2-mmarco-mmarcoFR) | 428MB | 22.87 | 27.26 | 22.37 | 42.3 | 68.78 | 81.39 |
- | 9 | [biencoder-mMiniLMv2-L6-H384-distilled-from-XLMR-Large-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-mMiniLMv2-L6-H384-distilled-from-XLMR-Large-mmarcoFR) | 428MB | 22.29 | 26.57 | 21.8 | 41.25 | 66.78 | 79.83 |
+ | | model | Vocab. | #Param. | Size | MRR@10 | NDCG@10 | MAP@10 | R@10 | R@100(↑) | R@500 |
+ |---:|:---|:---|---:|---:|---:|---:|---:|---:|---:|---:|
+ | 1 | [biencoder-camembert-base-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-camembert-base-mmarcoFR) | 🇫🇷 | 110M | 443MB | 28.53 | 33.72 | 27.93 | 51.46 | 77.82 | 89.13 |
+ | 2 | [biencoder-mpnet-base-all-v2-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-mpnet-base-all-v2-mmarcoFR) | 🇬🇧 | 109M | 438MB | 28.04 | 33.28 | 27.50 | 51.07 | 77.68 | 88.67 |
+ | 3 | [biencoder-distilcamembert-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-distilcamembert-mmarcoFR) | 🇫🇷 | 68M | 272MB | 26.80 | 31.87 | 26.23 | 49.20 | 76.44 | 87.87 |
+ | 4 | [biencoder-MiniLM-L6-all-v2-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-MiniLM-L6-all-v2-mmarcoFR) | 🇬🇧 | 23M | 91MB | 25.49 | 30.39 | 24.99 | 47.10 | 73.48 | 86.09 |
+ | 5 | [biencoder-mMiniLMv2-L12-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-mMiniLMv2-L12-mmarcoFR) | 🇫🇷,99+ | 117M | 471MB | 24.74 | 29.41 | 24.23 | 45.40 | 71.52 | 84.42 |
+ | 6 | [biencoder-camemberta-base-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-camemberta-base-mmarcoFR) | 🇫🇷 | 112M | 447MB | 24.78 | 29.24 | 24.23 | 44.58 | 69.59 | 82.18 |
+ | 7 | **biencoder-electra-base-french-mmarcoFR** | 🇫🇷 | 110M | 440MB | 23.38 | 27.97 | 22.91 | 43.50 | 68.96 | 81.61 |
+ | 8 | [biencoder-mMiniLMv2-L6-mmarcoFR](https://huggingface.co/antoinelouis/biencoder-mMiniLMv2-L6-mmarcoFR) | 🇫🇷,99+ | 107M | 428MB | 22.29 | 26.57 | 21.80 | 41.25 | 66.78 | 79.83 |
 
  ## Training
  ***
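For reference, the headline metric MRR@10 averages, over all queries, the reciprocal rank of the first relevant passage within the top 10 results. A minimal sketch with toy data (not the mMARCO evaluation code):

```python
def mrr_at_k(rankings, relevant, k=10):
    # rankings: one ranked list of passage IDs per query
    # relevant: one set of relevant passage IDs per query
    total = 0.0
    for ranked, rel in zip(rankings, relevant):
        for rank, pid in enumerate(ranked[:k], start=1):
            if pid in rel:
                total += 1.0 / rank
                break  # only the first relevant hit counts
    return total / len(rankings)

# Relevant passage at rank 1 for query 1 and rank 2 for query 2:
print(mrr_at_k([[3, 7], [9, 4]], [{3}, {4}]))  # (1.0 + 0.5) / 2 = 0.75
```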
@@ -112,17 +111,15 @@ We used the French version of the [mMARCO](https://huggingface.co/datasets/unica
  - a smaller dev set of 6,980 queries (which is actually used for evaluation in most published works).
  Link: [https://ir-datasets.com/mmarco.html#mmarco/v2/fr/](https://ir-datasets.com/mmarco.html#mmarco/v2/fr/)
 
-
-
  ## Citation
 
  ```bibtex
  @online{louis2023,
  author = 'Antoine Louis',
- title = 'biencoder-electra-base-french-europeana-cased-discriminator-mmarcoFR: A Biencoder Model Trained on French mMARCO',
+ title = 'biencoder-electra-base-french-mmarcoFR: A Biencoder Model Trained on French mMARCO',
  publisher = 'Hugging Face',
  month = 'may',
  year = '2023',
- url = 'https://huggingface.co/antoinelouis/biencoder-electra-base-french-europeana-cased-discriminator-mmarcoFR',
+ url = 'https://huggingface.co/antoinelouis/biencoder-electra-base-french-mmarcoFR',
  }
  ```