antoinelouis committed on
Commit
6477cf0
1 Parent(s): da6205e

Update README.md

Files changed (1)
  1. README.md +56 -39
README.md CHANGED
@@ -1,96 +1,113 @@
  ---
- pipeline_tag: sentence-similarity
  language: fr
- license: apache-2.0
  datasets:
  - unicamp-dl/mmarco
  metrics:
  - recall
  tags:
- - sentence-similarity
  library_name: sentence-transformers
  ---
- # crossencoder-mMiniLMv2-L6-mmarcoFR

- This is a [sentence-transformers](https://www.SBERT.net) model trained on the **French** portion of the [mMARCO](https://huggingface.co/datasets/unicamp-dl/mmarco) dataset.

- It performs cross-attention between a question-passage pair and outputs a relevance score between 0 and 1. The model can be used for tasks like [semantic search](https://www.sbert.net/examples/applications/retrieve_rerank/README.html): given a query, encode it with each candidate passage -- e.g., retrieved with BM25 or a bi-encoder -- then sort the passages in decreasing order of relevance according to the model's predictions.

  ## Usage
- ***
-
- #### Sentence-Transformers

- Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:

- ```bash
- pip install -U sentence-transformers
- ```

- Then you can use the model like this:

  ```python
  from sentence_transformers import CrossEncoder
- pairs = [('Query', 'Paragraph1'), ('Query', 'Paragraph2'), ('Query', 'Paragraph3')]

  model = CrossEncoder('antoinelouis/crossencoder-mMiniLMv2-L6-mmarcoFR')
  scores = model.predict(pairs)
  print(scores)
  ```

- #### 🤗 Transformers

- Without [sentence-transformers](https://www.SBERT.net), you can use the model as follows:

  ```python
- from transformers import AutoTokenizer, AutoModelForSequenceClassification
- import torch

- model = AutoModelForSequenceClassification.from_pretrained('antoinelouis/crossencoder-mMiniLMv2-L6-mmarcoFR')
- tokenizer = AutoTokenizer.from_pretrained('antoinelouis/crossencoder-mMiniLMv2-L6-mmarcoFR')

- pairs = [('Query', 'Paragraph1'), ('Query', 'Paragraph2'), ('Query', 'Paragraph3')]
- features = tokenizer(pairs, padding=True, truncation=True, return_tensors='pt')

  model.eval()
  with torch.no_grad():
-     scores = model(**features).logits
  print(scores)
  ```

- ## Evaluation
  ***

- We evaluated the model on 500 random queries from the mMARCO-fr train set (which were excluded from training). Each of these queries has at least one relevant and up to 200 irrelevant passages.

- Below, we compare the model's performance with that of other cross-encoder models fine-tuned on the same dataset. We report the R-precision (RP), mean reciprocal rank (MRR), and recall at various cut-offs (R@k).

  | | model | Vocab. | #Param. | Size | RP | MRR@10 | R@10(↑) | R@20 | R@50 | R@100 |
  |---:|:-----------------------------------------------------------------------------------------------------------------------------|:-------|--------:|------:|-------:|---------:|---------:|-------:|-------:|--------:|
  | 1 | [crossencoder-camembert-base-mmarcoFR](https://huggingface.co/antoinelouis/crossencoder-camembert-base-mmarcoFR) | fr | 110M | 443MB | 35.65 | 50.44 | 82.95 | 91.50 | 96.80 | 98.80 |
  | 2 | [crossencoder-mMiniLMv2-L12-mmarcoFR](https://huggingface.co/antoinelouis/crossencoder-mMiniLMv2-L12-mmarcoFR) | fr,99+ | 118M | 471MB | 34.37 | 51.01 | 82.23 | 90.60 | 96.45 | 98.40 |
- | 3 | [crossencoder-mpnet-base-mmarcoFR](https://huggingface.co/antoinelouis/crossencoder-mpnet-base-mmarcoFR) | en | 109M | 438MB | 29.68 | 46.13 | 80.45 | 87.90 | 93.15 | 96.60 |
- | 4 | [crossencoder-distilcamembert-mmarcoFR](https://huggingface.co/antoinelouis/crossencoder-distilcamembert-mmarcoFR) | fr | 68M | 272MB | 27.28 | 43.71 | 80.30 | 89.10 | 95.55 | 98.60 |
- | 5 | [crossencoder-electra-base-french-mmarcoFR](https://huggingface.co/antoinelouis/crossencoder-electra-base-french-mmarcoFR) | fr | 110M | 443MB | 28.32 | 45.28 | 79.22 | 87.15 | 93.15 | 95.75 |
- | 6 | **crossencoder-mMiniLMv2-L6-mmarcoFR** | fr,99+ | 107M | 428MB | 33.92 | 49.33 | 79.00 | 88.35 | 94.80 | 98.20 |
 
- ## Training
  ***

- #### Background

- We used the [nreimers/mMiniLMv2-L6-H384-distilled-from-XLMR-Large](https://huggingface.co/nreimers/mMiniLMv2-L6-H384-distilled-from-XLMR-Large) model and fine-tuned it with a binary cross-entropy loss function on 1M question-passage pairs in French with a positive-to-negative ratio of 1:3 (i.e., 25% of the pairs are relevant and 75% are irrelevant).

- #### Hyperparameters

- We trained the model on a single Tesla V100 GPU with 32GB of memory for 10 epochs (i.e., 312.4k steps) using a batch size of 32. We used the AdamW optimizer with an initial learning rate of 2e-05, weight decay of 0.01, learning rate warmup over the first 500 steps, and linear decay of the learning rate. The sequence length was limited to 512 tokens.

- #### Data

- We used the French version of the [mMARCO](https://huggingface.co/datasets/unicamp-dl/mmarco) dataset to fine-tune our model. mMARCO is a multilingual machine-translated version of MS MARCO, a popular large-scale IR dataset.

  ## Citation
- ***

  ```bibtex
  @online{louis2023,
 
  ---
+ pipeline_tag: text-classification
  language: fr
+ license: mit
  datasets:
  - unicamp-dl/mmarco
  metrics:
  - recall
  tags:
+ - passage-reranking
  library_name: sentence-transformers
+ base_model: nreimers/mMiniLMv2-L6-H384-distilled-from-XLMR-Large
  ---
 

+ # crossencoder-mMiniLMv2-L6-mmarcoFR

+ This is a cross-encoder model for French. It performs cross-attention between a question-passage pair and outputs a relevance score.
+ The model should be used as a reranker for semantic search: given a query and a set of potentially relevant passages retrieved by an efficient first-stage
+ retrieval system (e.g., BM25 or a fine-tuned dense single-vector bi-encoder), encode each query-passage pair and sort the passages in decreasing order of
+ relevance according to the model's predicted scores.

  ## Usage
 
 
 
+ Here are some examples for using the model with [Sentence-Transformers](#using-sentence-transformers), [FlagEmbedding](#using-flagembedding), or [Huggingface Transformers](#using-huggingface-transformers).

+ #### Using Sentence-Transformers

+ Start by installing the [library](https://www.SBERT.net): `pip install -U sentence-transformers`. Then, you can use the model like this:

  ```python
  from sentence_transformers import CrossEncoder
+
+ pairs = [('Question', 'Paragraphe 1'), ('Question', 'Paragraphe 2'), ('Question', 'Paragraphe 3')]

  model = CrossEncoder('antoinelouis/crossencoder-mMiniLMv2-L6-mmarcoFR')
  scores = model.predict(pairs)
  print(scores)
  ```
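+
+ The scores can then be used to rerank a set of retrieved passages, as described above. A short sketch (plain Python, reusing `model` from the snippet above; `passages` is a hypothetical list of candidate texts):
+
+ ```python
+ passages = ['Paragraphe 1', 'Paragraphe 2', 'Paragraphe 3']
+ scores = model.predict([('Question', p) for p in passages])
+ # Sort passages by decreasing predicted relevance.
+ reranked = sorted(zip(passages, scores), key=lambda x: x[1], reverse=True)
+ print(reranked)
+ ```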

+ #### Using FlagEmbedding

+ Start by installing the [library](https://github.com/FlagOpen/FlagEmbedding/): `pip install -U FlagEmbedding`. Then, you can use the model like this:

  ```python
+ from FlagEmbedding import FlagReranker
+
+ pairs = [('Question', 'Paragraphe 1'), ('Question', 'Paragraphe 2'), ('Question', 'Paragraphe 3')]
+
+ reranker = FlagReranker('antoinelouis/crossencoder-mMiniLMv2-L6-mmarcoFR')
+ scores = reranker.compute_score(pairs)
+ print(scores)
+ ```
+
+ #### Using HuggingFace Transformers
+
+ Start by installing the [library](https://huggingface.co/docs/transformers): `pip install -U transformers`. Then, you can use the model like this:

+ ```python
+ import torch
+ from transformers import AutoTokenizer, AutoModelForSequenceClassification
+
+ pairs = [('Question', 'Paragraphe 1'), ('Question', 'Paragraphe 2'), ('Question', 'Paragraphe 3')]

+ tokenizer = AutoTokenizer.from_pretrained('antoinelouis/crossencoder-mMiniLMv2-L6-mmarcoFR')
+ model = AutoModelForSequenceClassification.from_pretrained('antoinelouis/crossencoder-mMiniLMv2-L6-mmarcoFR')
  model.eval()
+
  with torch.no_grad():
+     inputs = tokenizer(pairs, padding=True, truncation=True, return_tensors='pt', max_length=512)
+     scores = model(**inputs, return_dict=True).logits.view(-1).float()
  print(scores)
  ```
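+
+ Note that this snippet returns raw logits; they can be mapped to relevance scores between 0 and 1 with a sigmoid, matching the sigmoid activation used at training time (see Implementation below): `scores = torch.sigmoid(scores)`.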

  ***

+ ## Evaluation

+ We evaluate the model on 500 random training queries from [mMARCO-fr](https://ir-datasets.com/mmarco.html#mmarco/v2/fr/) (which were excluded from training) by reranking
+ subsets of candidate passages comprising at least one relevant and up to 200 BM25-retrieved negative passages for each query. Below, we compare the model's performance with that of
+ other cross-encoder models fine-tuned on the same dataset. We report the R-precision (RP), mean reciprocal rank (MRR), and recall at various cut-offs (R@k).

  | | model | Vocab. | #Param. | Size | RP | MRR@10 | R@10(↑) | R@20 | R@50 | R@100 |
  |---:|:-----------------------------------------------------------------------------------------------------------------------------|:-------|--------:|------:|-------:|---------:|---------:|-------:|-------:|--------:|
  | 1 | [crossencoder-camembert-base-mmarcoFR](https://huggingface.co/antoinelouis/crossencoder-camembert-base-mmarcoFR) | fr | 110M | 443MB | 35.65 | 50.44 | 82.95 | 91.50 | 96.80 | 98.80 |
  | 2 | [crossencoder-mMiniLMv2-L12-mmarcoFR](https://huggingface.co/antoinelouis/crossencoder-mMiniLMv2-L12-mmarcoFR) | fr,99+ | 118M | 471MB | 34.37 | 51.01 | 82.23 | 90.60 | 96.45 | 98.40 |
+ | 3 | [crossencoder-distilcamembert-mmarcoFR](https://huggingface.co/antoinelouis/crossencoder-distilcamembert-mmarcoFR) | fr | 68M | 272MB | 27.28 | 43.71 | 80.30 | 89.10 | 95.55 | 98.60 |
+ | 4 | [crossencoder-electra-base-french-mmarcoFR](https://huggingface.co/antoinelouis/crossencoder-electra-base-french-mmarcoFR) | fr | 110M | 443MB | 28.32 | 45.28 | 79.22 | 87.15 | 93.15 | 95.75 |
+ | 5 | **crossencoder-mMiniLMv2-L6-mmarcoFR** | fr,99+ | 107M | 428MB | 33.92 | 49.33 | 79.00 | 88.35 | 94.80 | 98.20 |
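+
+ For reference, a minimal sketch of how MRR@10 and R@k can be computed from a reranked candidate list (hypothetical `ranked_ids`/`relevant_ids` inputs; not the exact evaluation script):
+
+ ```python
+ def mrr_at_k(ranked_ids, relevant_ids, k=10):
+     # Reciprocal rank of the first relevant passage within the top-k (0 if none).
+     for rank, pid in enumerate(ranked_ids[:k], start=1):
+         if pid in relevant_ids:
+             return 1.0 / rank
+     return 0.0
+
+ def recall_at_k(ranked_ids, relevant_ids, k):
+     # Fraction of the relevant passages that appear in the top-k.
+     return len(set(ranked_ids[:k]) & set(relevant_ids)) / len(relevant_ids)
+ ```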
 
  ***

+ ## Training

+ #### Data

+ We use the French training samples from the [mMARCO](https://huggingface.co/datasets/unicamp-dl/mmarco) dataset, a multilingual machine-translated version of MS MARCO
+ that contains 8.8M passages and 539K training queries. We sample 1M question-passage pairs from the official ~39.8M
+ [training triples](https://microsoft.github.io/msmarco/Datasets.html#passage-ranking-dataset) with a positive-to-negative ratio of 1:3 (i.e., 25% of the pairs are
+ relevant and 75% are irrelevant).
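+
+ A minimal sketch of such a sampling scheme (hypothetical preprocessing, assuming `triples` is an iterable of (query, positive, negative) texts in which each query appears in several triples; not the exact script):
+
+ ```python
+ import random
+ from collections import defaultdict
+
+ def sample_pairs(triples, n_pairs=1_000_000, neg_per_pos=3):
+     # Group the BM25 negatives of each query; keep its positive passage.
+     positive, negatives = {}, defaultdict(list)
+     for query, pos, neg in triples:
+         positive[query] = pos
+         negatives[query].append(neg)
+
+     pairs, queries = [], list(positive)
+     while len(pairs) < n_pairs:
+         query = random.choice(queries)
+         pairs.append((query, positive[query], 1))   # relevant pair (25%)
+         k = min(neg_per_pos, len(negatives[query]))
+         for neg in random.sample(negatives[query], k=k):
+             pairs.append((query, neg, 0))           # irrelevant pairs (75%)
+     return pairs[:n_pairs]
+ ```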

+ #### Implementation

+ The model is initialized from the [nreimers/mMiniLMv2-L6-H384-distilled-from-XLMR-Large](https://huggingface.co/nreimers/mMiniLMv2-L6-H384-distilled-from-XLMR-Large) checkpoint and optimized via the binary cross-entropy loss
+ (as in [monoBERT](https://doi.org/10.48550/arXiv.1910.14424)). It is fine-tuned on one 32GB NVIDIA V100 GPU for 10 epochs (i.e., 312.4k steps) using the AdamW optimizer
+ with a batch size of 32 and a peak learning rate of 2e-5, warmed up over the first 500 steps and then linearly decayed. We set the maximum sequence length of the
+ concatenated question-passage pairs to 512 tokens. We use the sigmoid function to get scores between 0 and 1.
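+
+ For illustration, a minimal sketch of an equivalent fine-tuning setup with the sentence-transformers `CrossEncoder` API (not the exact training script; unspecified settings follow the library defaults):
+
+ ```python
+ from torch.utils.data import DataLoader
+ from sentence_transformers import CrossEncoder, InputExample
+
+ # Hypothetical labeled pairs: 1.0 for a relevant passage, 0.0 for an irrelevant one.
+ train_samples = [
+     InputExample(texts=['Question', 'Paragraphe pertinent'], label=1.0),
+     InputExample(texts=['Question', 'Paragraphe non pertinent'], label=0.0),
+ ]
+
+ # num_labels=1 makes fit() default to a binary cross-entropy (BCEWithLogitsLoss) objective.
+ model = CrossEncoder('nreimers/mMiniLMv2-L6-H384-distilled-from-XLMR-Large', num_labels=1, max_length=512)
+ train_dataloader = DataLoader(train_samples, shuffle=True, batch_size=32)
+ model.fit(
+     train_dataloader=train_dataloader,
+     epochs=10,
+     warmup_steps=500,               # linear warm-up, then linear decay ('WarmupLinear' scheduler)
+     optimizer_params={'lr': 2e-5},
+ )
+ ```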

+ ***

  ## Citation

  ```bibtex
  @online{louis2023,