antoinelouis committed
Commit 0114c8e (1 parent: fd63314)

Update README.md

Files changed (1): README.md (+2, −1)
README.md CHANGED

```diff
@@ -9,6 +9,7 @@ metrics:
 tags:
 - passage-retrieval
 library_name: sentence-transformers
+base_model: cmarkea/distilcamembert-base
 model-index:
 - name: biencoder-distilcamembert-mmarcoFR
   results:
@@ -148,7 +149,7 @@ We use the French training samples from the [mMARCO](https://huggingface.co/data
 
 #### Implementation
 
-The model is initialized from the [distilcamembert-base](https://huggingface.co/cmarkea/distilcamembert-base) checkpoint and optimized via the cross-entropy loss (as in [DPR](https://doi.org/10.48550/arXiv.2004.04906)) with a temperature of 0.05. It is fine-tuned on one 32GB NVIDIA V100 GPU for 20 epochs (i.e., 65.7k steps) using the AdamW optimizer with a batch size of 152, a peak learning rate of 2e-5 with warm up along the first 500 steps and linear scheduling. We set the maximum sequence lengths for both the questions and passages to 128 tokens. We use the cosine similarity to compute relevance scores.
+The model is initialized from the [cmarkea/distilcamembert-base](https://huggingface.co/cmarkea/distilcamembert-base) checkpoint and optimized via the cross-entropy loss (as in [DPR](https://doi.org/10.48550/arXiv.2004.04906)) with a temperature of 0.05. It is fine-tuned on one 32GB NVIDIA V100 GPU for 20 epochs (i.e., 65.7k steps) using the AdamW optimizer with a batch size of 152, a peak learning rate of 2e-5 with warm up along the first 500 steps and linear scheduling. We set the maximum sequence lengths for both the questions and passages to 128 tokens. We use the cosine similarity to compute relevance scores.
 
 ***
```
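
For readers who want to map the Implementation paragraph onto code, below is a minimal sketch using the sentence-transformers training API. The DPR-style in-batch cross-entropy with cosine similarity and temperature 0.05 corresponds to `MultipleNegativesRankingLoss` with `scale = 1 / 0.05 = 20`; the hyperparameters (batch size 152, peak LR 2e-5, 500 warm-up steps with linear scheduling, 20 epochs, 128-token limit) come from the paragraph above, while the dataset loading and the example pair are placeholder assumptions, not the author's actual training script.

```python
# Sketch of the fine-tuning recipe described in the README (sentence-transformers).
# Hyperparameters come from the model card; data loading is a placeholder.
from sentence_transformers import SentenceTransformer, InputExample, losses, util
from torch.utils.data import DataLoader

# Initialize the bi-encoder from the distilled French checkpoint (a pooling layer
# is added automatically when loading a plain transformers checkpoint).
model = SentenceTransformer("cmarkea/distilcamembert-base")
model.max_seq_length = 128  # cap both questions and passages at 128 tokens

# Placeholder (query, relevant passage) pair; the real run uses the French
# training samples from mMARCO.
train_samples = [
    InputExample(texts=["qui a inventé le téléphone ?",
                        "Alexander Graham Bell a breveté le téléphone en 1876."])
]
train_dataloader = DataLoader(train_samples, shuffle=True, batch_size=152)

# Cross-entropy over in-batch negatives (as in DPR); cosine similarity with
# temperature 0.05 means the logits are scaled by 1 / 0.05 = 20.
train_loss = losses.MultipleNegativesRankingLoss(model, scale=20.0,
                                                 similarity_fct=util.cos_sim)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=20,
    warmup_steps=500,               # linear warm-up over the first 500 steps,
    scheduler="WarmupLinear",       # then linear decay (the library default)
    optimizer_params={"lr": 2e-5},  # AdamW is the default optimizer class
)
```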