MrLight committed
Commit 2c87485
1 Parent(s): 6fd62bc

Update README.md

Files changed (1):
  1. README.md +2 -2
README.md CHANGED
@@ -3,12 +3,12 @@ license: llama2
 ---
 
 
-# RepLLaMA-7B-Passage
+# RepLLaMA-7B-Passage-MRL
 
 [Fine-Tuning LLaMA for Multi-Stage Text Retrieval](https://arxiv.org/abs/2310.08319).
 Xueguang Ma, Liang Wang, Nan Yang, Furu Wei, Jimmy Lin, arXiv 2023
 
-This model is fine-tuned from LLaMA-2-7B using LoRA and the embedding size is **flexible**.
+This model is fine-tuned from LLaMA-2-7B using LoRA and the embedding size is **flexible**, as Matryoshka Representation Learning is applied during training. The maximum dimensionality of query and passage embedding is 4096.
 
 ## Training Data
 The model is fine-tuned on the training split of [MS MARCO Passage Ranking](https://microsoft.github.io/msmarco/Datasets) datasets for 1 epoch.
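The added README line says the embedding size is flexible because Matryoshka Representation Learning is applied, with a maximum of 4096 dimensions. In MRL-trained models, a smaller embedding is obtained by keeping the leading components of the full vector and re-normalizing before dot-product scoring. A minimal sketch of that truncation step, using random vectors as stand-ins for real model outputs (the function name and dimensions here are illustrative, not part of the model card):

```python
import numpy as np

def truncate_embedding(emb: np.ndarray, dim: int) -> np.ndarray:
    """Keep the first `dim` components and L2-re-normalize (MRL-style truncation)."""
    v = emb[:dim]
    return v / np.linalg.norm(v)

# Stand-ins for full-size (4096-d) query and passage embeddings.
rng = np.random.default_rng(0)
q = rng.standard_normal(4096)
p = rng.standard_normal(4096)

# Score with truncated 256-d embeddings instead of the full 4096-d vectors.
q256 = truncate_embedding(q, 256)
p256 = truncate_embedding(p, 256)
score = float(q256 @ p256)
```

Smaller truncation dimensions trade retrieval quality for index size and scoring speed; the full 4096-d vectors remain the upper bound.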