intfloat committed on
Commit d13f1b2
1 Parent(s): 87d6695

Update README.md

Files changed (1)
  1. README.md +7 -10
README.md CHANGED
@@ -6778,10 +6778,7 @@ license: mit
 
 ## Multilingual-E5-base
 
- [Text Embeddings by Weakly-Supervised Contrastive Pre-training](https://arxiv.org/pdf/2212.03533.pdf).
- Liang Wang, Nan Yang, Xiaolong Huang, Binxing Jiao, Linjun Yang, Daxin Jiang, Rangan Majumder, Furu Wei, arXiv 2022
-
- [Multilingual E5 Text Embeddings: A Technical Report](https://arxiv.org/abs/2402.05672).
+ [Multilingual E5 Text Embeddings: A Technical Report](https://arxiv.org/pdf/2402.05672).
 Liang Wang, Nan Yang, Xiaolong Huang, Linjun Yang, Rangan Majumder, Furu Wei, arXiv 2024
 
 This model has 12 layers and the embedding size is 768.
@@ -6869,7 +6866,7 @@ but low-resource languages may see performance degradation.
 
 For all labeled datasets, we only use its training set for fine-tuning.
 
- For other training details, please refer to our paper at [https://arxiv.org/pdf/2212.03533.pdf](https://arxiv.org/pdf/2212.03533.pdf).
+ For other training details, please refer to our paper at [https://arxiv.org/pdf/2402.05672](https://arxiv.org/pdf/2402.05672).
 
 ## Benchmark Results on [Mr. TyDi](https://arxiv.org/abs/2108.08787)
 
@@ -6939,11 +6936,11 @@ so this should not be an issue.
 If you find our paper or models helpful, please consider cite as follows:
 
 ```
- @article{wang2022text,
- title={Text Embeddings by Weakly-Supervised Contrastive Pre-training},
- author={Wang, Liang and Yang, Nan and Huang, Xiaolong and Jiao, Binxing and Yang, Linjun and Jiang, Daxin and Majumder, Rangan and Wei, Furu},
- journal={arXiv preprint arXiv:2212.03533},
- year={2022}
+ @article{wang2024multilingual,
+ title={Multilingual E5 Text Embeddings: A Technical Report},
+ author={Wang, Liang and Yang, Nan and Huang, Xiaolong and Yang, Linjun and Majumder, Rangan and Wei, Furu},
+ journal={arXiv preprint arXiv:2402.05672},
+ year={2024}
 }
 ```
 
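The commit above only updates the paper reference and citation; the unchanged README lines state that the model has 12 layers and an embedding size of 768. A minimal sketch of loading the model and checking that dimension, assuming the Hugging Face repo id `intfloat/multilingual-e5-base`, mean pooling, and the E5-style `query: `/`passage: ` input prefixes (none of these details are spelled out in this diff):

```python
# Minimal sketch, not part of the commit: verify the 768-dim embeddings the README mentions.
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

model_id = "intfloat/multilingual-e5-base"  # assumed repo id; not stated in this diff
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

# E5-family models conventionally expect a "query: " or "passage: " prefix on inputs
# (assumption carried over from the E5 papers, not from this commit).
texts = ["query: how large are multilingual-e5-base embeddings?"]
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    outputs = model(**batch)

# Mean-pool the last hidden state over non-padding tokens, then L2-normalize.
mask = batch["attention_mask"].unsqueeze(-1).float()
embeddings = (outputs.last_hidden_state * mask).sum(dim=1) / mask.sum(dim=1)
embeddings = F.normalize(embeddings, p=2, dim=1)

print(embeddings.shape)  # expected torch.Size([1, 768]) per the README's "embedding size is 768"
```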