intfloat committed
Commit
ab10c1a
1 Parent(s): bbc53bb

Update README.md

Files changed (1):
  1. README.md +7 -10
README.md CHANGED
@@ -5951,10 +5951,7 @@ license: mit
 
 ## Multilingual-E5-large
 
-[Text Embeddings by Weakly-Supervised Contrastive Pre-training](https://arxiv.org/pdf/2212.03533.pdf).
-Liang Wang, Nan Yang, Xiaolong Huang, Binxing Jiao, Linjun Yang, Daxin Jiang, Rangan Majumder, Furu Wei, arXiv 2022
-
-[Multilingual E5 Text Embeddings: A Technical Report](https://arxiv.org/abs/2402.05672).
+[Multilingual E5 Text Embeddings: A Technical Report](https://arxiv.org/pdf/2402.05672).
 Liang Wang, Nan Yang, Xiaolong Huang, Linjun Yang, Rangan Majumder, Furu Wei, arXiv 2024
 
 This model has 24 layers and the embedding size is 1024.
@@ -6042,7 +6039,7 @@ but low-resource languages may see performance degradation.
 
 For all labeled datasets, we only use its training set for fine-tuning.
 
-For other training details, please refer to our paper at [https://arxiv.org/pdf/2212.03533.pdf](https://arxiv.org/pdf/2212.03533.pdf).
+For other training details, please refer to our paper at [https://arxiv.org/pdf/2402.05672](https://arxiv.org/pdf/2402.05672).
 
 ## Benchmark Results on [Mr. TyDi](https://arxiv.org/abs/2108.08787)
 
@@ -6112,11 +6109,11 @@ so this should not be an issue.
 If you find our paper or models helpful, please consider cite as follows:
 
 ```
-@article{wang2022text,
-title={Text Embeddings by Weakly-Supervised Contrastive Pre-training},
-author={Wang, Liang and Yang, Nan and Huang, Xiaolong and Jiao, Binxing and Yang, Linjun and Jiang, Daxin and Majumder, Rangan and Wei, Furu},
-journal={arXiv preprint arXiv:2212.03533},
-year={2022}
+@article{wang2024multilingual,
+title={Multilingual E5 Text Embeddings: A Technical Report},
+author={Wang, Liang and Yang, Nan and Huang, Xiaolong and Yang, Linjun and Majumder, Rangan and Wei, Furu},
+journal={arXiv preprint arXiv:2402.05672},
+year={2024}
 }
 ```
 
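
As a quick sanity check on the "24 layers and the embedding size is 1024" line that appears as context in the first hunk, here is a minimal sketch (not part of this commit). It assumes the `sentence-transformers` package is installed and that this README belongs to the `intfloat/multilingual-e5-large` checkpoint on the Hub; the `query: ` input prefix follows the usual E5 convention rather than anything stated in this diff.

```python
# Minimal sketch: confirm the embedding size documented in the README (1024).
# Assumes `pip install sentence-transformers` and network access to the Hub;
# the model id below is inferred from the repo, not stated in this diff.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("intfloat/multilingual-e5-large")

# E5-style models expect a "query: " or "passage: " prefix on input text.
embeddings = model.encode(["query: how are multilingual text embeddings trained?"])

print(embeddings.shape)  # expected (1, 1024), matching "the embedding size is 1024"
```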