intfloat committed
Commit 0a68dcd
1 Parent(s): 97e13e8

Update README.md

Files changed (1)
  1. README.md +7 -10
README.md CHANGED
@@ -5950,10 +5950,7 @@ license: mit
 
 ## Multilingual-E5-small
 
-[Text Embeddings by Weakly-Supervised Contrastive Pre-training](https://arxiv.org/pdf/2212.03533.pdf).
-Liang Wang, Nan Yang, Xiaolong Huang, Binxing Jiao, Linjun Yang, Daxin Jiang, Rangan Majumder, Furu Wei, arXiv 2022
-
-[Multilingual E5 Text Embeddings: A Technical Report](https://arxiv.org/abs/2402.05672).
+[Multilingual E5 Text Embeddings: A Technical Report](https://arxiv.org/pdf/2402.05672).
 Liang Wang, Nan Yang, Xiaolong Huang, Linjun Yang, Rangan Majumder, Furu Wei, arXiv 2024
 
 This model has 12 layers and the embedding size is 384.
@@ -6041,7 +6038,7 @@ but low-resource languages may see performance degradation.
 
 For all labeled datasets, we only use its training set for fine-tuning.
 
-For other training details, please refer to our paper at [https://arxiv.org/pdf/2212.03533.pdf](https://arxiv.org/pdf/2212.03533.pdf).
+For other training details, please refer to our paper at [https://arxiv.org/pdf/2402.05672](https://arxiv.org/pdf/2402.05672).
 
 ## Benchmark Results on [Mr. TyDi](https://arxiv.org/abs/2108.08787)
 
@@ -6111,11 +6108,11 @@ so this should not be an issue.
 If you find our paper or models helpful, please consider cite as follows:
 
 ```
-@article{wang2022text,
-  title={Text Embeddings by Weakly-Supervised Contrastive Pre-training},
-  author={Wang, Liang and Yang, Nan and Huang, Xiaolong and Jiao, Binxing and Yang, Linjun and Jiang, Daxin and Majumder, Rangan and Wei, Furu},
-  journal={arXiv preprint arXiv:2212.03533},
-  year={2022}
+@article{wang2024multilingual,
+  title={Multilingual E5 Text Embeddings: A Technical Report},
+  author={Wang, Liang and Yang, Nan and Huang, Xiaolong and Yang, Linjun and Majumder, Rangan and Wei, Furu},
+  journal={arXiv preprint arXiv:2402.05672},
+  year={2024}
 }
 ```
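
For context on the unchanged line "This model has 12 layers and the embedding size is 384", here is a minimal, hypothetical sketch (not part of this commit or the README hunks above) of how one might check the 384-dimensional output with the sentence-transformers package; the `query:` input prefix follows the E5 convention and is an assumption here, not something shown in this diff.

```python
# Hypothetical verification sketch, assuming the model is usable via
# sentence-transformers. Not taken from the README hunks in this commit.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("intfloat/multilingual-e5-small")

# E5-style models expect a "query: " or "passage: " prefix on each input
# (assumed convention, not shown in this diff).
embeddings = model.encode(["query: how many dimensions does this embedding have?"])

print(embeddings.shape)  # expected (1, 384), matching the embedding size stated in the README
```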