thenlper committed
Commit 5868460
1 Parent(s): 8345a94

Update README.md

Files changed (1)
  1. README.md +15 -2
README.md CHANGED
@@ -2606,7 +2606,7 @@ license: mit
 
 # gte-large
 
-Gegeral Text Embeddings (GTE) model.
+Gegeral Text Embeddings (GTE) model. [Towards General Text Embeddings with Multi-stage Contrastive Learning](https://arxiv.org/abs/2308.03281)
 
 The GTE models are trained by Alibaba DAMO Academy. They are mainly based on the BERT framework and currently offer three different sizes of models, including [GTE-large](https://huggingface.co/thenlper/gte-large), [GTE-base](https://huggingface.co/thenlper/gte-base), and [GTE-small](https://huggingface.co/thenlper/gte-small). The GTE models are trained on a large-scale corpus of relevance text pairs, covering a wide range of domains and scenarios. This enables the GTE models to be applied to various downstream tasks of text embeddings, including **information retrieval**, **semantic textual similarity**, **text reranking**, etc.
 
@@ -2684,4 +2684,17 @@ print(cos_sim(embeddings[0], embeddings[1]))
 
 ### Limitation
 
-This model exclusively caters to English texts, and any lengthy texts will be truncated to a maximum of 512 tokens.
+This model exclusively caters to English texts, and any lengthy texts will be truncated to a maximum of 512 tokens.
+
+### Citation
+
+If you find our paper or models helpful, please consider citing them as follows:
+
+@misc{li2023general,
+      title={Towards General Text Embeddings with Multi-stage Contrastive Learning},
+      author={Zehan Li and Xin Zhang and Yanzhao Zhang and Dingkun Long and Pengjun Xie and Meishan Zhang},
+      year={2023},
+      eprint={2308.03281},
+      archivePrefix={arXiv},
+      primaryClass={cs.CL}
+}
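
The README paragraph in the first hunk positions the GTE models for information retrieval, semantic textual similarity, and text reranking, and the second hunk's context line (`print(cos_sim(embeddings[0], embeddings[1]))`) indicates the README already carries a usage snippet that falls outside this diff. The following is only a minimal sketch of that kind of call, assuming the sentence-transformers package; it is not the README's actual code.

```python
# Hedged sketch (assumption: sentence-transformers is installed); not taken from
# the README shown in this commit.
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim

# Load the checkpoint this commit documents; downloads from the Hugging Face Hub.
model = SentenceTransformer("thenlper/gte-large")

sentences = [
    "The weather is lovely today.",
    "It is sunny outside.",
]

# One embedding vector per input sentence.
embeddings = model.encode(sentences)

# Cosine similarity between the two sentence embeddings, mirroring the call
# visible in the hunk context above.
print(cos_sim(embeddings[0], embeddings[1]))
```

The README's own snippet may instead use transformers with explicit pooling; only the final `cos_sim` comparison is visible in the hunk context here.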
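
The Limitation line added in the second hunk states that inputs longer than 512 tokens are truncated. As a hedged illustration of what that means in practice (not part of the README), one could inspect the truncation with the repository's tokenizer via transformers:

```python
# Illustrative only; the 512 value comes from the Limitation text above.
from transformers import AutoTokenizer

# Tokenizer shipped with the same repository.
tokenizer = AutoTokenizer.from_pretrained("thenlper/gte-large")

long_text = "embedding " * 1000  # far more than 512 tokens

# Typical embedding pipelines tokenize with truncation, so everything past
# max_length is dropped before it reaches the model.
encoded = tokenizer(long_text, truncation=True, max_length=512)
print(len(encoded["input_ids"]))  # 512 -> the tail of long_text is ignored
```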