thenlper committed on
Commit
2b1a85a
1 Parent(s): 5b7a05d

Update README.md

Files changed (1):
  1. README.md +16 -1
README.md CHANGED
@@ -2606,7 +2606,7 @@ license: mit
 
 # gte-base
 
-Gegeral Text Embeddings (GTE) model.
+Gegeral Text Embeddings (GTE) model. [Towards General Text Embeddings with Multi-stage Contrastive Learning](https://arxiv.org/abs/2308.03281)
 
 The GTE models are trained by Alibaba DAMO Academy. They are mainly based on the BERT framework and currently offer three different sizes of models, including [GTE-large](https://huggingface.co/thenlper/gte-large), [GTE-base](https://huggingface.co/thenlper/gte-base), and [GTE-small](https://huggingface.co/thenlper/gte-small). The GTE models are trained on a large-scale corpus of relevance text pairs, covering a wide range of domains and scenarios. This enables the GTE models to be applied to various downstream tasks of text embeddings, including **information retrieval**, **semantic textual similarity**, **text reranking**, etc.
 
@@ -2684,3 +2684,18 @@ print(cos_sim(embeddings[0], embeddings[1]))
 ### Limitation
 
 This model exclusively caters to English texts, and any lengthy texts will be truncated to a maximum of 512 tokens.
+
+### Citation
+
+If you find our paper or models helpful, please consider citing them as follows:
+
+```
+@misc{li2023general,
+      title={Towards General Text Embeddings with Multi-stage Contrastive Learning},
+      author={Zehan Li and Xin Zhang and Yanzhao Zhang and Dingkun Long and Pengjun Xie and Meishan Zhang},
+      year={2023},
+      eprint={2308.03281},
+      archivePrefix={arXiv},
+      primaryClass={cs.CL}
+}
+```
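
The second hunk header references the model card's usage snippet via `print(cos_sim(embeddings[0], embeddings[1]))`. For context, here is a minimal sketch of that kind of usage with the `sentence-transformers` library; the model ID comes from the card, but the exact snippet in the README may differ.

```python
# Minimal sketch (not necessarily the model card's exact snippet):
# embed two sentences with thenlper/gte-base and compare them.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("thenlper/gte-base")

sentences = [
    "what is the capital of China?",
    "how to implement quick sort in python?",
]

# encode() returns one 768-dimensional embedding per input sentence
embeddings = model.encode(sentences)

# Cosine similarity between the two sentence embeddings
print(util.cos_sim(embeddings[0], embeddings[1]))
```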