thenlper committed on
Commit 7ee2ea3
1 Parent(s): 586352f

Update README.md

Files changed (1)
  1. README.md +32 -3
README.md CHANGED
@@ -1,3 +1,32 @@
- ---
- license: apache-2.0
- ---
+ ---
+ tags:
+ - mteb
+ - sentence-transformers
+ - transformers
+ - multilingual
+ - sentence-similarity
+ license: apache-2.0
+ ---
+
+ ## gte-multilingual-base
+
+ The **gte-multilingual-base** model is the latest in the [GTE](https://huggingface.co/collections/Alibaba-NLP/gte-models-6680f0b13f885cb431e6d469) (General Text Embedding) family of models, featuring several key attributes:
+
+ - **High Performance**: Achieves state-of-the-art (SOTA) results on multilingual retrieval tasks and multi-task representation evaluations when compared to models of similar size.
+ - **Training Architecture**: Trained with an encoder-only transformer architecture, resulting in a smaller model size. Unlike previous models based on a decoder-only LLM architecture (e.g., gte-qwen2-1.5b-instruct), this model has lower hardware requirements for inference and offers a 10x increase in inference speed.
+ - **Long Context**: Supports text lengths of up to **8192** tokens.
+ - **Multilingual Capability**: Supports over **70** languages.
+ - **Elastic Dense Embedding**: Supports elastic output dense representations while maintaining effectiveness on downstream tasks, which significantly reduces storage costs and improves execution efficiency (see the truncation sketch under Usage below).
+ - **Sparse Vectors**: In addition to dense representations, it can also generate sparse vectors.
+
+ ## Model Information
+ - Model Size: 304M
+ - Embedding Dimension: 768
+ - Max Input Tokens: 8192
+
+ ## Requirements
+ ```
+ transformers>=4.39.2
+ flash_attn>=2.5.6
+ ```
+ ## Usage
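
A minimal dense-embedding example with `sentence-transformers` could look like the sketch below. The model id `Alibaba-NLP/gte-multilingual-base` and the need for `trust_remote_code=True` are assumptions for illustration, not details stated in this README.

```python
# Hedged sketch: dense sentence embeddings via sentence-transformers.
# Assumptions (not confirmed by this README): the model id below and that the
# repository ships custom modeling code loaded with trust_remote_code=True.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Alibaba-NLP/gte-multilingual-base", trust_remote_code=True)

sentences = [
    "what is the capital of China?",
    "how to implement quick sort in python?",
    "北京",
    "快排算法介绍",
]

# 768-dimensional dense embeddings (see Model Information), L2-normalized here.
embeddings = model.encode(sentences, normalize_embeddings=True)

# On normalized vectors, cosine similarity reduces to a dot product.
scores = embeddings @ embeddings.T
print(scores.shape)  # (4, 4)
```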
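For the elastic dense embeddings mentioned in the feature list, one common way such representations are consumed is to keep only a leading slice of the full 768-dimensional vector and re-normalize it. The 256-dimension cut-off below is an arbitrary illustration, not a value prescribed by the model.

```python
import numpy as np

def truncate_embeddings(embeddings: np.ndarray, dim: int = 256) -> np.ndarray:
    """Keep the first `dim` components of each embedding and re-normalize.

    Illustrative only: assumes the leading dimensions of the elastic
    representation remain meaningful on their own, as the feature list describes.
    """
    truncated = embeddings[:, :dim]
    norms = np.linalg.norm(truncated, axis=1, keepdims=True)
    return truncated / np.clip(norms, 1e-12, None)

# Example: shrink the embeddings from the previous snippet to 256 dimensions.
small = truncate_embeddings(embeddings, dim=256)
small_scores = small @ small.T
```

Storing a 256-dimensional prefix instead of the full 768 dimensions cuts vector storage by roughly two-thirds; how much retrieval quality this costs depends on the task and is worth verifying empirically.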