izhx committed on
Commit
791c3b7
1 Parent(s): 47afa7f

Update README.md

Files changed (1)
  1. README.md +3 -3
README.md CHANGED
@@ -2610,7 +2610,7 @@ model-index:
 
  # gte-large-en-v1.5
 
- We introduce the `gte-v1.5` series, upgraded `gte` embeddings that support a context length of up to **8192**.
+ We introduce the `gte-v1.5` series, upgraded `gte` embeddings that support a context length of up to **8192**, while further enhancing model performance.
  The models are built upon the `transformer++` encoder [backbone](https://huggingface.co/Alibaba-NLP/new-impl) (BERT + RoPE + GLU).
 
  The `gte-v1.5` series achieves state-of-the-art scores on the MTEB benchmark within the same model size category and provides competitive results on the LoCo long-context retrieval tests (refer to [Evaluation](#evaluation)).
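As a usage note for the model described above: a minimal sketch of encoding with the long-context model, assuming the usual `sentence-transformers` loading path and that the custom `transformer++` backbone needs `trust_remote_code=True` (neither detail appears in this diff):

```python
# Minimal sketch, not from this commit: assumes gte-large-en-v1.5 loads via
# sentence-transformers and that trust_remote_code=True is needed for the
# custom transformer++ (BERT + RoPE + GLU) backbone.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Alibaba-NLP/gte-large-en-v1.5", trust_remote_code=True)

sentences = [
    "what is the capital of China?",
    "how to implement quick sort in python?",
]
embeddings = model.encode(sentences, normalize_embeddings=True)
print(embeddings.shape)  # expected (2, 1024), matching the Dimension column below
```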
@@ -2630,7 +2630,7 @@ a SOTA instruction-tuned multi-lingual embedding model that ranked 2nd in MTEB a
 
  | Models | Language | Model Size (M params) | Max Seq. Length | Dimension | MTEB-en | LoCo |
  |:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|
- |[`gte-Qwen1.5-7B-instruct`](https://huggingface.co/Alibaba-NLP/gte-Qwen1.5-7B-instruct)| multi lingual | 7720 | 32768 | 4096 | 67.34 | 87.57 |
+ |[`gte-Qwen1.5-7B-instruct`](https://huggingface.co/Alibaba-NLP/gte-Qwen1.5-7B-instruct)| Multiple | 7720 | 32768 | 4096 | 67.34 | 87.57 |
  |[`gte-large-en-v1.5`](https://huggingface.co/Alibaba-NLP/gte-large-en-v1.5) | English | 434 | 8192 | 1024 | 65.39 | 86.71 |
  |[`gte-base-en-v1.5`](https://huggingface.co/Alibaba-NLP/gte-base-en-v1.5) | English | 137 | 8192 | 768 | 64.11 | 87.44 |
 
@@ -2727,7 +2727,7 @@ The gte evaluation setting: `mteb==1.2.0, fp16 auto mix precision, max_length=8192`
  | [bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) | 109 | 768 | 512 | 63.55 | 75.53 | 45.77 | 86.55 | 58.86 | 53.25 | 82.4 | 31.07 |
 
 
- ### LOCO
+ ### LoCo
 
  | Model Name | Dimension | Sequence Length | Average (5) | QsmsumRetrieval | SummScreenRetrieval | QasperAbastractRetrieval | QasperTitleRetrieval | GovReportRetrieval |
  |:----:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
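The hunk header above records the evaluation setting (`mteb==1.2.0, fp16 auto mix precision, max_length=8192`). A hedged sketch of such a run follows; the single task and output folder are illustrative stand-ins, and casting the weights to fp16 only approximates automatic mixed precision:

```python
# Sketch of an MTEB run under the stated setting (mteb==1.2.0).
# Assumptions: "SciFact" stands in for the full MTEB-en task list, and
# model.half() approximates the "fp16 auto mix precision" setting.
from mteb import MTEB
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Alibaba-NLP/gte-large-en-v1.5", trust_remote_code=True)
model.max_seq_length = 8192  # max_length=8192 from the evaluation setting
model.half()                 # fp16 weights

evaluation = MTEB(tasks=["SciFact"])
evaluation.run(model, output_folder="results/gte-large-en-v1.5")
```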
 