Shitao committed
Commit 53684d8
1 Parent(s): ce2a0d1

Upload README.md with huggingface_hub

Files changed (1)
1. README.md +20 -22
README.md CHANGED
@@ -1,13 +1,13 @@
 ---
 license: mit
 language:
-- zh
 - en
+- zh
 
 tags:
 - mteb
 model-index:
-- name: bge-reranker-large
+- name: bge-reranker-base
   results:
   - task:
       type: Reranking
@@ -19,9 +19,9 @@ model-index:
       revision: None
     metrics:
     - type: map
-      value: 82.13813829648727
+      value: 81.27206722525007
     - type: mrr
-      value: 84.92349206349207
+      value: 84.14238095238095
   - task:
       type: Reranking
     dataset:
@@ -32,9 +32,9 @@ model-index:
       revision: None
     metrics:
     - type: map
-      value: 84.19313276771856
+      value: 84.10369934291236
     - type: mrr
-      value: 86.96876984126984
+      value: 86.79376984126984
   - task:
       type: Reranking
     dataset:
@@ -45,9 +45,9 @@ model-index:
       revision: None
     metrics:
     - type: map
-      value: 37.16533876035345
+      value: 35.4600511272538
    - type: mrr
-      value: 36.60039682539682
+      value: 34.602380952380946
   - task:
       type: Reranking
     dataset:
@@ -58,9 +58,9 @@ model-index:
       revision: None
     metrics:
     - type: map
-      value: 67.60068968300665
+      value: 67.27728847727172
    - type: mrr
-      value: 77.68363585560605
+      value: 77.1315192743764
 ---
 
 
@@ -74,31 +74,33 @@ model-index:
 <a href="#usage">Usage</a> |
 <a href="#evaluation">Evaluation</a> |
 <a href="#train">Train</a> |
-<a href="#contact">Contact</a> |
 <a href="#citation">Citation</a> |
 <a href="#license">License</a>
 <p>
 </h4>
 
-More details please refer to our Github: [FlagEmbedding](https://github.com/FlagOpen/FlagEmbedding).
+**For more details, please refer to our GitHub repo: [FlagEmbedding](https://github.com/FlagOpen/FlagEmbedding).**
 
 
 [English](README.md) | [中文](https://github.com/FlagOpen/FlagEmbedding/blob/master/README_zh.md)
 
+
 FlagEmbedding focuses on retrieval-augmented LLMs and currently consists of the following projects:
 
 - **Long-Context LLM**: [Activation Beacon](https://github.com/FlagOpen/FlagEmbedding/tree/master/Long_LLM/activation_beacon)
 - **Fine-tuning of LM**: [LM-Cocktail](https://github.com/FlagOpen/FlagEmbedding/tree/master/LM_Cocktail)
-- **Dense Retrieval**: [BGE-M3](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/BGE_M3), [LLM Embedder](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/llm_embedder), [BGE Embedding](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/baai_general_embedding)
-- **Reranker Model**: [BGE Reranker](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/reranker)
+- **Embedding Model**: [Visualized-BGE](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/visual), [BGE-M3](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/BGE_M3), [LLM Embedder](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/llm_embedder), [BGE Embedding](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/baai_general_embedding)
+- **Reranker Model**: [LLM rerankers](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/llm_reranker), [BGE Reranker](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/reranker)
 - **Benchmark**: [C-MTEB](https://github.com/FlagOpen/FlagEmbedding/tree/master/C_MTEB)
 
 ## News
+- 3/18/2024: Release new [rerankers](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/llm_reranker), built upon the powerful M3 and LLM (GEMMA and MiniCPM, not so large actually) backbones, supporting multi-lingual processing and larger inputs, with massive improvements in ranking performance on BEIR, C-MTEB/Retrieval, MIRACL, and LlamaIndex Evaluation.
+- 3/18/2024: Release [Visualized-BGE](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/visual), equipping BGE with visual capabilities. Visualized-BGE can be used to generate embeddings for hybrid image-text data.
 - 1/30/2024: Release **BGE-M3**, a new member of the BGE model series! M3 stands for **M**ulti-linguality (100+ languages), **M**ulti-granularity (input length up to 8192), and **M**ulti-functionality (unification of dense, lexical, and multi-vector/ColBERT retrieval).
 It is the first embedding model that supports all three retrieval methods, achieving new SOTA on multi-lingual (MIRACL) and cross-lingual (MKQA) benchmarks.
 [Technical Report](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/BGE_M3/BGE_M3.pdf) and [Code](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/BGE_M3). :fire:
 - 1/9/2024: Release [Activation-Beacon](https://github.com/FlagOpen/FlagEmbedding/tree/master/Long_LLM/activation_beacon), an effective, efficient, compatible, and low-cost (training) method to extend the context length of LLMs. [Technical Report](https://arxiv.org/abs/2401.03462) :fire:
-- 12/24/2023: Release **LLaRA**, a LLaMA-7B based dense retriever, leading to state-of-the-art performances on MS MARCO and BEIR. Model and code will be open-sourced. Please stay tuned. [Technical Report](https://arxiv.org/abs/2312.15503) :fire:
+- 12/24/2023: Release **LLaRA**, a LLaMA-7B-based dense retriever that achieves state-of-the-art performance on MS MARCO and BEIR. Model and code will be open-sourced. Please stay tuned. [Technical Report](https://arxiv.org/abs/2312.15503)
 - 11/23/2023: Release [LM-Cocktail](https://github.com/FlagOpen/FlagEmbedding/tree/master/LM_Cocktail), a method to maintain general capabilities during fine-tuning by merging multiple language models. [Technical Report](https://arxiv.org/abs/2311.13534) :fire:
 - 10/12/2023: Release [LLM-Embedder](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/llm_embedder), a unified embedding model to support diverse retrieval-augmentation needs for LLMs. [Technical Report](https://arxiv.org/pdf/2310.07554.pdf)
 - 09/15/2023: The [technical report](https://arxiv.org/pdf/2309.07597.pdf) of BGE has been released.
@@ -164,7 +166,8 @@ Following this [example](https://github.com/FlagOpen/FlagEmbedding/tree/master/e
 Some suggestions:
 - Mine hard negatives following this [example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune#hard-negatives), which can improve retrieval performance.
 - If you pre-train bge on your own data, the pre-trained model cannot be used directly to calculate similarity; it must be fine-tuned with contrastive learning before computing similarity.
-- If the accuracy of the fine-tuned model is still not high, it is recommended to use/fine-tune the cross-encoder model (bge-reranker) to re-rank top-k results. Hard negatives also are needed to fine-tune reranker.
+- If the accuracy of the fine-tuned model is still not high, it is recommended to use/fine-tune the cross-encoder model (bge-reranker) to re-rank the top-k results.
+Hard negatives are also needed to fine-tune the reranker; refer to this [example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/reranker) for reranker fine-tuning.
 
 
 </details>
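The sketch below illustrates the re-ranking step recommended in the last suggestion above. It assumes the `FlagEmbedding` package is installed (`pip install -U FlagEmbedding`); the query and candidate passages are illustrative stand-ins for a first-stage retriever's top-k output.

```python
from FlagEmbedding import FlagReranker

# Load the cross-encoder reranker; use_fp16=True speeds up inference
# at a small cost in accuracy.
reranker = FlagReranker('BAAI/bge-reranker-base', use_fp16=True)

query = "what is panda?"
# Stand-ins for the top-k passages returned by a first-stage retriever.
candidates = [
    "The giant panda is a bear species endemic to China.",
    "pandas is a Python library for data analysis.",
    "Paris is the capital of France.",
]

# compute_score takes [query, passage] pairs and returns one relevance
# score per pair (an unbounded logit; higher means more relevant).
scores = reranker.compute_score([[query, p] for p in candidates])

# Keep the most relevant passages first.
for passage, score in sorted(zip(candidates, scores), key=lambda x: x[1], reverse=True):
    print(f"{score:.4f}  {passage}")
```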
@@ -503,10 +506,6 @@ The data format is the same as embedding model, so you can fine-tune it easily f
 For more details, please refer to [./FlagEmbedding/reranker/README.md](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/reranker)
 
 
-## Contact
-If you have any question or suggestion related to this project, feel free to open an issue or pull request.
-You also can email Shitao Xiao(stxiao@baai.ac.cn) and Zheng Liu(liuzheng@baai.ac.cn).
-
 
 ## Citation
 
@@ -524,5 +523,4 @@ If you find this repository useful, please consider giving a star :star: and cit
 ```
 
 ## License
-FlagEmbedding is licensed under the [MIT License](https://github.com/FlagOpen/FlagEmbedding/blob/master/LICENSE). The released models can be used for commercial purposes free of charge.
-
+FlagEmbedding is licensed under the [MIT License](https://github.com/FlagOpen/FlagEmbedding/blob/master/LICENSE). The released models can be used for commercial purposes free of charge.
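One hunk above notes that the reranker reuses the embedding model's fine-tuning data format. In FlagEmbedding's fine-tuning examples, that format is JSON Lines: one record per line with a query, positive passages, and (mined) hard-negative passages. A minimal sketch of preparing such a file; the file name and texts are illustrative:

```python
import json

# One training record: a query, passages that should rank high ("pos"),
# and hard negatives that should rank low ("neg").
examples = [
    {
        "query": "what is panda?",
        "pos": ["The giant panda is a bear species endemic to China."],
        "neg": [
            "pandas is a Python library for data analysis.",
            "Paris is the capital of France.",
        ],
    },
]

# JSON Lines: one JSON object per line.
with open("toy_finetune_data.jsonl", "w", encoding="utf-8") as f:
    for record in examples:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")
```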