Shitao committed
Commit: 166305c
1 Parent(s): 7fc4958

Update README.md

Files changed (1)
  1. README.md +61 -18
README.md CHANGED
@@ -9,7 +9,8 @@ license: mit
 
 For more details please refer to our github repo: https://github.com/FlagOpen/FlagEmbedding
 
-# BGE-M3
+# BGE-M3 ([paper](https://arxiv.org/pdf/2402.03216.pdf), [code](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/BGE_M3))
+
 In this project, we introduce BGE-M3, which is distinguished for its versatility in Multi-Functionality, Multi-Linguality, and Multi-Granularity.
 - Multi-Functionality: It can simultaneously perform the three common retrieval functionalities of embedding model: dense retrieval, multi-vector retrieval, and sparse retrieval.
 - Multi-Linguality: It can support more than 100 working languages.
@@ -25,15 +26,29 @@ This allows you to obtain token weights (similar to the BM25) without any additi
 Utilizing the re-ranking model (e.g., [bge-reranker](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/reranker), [cohere-reranker](https://txt.cohere.com/rerank/)) after retrieval can further filter the selected text.
 
 
-## Model Specs
+## News:
+- 2/6/2024: We release the [MLDR](https://huggingface.co/datasets/Shitao/MLDR) (a long document retrieval dataset covering 13 languages) and the [evaluation pipeline](https://github.com/FlagOpen/FlagEmbedding/tree/master/C_MTEB/MLDR).
+- 2/1/2024: **Thanks for the excellent tool from Vespa.** You can easily use multiple modes of BGE-M3 following this [notebook](https://github.com/vespa-engine/pyvespa/blob/master/docs/sphinx/source/examples/mother-of-all-embedding-models-cloud.ipynb).
+
+
+## Specs
+
+- Model
 
-| Model Name | Dimension | Sequence Length |
-|:----:|:---:|:---:|
-| [BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3) | 1024 | 8192 |
-| [BAAI/bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5) | 1024 | 512 |
-| [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) | 768 | 512 |
-| [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) | 384 | 512 |
+| Model Name | Dimension | Sequence Length | Introduction |
+|:----:|:---:|:---:|:---:|
+| [BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3) | 1024 | 8192 | multilingual; unified fine-tuning (dense, sparse, and colbert) from bge-m3-unsupervised |
+| [BAAI/bge-m3-unsupervised](https://huggingface.co/BAAI/bge-m3-unsupervised) | 1024 | 8192 | multilingual; contrastive learning from bge-m3-retromae |
+| [BAAI/bge-m3-retromae](https://huggingface.co/BAAI/bge-m3-retromae) | -- | 8192 | multilingual; extends the max_length of [xlm-roberta](https://huggingface.co/FacebookAI/xlm-roberta-large) to 8192 and is further pretrained via [RetroMAE](https://github.com/staoxiao/RetroMAE) |
+| [BAAI/bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5) | 1024 | 512 | English model |
+| [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) | 768 | 512 | English model |
+| [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) | 384 | 512 | English model |
 
+- Data
+
+| Dataset | Introduction |
+|:----:|:---:|
+| [MLDR](https://huggingface.co/datasets/Shitao/MLDR) | Document retrieval dataset, covering 13 languages |
 
 
 ## FAQ
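The dimension and sequence-length figures in the table above can be sanity-checked from the published model configuration. A minimal sketch, assuming the checkpoint exposes a standard Hugging Face config (BGE-M3 is XLM-RoBERTa-based):

```python
# Sketch: read the embedding dimension and position limit from the hub config (no weights downloaded).
from transformers import AutoConfig

cfg = AutoConfig.from_pretrained("BAAI/bge-m3")
print(cfg.hidden_size)               # expected to match the 1024-d "Dimension" column
print(cfg.max_position_embeddings)   # upper bound behind the 8192-token "Sequence Length" column
```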
@@ -44,7 +59,18 @@ Utilizing the re-ranking model (e.g., [bge-reranker](https://github.com/FlagOpen
 - Sparse retrieval (lexical matching): a vector of size equal to the vocabulary, with the majority of positions set to zero, calculating a weight only for tokens present in the text. e.g., BM25, [unicoil](https://arxiv.org/pdf/2106.14807.pdf), and [splade](https://arxiv.org/abs/2107.05720)
 - Multi-vector retrieval: use multiple vectors to represent a text, e.g., [ColBERT](https://arxiv.org/abs/2004.12832).
 
-**2. How to use BGE-M3 in other projects?**
+**2. Comparison with BGE-v1.5 and other monolingual models**
+
+BGE-M3 is a multilingual model, and its ability in monolingual embedding retrieval may not surpass models specifically designed for single languages.
+However, we still recommend trying BGE-M3 because of its versatility (support for multiple languages and long texts).
+Moreover, it can simultaneously generate multiple representations, and using them together can enhance accuracy and generalization,
+unlike most existing models that can only perform dense retrieval.
+
+In the open-source community, there are many excellent models (e.g., jina-embedding, colbert, e5, etc.),
+and users can choose a model that suits their specific needs based on practical considerations,
+such as whether they require multilingual or cross-language support, and whether they need to process long texts.
+
+**3. How to use BGE-M3 in other projects?**
 
 For embedding retrieval, you can employ the BGE-M3 model using the same approach as BGE.
 The only difference is that the BGE-M3 model no longer requires adding instructions to the queries.
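For context on the "same approach as BGE" point above, here is a minimal sketch of dense plus sparse encoding, assuming the `BGEM3FlagModel` interface documented in the FlagEmbedding repo (note that the query is passed as-is, with no instruction prefix):

```python
# Minimal sketch: dense and sparse (lexical) scoring with BGE-M3 via FlagEmbedding.
from FlagEmbedding import BGEM3FlagModel

model = BGEM3FlagModel("BAAI/bge-m3", use_fp16=True)   # fp16 speeds up encoding with little accuracy loss

queries = ["What is BGE-M3?"]                           # no instruction prefix is added to the query
passages = ["BGE-M3 supports dense, sparse, and multi-vector retrieval in more than 100 languages."]

q = model.encode(queries, return_dense=True, return_sparse=True)
p = model.encode(passages, return_dense=True, return_sparse=True)

# Dense score: inner product of the dense vectors.
print(q["dense_vecs"] @ p["dense_vecs"].T)

# Sparse score: overlap of per-token weights, similar in spirit to BM25.
print(model.compute_lexical_matching_score(q["lexical_weights"][0], p["lexical_weights"][0]))
```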
@@ -52,7 +78,12 @@ For sparse retrieval methods, most open-source libraries currently do not suppor
 Contributions from the community are welcome.
 
 
-**3. How to fine-tune bge-M3 model?**
+In our experiments, we use [Pyserini](https://github.com/FlagOpen/FlagEmbedding/tree/master/C_MTEB/MLDR#hybrid-retrieval-dense--sparse) and Faiss to do hybrid retrieval.
+**Now you can try the hybrid mode of BGE-M3 in [Vespa](https://github.com/vespa-engine/pyvespa/blob/master/docs/sphinx/source/examples/mother-of-all-embedding-models-cloud.ipynb). Thanks @jobergum.**
+
+
+**4. How to fine-tune bge-M3 model?**
 
 You can follow the common practice in this [example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune)
 to fine-tune the dense embedding.
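The Pyserini/Faiss pipeline linked above indexes the dense and sparse representations separately and then fuses the scores. Purely as an illustration of that fusion step (the 0.6/0.4 weights are placeholders, not tuned values), a small in-memory sketch:

```python
# Illustrative sketch of hybrid (dense + sparse) score fusion over a small candidate set.
import numpy as np
from FlagEmbedding import BGEM3FlagModel

model = BGEM3FlagModel("BAAI/bge-m3", use_fp16=True)

query = "How long can the input of bge-m3 be?"
docs = [
    "BGE-M3 can process input sequences of up to 8192 tokens.",
    "BM25 is a classical lexical retrieval baseline.",
]

q = model.encode([query], return_dense=True, return_sparse=True)
d = model.encode(docs, return_dense=True, return_sparse=True)

dense_scores = (q["dense_vecs"] @ d["dense_vecs"].T)[0]            # one score per candidate document
sparse_scores = np.array([
    model.compute_lexical_matching_score(q["lexical_weights"][0], w)
    for w in d["lexical_weights"]
])

hybrid = 0.6 * dense_scores + 0.4 * sparse_scores                  # placeholder weights; tune per task
print(docs[int(np.argmax(hybrid))])
```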
@@ -193,6 +224,13 @@ print(model.compute_score(sentence_pairs,
 - Long Document Retrieval
   - MLDR:
   ![avatar](./imgs/long.jpg)
+  Please note that [MLDR](https://huggingface.co/datasets/Shitao/MLDR) is a document retrieval dataset we constructed via LLM,
+  covering 13 languages and including test, validation, and training sets.
+  We utilized the training set from MLDR to enhance the model's long document retrieval capabilities.
+  Therefore, comparing baselines with `Dense w.o.long` (fine-tuning without the long document dataset) is more equitable.
+  Additionally, this long document retrieval dataset will be open-sourced to address the current lack of open-source multilingual long text retrieval datasets.
+  We believe that this data will be helpful for the open-source community in training document retrieval models.
+
   - NarrativeQA:
   ![avatar](./imgs/nqa.jpg)
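The MLDR splits described above can be inspected directly from the hub. A hedged sketch, where the `"en"` configuration name and the need for `trust_remote_code` are assumptions rather than documented facts:

```python
# Sketch: peek at one language split of MLDR (the "en" config name is an assumption).
from datasets import load_dataset

mldr = load_dataset("Shitao/MLDR", "en", split="test", trust_remote_code=True)
print(len(mldr), mldr[0].keys())     # number of queries and the available fields
```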
 
@@ -205,24 +243,29 @@ The small-batch strategy is simple but effective, which also can used to fine-tu
 - MCLS: A simple method to improve the performance on long text without fine-tuning.
   If you do not have enough resources to fine-tune the model with long text, this method is useful.
 
-Refer to our [report](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/BGE_M3/BGE_M3.pdf) for more details.
+Refer to our [report](https://arxiv.org/pdf/2402.03216.pdf) for more details.
 
 **The fine-tuning codes and datasets will be open-sourced in the near future.**
 
-## Models
-
-We release two versions:
-- BAAI/bge-m3-unsupervised: the model after contrastive learning in a large-scale dataset
-- BAAI/bge-m3: the final model fine-tuned from BAAI/bge-m3-unsupervised
 
 ## Acknowledgement
 
-Thanks the authors of open-sourced datasets, including Miracl, MKQA, NarritiveQA, etc.
+Thanks to the authors of the open-sourced datasets, including MIRACL, MKQA, NarrativeQA, etc.
+Thanks to the open-sourced libraries like [Tevatron](https://github.com/texttron/tevatron) and [Pyserini](https://github.com/castorini/pyserini).
+
+
 
 ## Citation
 
 If you find this repository useful, please consider giving a star :star: and citation
 
 ```
-
+@misc{bge-m3,
+  title={BGE M3-Embedding: Multi-Lingual, Multi-Functionality, Multi-Granularity Text Embeddings Through Self-Knowledge Distillation},
+  author={Jianlv Chen and Shitao Xiao and Peitian Zhang and Kun Luo and Defu Lian and Zheng Liu},
+  year={2024},
+  eprint={2402.03216},
+  archivePrefix={arXiv},
+  primaryClass={cs.CL}
+}
 ```
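Regarding the MCLS trick mentioned in the hunk above: the report describes inserting a "[CLS]"-style token every fixed number of tokens and averaging the CLS hidden states, so long inputs are covered without fine-tuning. The following is only a sketch of that idea with plain transformers; the 256-token interval and the mean pooling follow the report's description, not an official implementation:

```python
# Sketch of MCLS: insert an extra CLS token every `interval` tokens and average their hidden states.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("BAAI/bge-m3")
model = AutoModel.from_pretrained("BAAI/bge-m3").eval()

def mcls_embed(text: str, interval: int = 256) -> torch.Tensor:
    # Tokenize without special tokens, then re-insert a CLS token at the start of every chunk.
    ids = tokenizer(text, add_special_tokens=False, truncation=True, max_length=8000)["input_ids"]
    with_cls = []
    for i in range(0, len(ids), interval):
        with_cls.append(tokenizer.cls_token_id)
        with_cls.extend(ids[i:i + interval])
    with_cls.append(tokenizer.sep_token_id)
    input_ids = torch.tensor([with_cls])
    with torch.no_grad():
        hidden = model(input_ids).last_hidden_state[0]              # (seq_len, hidden_size)
    cls_positions = [i for i, t in enumerate(with_cls) if t == tokenizer.cls_token_id]
    emb = hidden[cls_positions].mean(dim=0)                         # average all CLS hidden states
    return torch.nn.functional.normalize(emb, dim=-1)

print(mcls_embed("a long document " * 600).shape)                   # e.g. torch.Size([1024])
```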
 