DavidDutour committed on
Commit 2de8ab1 · verified · 1 Parent(s): 79ea563

Upload 11 files

.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ tokenizer.json filter=lfs diff=lfs merge=lfs -text
1_Pooling/config.json ADDED
@@ -0,0 +1,7 @@
+ {
+   "word_embedding_dimension": 1024,
+   "pooling_mode_cls_token": true,
+   "pooling_mode_mean_tokens": false,
+   "pooling_mode_max_tokens": false,
+   "pooling_mode_mean_sqrt_len_tokens": false
+ }
README.md ADDED
@@ -0,0 +1,251 @@
+ ---
+ pipeline_tag: sentence-similarity
+ tags:
+ - sentence-transformers
+ - feature-extraction
+ - sentence-similarity
+ license: mit
+ ---
+
+ For more details, please refer to our GitHub repo: https://github.com/FlagOpen/FlagEmbedding
+
+ # BGE-M3
+ In this project, we introduce BGE-M3, which is distinguished by its versatility in Multi-Functionality, Multi-Linguality, and Multi-Granularity.
+ - Multi-Functionality: It can simultaneously perform the three common retrieval functionalities of embedding models: dense retrieval, multi-vector retrieval, and sparse retrieval.
+ - Multi-Linguality: It supports more than 100 working languages.
+ - Multi-Granularity: It can process inputs of different granularities, spanning from short sentences to long documents of up to 8192 tokens.
+
+ **Some suggestions for a retrieval pipeline in RAG:**
+ We recommend the following pipeline: hybrid retrieval + re-ranking.
+ - Hybrid retrieval leverages the strengths of various methods, offering higher accuracy and stronger generalization capabilities.
+ A classic example: using both embedding retrieval and the BM25 algorithm.
+ Now, you can try BGE-M3, which supports both embedding and sparse retrieval.
+ This allows you to obtain token weights (similar to BM25) at no additional cost when generating dense embeddings.
+ - As cross-encoder models, re-rankers demonstrate higher accuracy than bi-encoder embedding models.
+ Utilizing a re-ranking model (e.g., [bge-reranker](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/reranker), [cohere-reranker](https://txt.cohere.com/rerank/)) after retrieval can further filter the selected text; see the sketch after this list.
+
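+ As a minimal sketch of such a pipeline (illustration only: the candidate passages are placeholders, and the re-ranker model name is one possible choice from the bge-reranker family):
+ ```python
+ from FlagEmbedding import FlagReranker
+
+ query = "What is BGE M3?"
+ candidates = [  # hypothetical passages returned by a first-stage hybrid retriever
+     "BGE M3 is an embedding model supporting dense retrieval, lexical matching and multi-vector interaction.",
+     "BM25 is a bag-of-words retrieval function.",
+ ]
+
+ # Re-score the candidates with a cross-encoder re-ranker, then sort by score
+ reranker = FlagReranker('BAAI/bge-reranker-large', use_fp16=True)
+ scores = reranker.compute_score([[query, passage] for passage in candidates])
+ reranked = [p for _, p in sorted(zip(scores, candidates), reverse=True)]
+ print(reranked[0])
+ ```
+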
+ ## News:
+ - 2/1/2024: **Thanks for the excellent tool from Vespa.** You can easily use multiple modes of BGE-M3 following this [notebook](https://github.com/vespa-engine/pyvespa/blob/master/docs/sphinx/source/examples/mother-of-all-embedding-models-cloud.ipynb)
+
+ ## Model Specs
+
+ | Model Name | Dimension | Sequence Length |
+ |:----:|:---:|:---:|
+ | [BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3) | 1024 | 8192 |
+ | [BAAI/bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5) | 1024 | 512 |
+ | [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) | 768 | 512 |
+ | [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) | 384 | 512 |
+
+ ## FAQ
+
+ **1. Introduction to the different retrieval methods**
+
+ - Dense retrieval: map the text into a single embedding, e.g., [DPR](https://arxiv.org/abs/2004.04906), [BGE-v1.5](https://github.com/FlagOpen/FlagEmbedding)
+ - Sparse retrieval (lexical matching): a vector of size equal to the vocabulary, with the majority of positions set to zero, calculating a weight only for tokens present in the text, e.g., BM25, [unicoil](https://arxiv.org/pdf/2106.14807.pdf), and [splade](https://arxiv.org/abs/2107.05720); see the sketch after this list.
+ - Multi-vector retrieval: use multiple vectors to represent a text, e.g., [ColBERT](https://arxiv.org/abs/2004.12832).
+
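+ As a rough sketch of the idea behind lexical matching (toy, hand-picked term weights for illustration; BGE-M3 exposes this through `compute_lexical_matching_score`, shown in the Usage section below):
+ ```python
+ # Lexical matching as a sparse dot product over the tokens two texts share.
+ # The term weights below are made up purely for illustration.
+ query_weights = {"what": 0.08, "is": 0.08, "bge": 0.40, "m3": 0.27}
+ doc_weights = {"bge": 0.35, "m3": 0.30, "embedding": 0.22, "model": 0.18}
+
+ score = sum(w * doc_weights[t] for t, w in query_weights.items() if t in doc_weights)
+ print(round(score, 3))  # 0.221 -- only the shared tokens "bge" and "m3" contribute
+ ```
+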
+ **2. Comparison with BGE-v1.5 and other monolingual models**
+
+ BGE-M3 is a multilingual model, and its ability in monolingual embedding retrieval may not surpass models specifically designed for a single language.
+ However, we still recommend trying BGE-M3 because of its versatility (support for multiple languages and long texts).
+ Moreover, it can simultaneously generate multiple representations, and using them together can enhance accuracy and generalization,
+ unlike most existing models, which can only perform dense retrieval.
+
+ In the open-source community, there are many excellent models (e.g., jina-embedding, colbert, e5, etc.),
+ and users can choose the model that suits their specific needs based on practical considerations,
+ such as whether they require multilingual or cross-language support, and whether they need to process long texts.
+
+ **3. How to use BGE-M3 in other projects?**
+
+ For embedding retrieval, you can employ the BGE-M3 model using the same approach as BGE.
+ The only difference is that the BGE-M3 model no longer requires adding instructions to the queries.
+ For sparse retrieval methods, most open-source libraries currently do not support direct use of the BGE-M3 model.
+ Contributions from the community are welcome.
+
+
+ **4. How to fine-tune the bge-m3 model?**
+
+ You can follow the steps in this [example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune)
+ to fine-tune the dense embedding.
+
+ Our code and data for unified fine-tuning (dense, sparse, and multi-vector) will be released.
+
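+ As a toy illustration of the training-data layout used by the fine-tuning example above (to the best of our knowledge, one JSON object per line with a query, positive passages, and negative passages; please verify the exact fields against the linked example):
+ ```python
+ import json
+
+ # Hypothetical toy record; the query/pos/neg field names follow the layout
+ # described in the FlagEmbedding fine-tuning example.
+ record = {
+     "query": "What is BGE M3?",
+     "pos": ["BGE M3 is an embedding model supporting dense, lexical and multi-vector retrieval."],
+     "neg": ["BM25 is a classical bag-of-words ranking function."],
+ }
+
+ with open("toy_finetune_data.jsonl", "w") as f:
+     f.write(json.dumps(record) + "\n")
+ ```
+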
+ ## Usage
+
+ Install:
+ ```
+ git clone https://github.com/FlagOpen/FlagEmbedding.git
+ cd FlagEmbedding
+ pip install -e .
+ ```
+ or:
+ ```
+ pip install -U FlagEmbedding
+ ```
+
+ ### Generate Embedding for text
+
+ - Dense Embedding
+ ```python
+ from FlagEmbedding import BGEM3FlagModel
+
+ model = BGEM3FlagModel('BAAI/bge-m3',
+                        use_fp16=True)  # Setting use_fp16 to True speeds up computation with a slight performance degradation
+
+ sentences_1 = ["What is BGE M3?", "Defination of BM25"]
+ sentences_2 = ["BGE M3 is an embedding model supporting dense retrieval, lexical matching and multi-vector interaction.",
+                "BM25 is a bag-of-words retrieval function that ranks a set of documents based on the query terms appearing in each document"]
+
+ embeddings_1 = model.encode(sentences_1,
+                             batch_size=12,
+                             max_length=8192,  # If you don't need such a long length, you can set a smaller value to speed up encoding.
+                             )['dense_vecs']
+ embeddings_2 = model.encode(sentences_2)['dense_vecs']
+ similarity = embeddings_1 @ embeddings_2.T
+ print(similarity)
+ # [[0.6265, 0.3477], [0.3499, 0.678 ]]
+ ```
+ You can also use sentence-transformers or huggingface transformers to generate dense embeddings.
+ Refer to [baai_general_embedding](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/baai_general_embedding#usage) for details.
+
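+ For example, a minimal sentence-transformers sketch (assuming the sentence-transformers package is installed; it loads the Transformer + CLS-pooling + Normalize stack defined by this repository's modules.json and 1_Pooling/config.json):
+ ```python
+ from sentence_transformers import SentenceTransformer
+
+ model = SentenceTransformer("BAAI/bge-m3")
+
+ sentences = ["What is BGE M3?",
+              "BGE M3 is an embedding model supporting dense retrieval, lexical matching and multi-vector interaction."]
+ embeddings = model.encode(sentences)  # shape (2, 1024); already L2-normalized by the Normalize module
+ print(embeddings @ embeddings.T)      # cosine similarities via dot product
+ ```
+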
+ - Sparse Embedding (Lexical Weight)
+ ```python
+ from FlagEmbedding import BGEM3FlagModel
+
+ model = BGEM3FlagModel('BAAI/bge-m3', use_fp16=True)  # Setting use_fp16 to True speeds up computation with a slight performance degradation
+
+ sentences_1 = ["What is BGE M3?", "Defination of BM25"]
+ sentences_2 = ["BGE M3 is an embedding model supporting dense retrieval, lexical matching and multi-vector interaction.",
+                "BM25 is a bag-of-words retrieval function that ranks a set of documents based on the query terms appearing in each document"]
+
+ output_1 = model.encode(sentences_1, return_dense=True, return_sparse=True, return_colbert_vecs=False)
+ output_2 = model.encode(sentences_2, return_dense=True, return_sparse=True, return_colbert_vecs=False)
+
+ # you can see the weight for each token:
+ print(model.convert_id_to_token(output_1['lexical_weights']))
+ # [{'What': 0.08356, 'is': 0.0814, 'B': 0.1296, 'GE': 0.252, 'M': 0.1702, '3': 0.2695, '?': 0.04092},
+ #  {'De': 0.05005, 'fin': 0.1368, 'ation': 0.04498, 'of': 0.0633, 'BM': 0.2515, '25': 0.3335}]
+
+
+ # compute the scores via lexical matching
+ lexical_scores = model.compute_lexical_matching_score(output_1['lexical_weights'][0], output_2['lexical_weights'][0])
+ print(lexical_scores)
+ # 0.19554901123046875
+
+ print(model.compute_lexical_matching_score(output_1['lexical_weights'][0], output_1['lexical_weights'][1]))
+ # 0.0
+ ```
+
+ - Multi-Vector (ColBERT)
+ ```python
+ from FlagEmbedding import BGEM3FlagModel
+
+ model = BGEM3FlagModel('BAAI/bge-m3', use_fp16=True)
+
+ sentences_1 = ["What is BGE M3?", "Defination of BM25"]
+ sentences_2 = ["BGE M3 is an embedding model supporting dense retrieval, lexical matching and multi-vector interaction.",
+                "BM25 is a bag-of-words retrieval function that ranks a set of documents based on the query terms appearing in each document"]
+
+ output_1 = model.encode(sentences_1, return_dense=True, return_sparse=True, return_colbert_vecs=True)
+ output_2 = model.encode(sentences_2, return_dense=True, return_sparse=True, return_colbert_vecs=True)
+
+ print(model.colbert_score(output_1['colbert_vecs'][0], output_2['colbert_vecs'][0]))
+ print(model.colbert_score(output_1['colbert_vecs'][0], output_2['colbert_vecs'][1]))
+ # 0.7797
+ # 0.4620
+ ```
+
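+ For intuition, a rough numpy sketch of the ColBERT-style "MaxSim" late interaction that multi-vector scoring is based on (an approximation for illustration only; the exact `colbert_score` implementation in FlagEmbedding may differ, e.g., in normalization or weighting):
+ ```python
+ import numpy as np
+
+ def maxsim_score(q_vecs: np.ndarray, d_vecs: np.ndarray) -> float:
+     """For each query token vector, take the max similarity over all document
+     token vectors, then average over the query tokens."""
+     q = q_vecs / np.linalg.norm(q_vecs, axis=1, keepdims=True)
+     d = d_vecs / np.linalg.norm(d_vecs, axis=1, keepdims=True)
+     sim = q @ d.T  # (num_query_tokens, num_doc_tokens)
+     return float(sim.max(axis=1).mean())
+
+ # Toy token embeddings: 2 query tokens and 3 document tokens, dimension 4
+ print(maxsim_score(np.random.rand(2, 4), np.random.rand(3, 4)))
+ ```
+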
+ ### Compute score for text pairs
+ Given a list of text pairs, you can get the scores computed by the different methods.
+ ```python
+ from FlagEmbedding import BGEM3FlagModel
+
+ model = BGEM3FlagModel('BAAI/bge-m3', use_fp16=True)
+
+ sentences_1 = ["What is BGE M3?", "Defination of BM25"]
+ sentences_2 = ["BGE M3 is an embedding model supporting dense retrieval, lexical matching and multi-vector interaction.",
+                "BM25 is a bag-of-words retrieval function that ranks a set of documents based on the query terms appearing in each document"]
+
+ sentence_pairs = [[i, j] for i in sentences_1 for j in sentences_2]
+
+ print(model.compute_score(sentence_pairs,
+                           max_passage_length=128,  # a smaller max length leads to lower latency
+                           weights_for_different_modes=[0.4, 0.2, 0.4]))  # weights_for_different_modes (w) is used to do a weighted sum: w[0]*dense_score + w[1]*sparse_score + w[2]*colbert_score
+
+ # {
+ #   'colbert': [0.7796499729156494, 0.4621465802192688, 0.4523794651031494, 0.7898575067520142],
+ #   'sparse': [0.195556640625, 0.00879669189453125, 0.0, 0.1802978515625],
+ #   'dense': [0.6259765625, 0.347412109375, 0.349853515625, 0.67822265625],
+ #   'sparse+dense': [0.482503205537796, 0.23454029858112335, 0.2332356721162796, 0.5122477412223816],
+ #   'colbert+sparse+dense': [0.6013619303703308, 0.3255828022956848, 0.32089319825172424, 0.6232916116714478]
+ # }
+ ```
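+
+ Judging from the numbers above, each combined score appears to be the weighted sum of the individual scores normalized by the sum of the weights used; a quick sanity check on the first pair, reusing the printed values:
+ ```python
+ # First pair ("What is BGE M3?" vs. the BGE M3 description), scores taken from the output above
+ w = [0.4, 0.2, 0.4]  # weights for dense, sparse, colbert
+ dense, sparse, colbert = 0.6259765625, 0.195556640625, 0.7796499729156494
+
+ print((w[0] * dense + w[1] * sparse) / (w[0] + w[1]))            # ~0.48250 -> 'sparse+dense'
+ print((w[0] * dense + w[1] * sparse + w[2] * colbert) / sum(w))  # ~0.60136 -> 'colbert+sparse+dense'
+ ```
+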
+
+ ## Evaluation
+
+ - Multilingual (MIRACL dataset)
+
+ ![avatar](./imgs/miracl.jpg)
+
+ - Cross-lingual (MKQA dataset)
+
+ ![avatar](./imgs/mkqa.jpg)
+
+ - Long Document Retrieval
+   - MLDR:
+   ![avatar](./imgs/long.jpg)
+   Please note that MLDR is a document retrieval dataset we constructed via LLMs,
+   covering 13 languages and including test, validation, and training sets.
+   We utilized the training set from MLDR to enhance the model's long document retrieval capabilities.
+   Therefore, comparing the baselines with `Dense w.o.long` (fine-tuned without the long-document dataset) is more equitable.
+   Additionally, this long document retrieval dataset will be open-sourced to address the current lack of open-source multilingual long-text retrieval datasets.
+   We believe that this data will be helpful for the open-source community in training document retrieval models.
+
+   - NarrativeQA:
+   ![avatar](./imgs/nqa.jpg)
+
+ ## Training
+ - Self-knowledge Distillation: combining multiple outputs from different
+ retrieval modes as a reward signal to enhance the performance of a single mode (especially for sparse retrieval and multi-vector (ColBERT) retrieval).
+ - Efficient Batching: improves efficiency when fine-tuning on long text.
+ The small-batch strategy is simple but effective, and can also be used to fine-tune large embedding models.
+ - MCLS: a simple method to improve performance on long text without fine-tuning.
+ If you do not have enough resources to fine-tune the model on long text, this method is useful.
+
+ Refer to our [report](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/BGE_M3/BGE_M3.pdf) for more details.
+
+ **The fine-tuning code and datasets will be open-sourced in the near future.**
+
+ ## Models
+
+ We release two versions:
+ - BAAI/bge-m3-unsupervised: the model after contrastive learning on a large-scale dataset
+ - BAAI/bge-m3: the final model, fine-tuned from BAAI/bge-m3-unsupervised
+
+ ## Acknowledgement
+
+ Thanks to the authors of the open-sourced datasets, including MIRACL, MKQA, NarrativeQA, etc.
+
+ ## Citation
+
+ If you find this repository useful, please consider giving it a star :star: and a citation.
+
+ ```
+
+ ```
config.json ADDED
@@ -0,0 +1,28 @@
+ {
+   "_name_or_path": "",
+   "architectures": [
+     "XLMRobertaModel"
+   ],
+   "attention_probs_dropout_prob": 0.1,
+   "bos_token_id": 0,
+   "classifier_dropout": null,
+   "eos_token_id": 2,
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 1024,
+   "initializer_range": 0.02,
+   "intermediate_size": 4096,
+   "layer_norm_eps": 1e-05,
+   "max_position_embeddings": 8194,
+   "model_type": "xlm-roberta",
+   "num_attention_heads": 16,
+   "num_hidden_layers": 24,
+   "output_past": true,
+   "pad_token_id": 1,
+   "position_embedding_type": "absolute",
+   "torch_dtype": "float32",
+   "transformers_version": "4.33.0",
+   "type_vocab_size": 1,
+   "use_cache": true,
+   "vocab_size": 250002
+ }
config_sentence_transformers.json ADDED
@@ -0,0 +1,7 @@
+ {
+   "__version__": {
+     "sentence_transformers": "2.2.2",
+     "transformers": "4.33.0",
+     "pytorch": "2.1.2+cu121"
+   }
+ }
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:993b2248881724788dcab8c644a91dfd63584b6e5604ff2037cb5541e1e38e7e
+ size 2271064456
modules.json ADDED
@@ -0,0 +1,20 @@
+ [
+   {
+     "idx": 0,
+     "name": "0",
+     "path": "",
+     "type": "sentence_transformers.models.Transformer"
+   },
+   {
+     "idx": 1,
+     "name": "1",
+     "path": "1_Pooling",
+     "type": "sentence_transformers.models.Pooling"
+   },
+   {
+     "idx": 2,
+     "name": "2",
+     "path": "2_Normalize",
+     "type": "sentence_transformers.models.Normalize"
+   }
+ ]
sentence_bert_config.json ADDED
@@ -0,0 +1,4 @@
+ {
+   "max_seq_length": 8192,
+   "do_lower_case": false
+ }
sentencepiece.bpe.model ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:cfc8146abe2a0488e9e2a0c56de7952f7c11ab059eca145a0a727afce0db2865
+ size 5069051
special_tokens_map.json ADDED
@@ -0,0 +1,51 @@
+ {
+   "bos_token": {
+     "content": "<s>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "cls_token": {
+     "content": "<s>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "eos_token": {
+     "content": "</s>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "mask_token": {
+     "content": "<mask>",
+     "lstrip": true,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "pad_token": {
+     "content": "<pad>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "sep_token": {
+     "content": "</s>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "unk_token": {
+     "content": "<unk>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   }
+ }
tokenizer.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:21106b6d7dab2952c1d496fb21d5dc9db75c28ed361a05f5020bbba27810dd08
+ size 17098108
tokenizer_config.json ADDED
@@ -0,0 +1,62 @@
+ {
+   "added_tokens_decoder": {
+     "0": {
+       "content": "<s>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "1": {
+       "content": "<pad>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "2": {
+       "content": "</s>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "250001": {
+       "content": "<mask>",
+       "lstrip": true,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "3": {
+       "content": "<unk>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     }
+   },
+   "bos_token": "<s>",
+   "clean_up_tokenization_spaces": true,
+   "cls_token": "<s>",
+   "eos_token": "</s>",
+   "mask_token": {
+     "__type": "AddedToken",
+     "content": "<mask>",
+     "lstrip": true,
+     "normalized": true,
+     "rstrip": false,
+     "single_word": false
+   },
+   "model_max_length": 8192,
+   "pad_token": "<pad>",
+   "sep_token": "</s>",
+   "sp_model_kwargs": {},
+   "tokenizer_class": "XLMRobertaTokenizer",
+   "unk_token": "<unk>"
+ }