BAAI /
ldwang committed on
Commit 24340fa
1 Parent(s): 1fd2b71
Files changed (1)
  1. README.md +34 -16
README.md CHANGED
@@ -18,6 +18,7 @@ language:
  <a href="#evaluation">Evaluation</a> |
  <a href="#train">Train</a> |
  <a href="#contact">Contact</a> |
+ <a href="#citation">Citation</a> |
  <a href="#license">License</a>
  <p>
 </h4>
@@ -31,6 +32,7 @@ FlagEmbedding can map any text to a low-dimensional dense vector which can be us
 And it also can be used in vector databases for LLMs.
 
 ************* 🌟**Updates**🌟 *************
+- 09/15/2023: Release [paper](https://arxiv.org/pdf/2309.07597.pdf) and [dataset](https://data.baai.ac.cn/details/BAAI-MTP).
 - 09/12/2023: New Release:
   - **New reranker models**: release the cross-encoder models `BAAI/bge-reranker-base` and `BAAI/bge-reranker-large`, which are more powerful than the embedding models. We recommend using or fine-tuning them to re-rank the top-k documents returned by embedding models.
   - **Updated embedding models**: release the `bge-*-v1.5` embedding models to alleviate the issue of the similarity distribution and enhance retrieval ability without an instruction.
@@ -65,10 +67,9 @@ And it also can be used in vector databases for LLMs.
 
 \*: If you need to search for passages relevant to a query, we suggest adding the instruction to the query; in other cases, no instruction is needed, just use the original query directly. In all cases, **no instruction** needs to be added to passages.
 
-\**: Different embedding model, reranker is a cross-encoder, which cannot be used to generate embedding. To balance the accuracy and time cost, cross-encoder is widely used to re-rank top-k documents retrieved by other simple models.
+\**: Different from the embedding model, the reranker takes a question and a document as input and directly outputs a similarity score instead of an embedding. To balance accuracy and time cost, a cross-encoder is widely used to re-rank the top-k documents retrieved by other, simpler models.
 For example, use the bge embedding model to retrieve the top 100 relevant documents, and then use the bge reranker to re-rank those 100 documents to get the final top-3 results (a sketch of this retrieve-then-rerank flow follows this hunk).
 
-
 ## Frequently asked questions
 
 <details>
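Editor's note: the retrieve-then-rerank workflow described in the note above can be sketched end to end using only the calls this README already documents (`FlagModel.encode`, `FlagReranker.compute_score`) plus numpy. The query, corpus, and cut-off sizes below are illustrative placeholders, not part of the original commit.

```python
# Minimal retrieve-then-rerank sketch; query, corpus, and cut-offs are toy placeholders.
import numpy as np
from FlagEmbedding import FlagModel, FlagReranker

instruction = "为这个句子生成表示以用于检索相关文章:"
query = "什么是熊猫?"                                            # illustrative query
corpus = ["大熊猫是一种生活在中国的熊科动物。", "这是一段无关的文本。"]  # illustrative passages

# Stage 1: bi-encoder retrieval. The instruction is prepended to the query only,
# never to the passages; passages are ranked by inner product.
embedder = FlagModel('BAAI/bge-large-zh-v1.5', use_fp16=True)
q_emb = embedder.encode([instruction + query])
p_emb = embedder.encode(corpus)
retrieval_scores = (q_emb @ p_emb.T)[0]
candidates = np.argsort(-retrieval_scores)[:100]   # keep the top-100 candidates

# Stage 2: cross-encoder re-ranking of those candidates only; keep the best 3.
reranker = FlagReranker('BAAI/bge-reranker-large', use_fp16=True)
rerank_scores = reranker.compute_score([[query, corpus[i]] for i in candidates])
top3 = [corpus[candidates[j]] for j in np.argsort(rerank_scores)[::-1][:3]]
print(top3)
```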
@@ -131,7 +132,9 @@ If it doesn't work for you, you can see [FlagEmbedding](https://github.com/FlagO
 from FlagEmbedding import FlagModel
 sentences_1 = ["样例数据-1", "样例数据-2"]
 sentences_2 = ["样例数据-3", "样例数据-4"]
-model = FlagModel('BAAI/bge-large-zh', query_instruction_for_retrieval="为这个句子生成表示以用于检索相关文章:")
+model = FlagModel('BAAI/bge-large-zh-v1.5',
+                  query_instruction_for_retrieval="为这个句子生成表示以用于检索相关文章:",
+                  use_fp16=True)  # setting use_fp16 to True speeds up computation with a slight performance degradation
 embeddings_1 = model.encode(sentences_1)
 embeddings_2 = model.encode(sentences_2)
 similarity = embeddings_1 @ embeddings_2.T
@@ -162,7 +165,7 @@ pip install -U sentence-transformers
 from sentence_transformers import SentenceTransformer
 sentences_1 = ["样例数据-1", "样例数据-2"]
 sentences_2 = ["样例数据-3", "样例数据-4"]
-model = SentenceTransformer('BAAI/bge-large-zh')
+model = SentenceTransformer('BAAI/bge-large-zh-v1.5')
 embeddings_1 = model.encode(sentences_1, normalize_embeddings=True)
 embeddings_2 = model.encode(sentences_2, normalize_embeddings=True)
 similarity = embeddings_1 @ embeddings_2.T
@@ -177,7 +180,7 @@ queries = ['query_1', 'query_2']
 passages = ["样例文档-1", "样例文档-2"]
 instruction = "为这个句子生成表示以用于检索相关文章:"
 
-model = SentenceTransformer('BAAI/bge-large-zh')
+model = SentenceTransformer('BAAI/bge-large-zh-v1.5')
 q_embeddings = model.encode([instruction+q for q in queries], normalize_embeddings=True)
 p_embeddings = model.encode(passages, normalize_embeddings=True)
 scores = q_embeddings @ p_embeddings.T
@@ -188,7 +191,7 @@ scores = q_embeddings @ p_embeddings.T
 You can use `bge` in langchain like this:
 ```python
 from langchain.embeddings import HuggingFaceBgeEmbeddings
-model_name = "BAAI/bge-small-en"
+model_name = "BAAI/bge-large-en-v1.5"
 model_kwargs = {'device': 'cuda'}
 encode_kwargs = {'normalize_embeddings': True}  # set True to compute cosine similarity
 model = HuggingFaceBgeEmbeddings(
@@ -212,8 +215,8 @@ import torch
 sentences = ["样例数据-1", "样例数据-2"]
 
 # Load model from HuggingFace Hub
-tokenizer = AutoTokenizer.from_pretrained('BAAI/bge-large-zh')
-model = AutoModel.from_pretrained('BAAI/bge-large-zh')
+tokenizer = AutoTokenizer.from_pretrained('BAAI/bge-large-zh-v1.5')
+model = AutoModel.from_pretrained('BAAI/bge-large-zh-v1.5')
 model.eval()
 
 # Tokenize sentences
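Editor's note: the hunk context above stops right after the `# Tokenize sentences` comment. For readability, here is a sketch of the usual continuation for bge embedding models, namely CLS-token pooling followed by L2 normalization; the variable names follow the ones already visible in this README (`encoded_input`, `sentence_embeddings`), but check the full file for the authoritative version.

```python
# Tokenize, encode, take the [CLS] embedding, then L2-normalize (a sketch of the
# steps that follow in the full README; not shown in this hunk's context).
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
with torch.no_grad():
    model_output = model(**encoded_input)
    sentence_embeddings = model_output[0][:, 0]   # [CLS] token embedding per sentence
sentence_embeddings = torch.nn.functional.normalize(sentence_embeddings, p=2, dim=1)
print("Sentence embeddings:", sentence_embeddings)
```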
@@ -233,6 +236,7 @@ print("Sentence embeddings:", sentence_embeddings)
 
 ### Usage for Reranker
 
+Different from the embedding model, the reranker takes a question and a document as input and directly outputs a similarity score instead of an embedding.
 You can get a relevance score by feeding a query and a passage to the reranker.
 The reranker is optimized with cross-entropy loss, so the relevance score is not bounded to a specific range.
 
@@ -242,10 +246,10 @@ The reranker is optimized based cross-entropy loss, so the relevance score is no
 pip install -U FlagEmbedding
 ```
 
-Get relevance score:
+Get relevance scores (higher scores indicate more relevance):
 ```python
 from FlagEmbedding import FlagReranker
-reranker = FlagReranker('BAAI/bge-reranker-base', use_fp16=True)  # use fp16 can speed up computing
+reranker = FlagReranker('BAAI/bge-reranker-large', use_fp16=True)  # Setting use_fp16 to True speeds up computation with a slight performance degradation
 
 score = reranker.compute_score(['query', 'passage'])
 print(score)
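Editor's note: as stated above, `compute_score` returns a raw, unbounded logit. If a probability-like value in (0, 1) is more convenient, a common post-processing step (an addition here, not something this README prescribes) is to apply a sigmoid:

```python
import math

def to_probability(raw_score: float) -> float:
    """Squash an unbounded reranker logit into (0, 1) with a sigmoid."""
    return 1.0 / (1.0 + math.exp(-raw_score))

# A raw score of 0.0 maps to 0.5; larger positive scores approach 1.0.
print(to_probability(2.0))
```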
@@ -259,10 +263,10 @@ print(scores)
 
 ```python
 import torch
-from transformers import AutoModelForSequenceClassification, AutoTokenizer, BatchEncoding, PreTrainedTokenizerFast
+from transformers import AutoModelForSequenceClassification, AutoTokenizer
 
-tokenizer = AutoTokenizer.from_pretrained('BAAI/bge-reranker-base')
-model = AutoModelForSequenceClassification.from_pretrained('BAAI/bge-reranker-base')
+tokenizer = AutoTokenizer.from_pretrained('BAAI/bge-reranker-large')
+model = AutoModelForSequenceClassification.from_pretrained('BAAI/bge-reranker-large')
 model.eval()
 
 pairs = [['what is panda?', 'hi'], ['what is panda?', 'The giant panda (Ailuropoda melanoleuca), sometimes called a panda bear or simply panda, is a bear species endemic to China.']]
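Editor's note: the hunk ends with the `pairs` definition; the scoring step itself falls outside the diff context. Here is a sketch of how such a cross-encoder is typically scored with plain Transformers (the `print(scores)` in this hunk's header suggests the full README does something very similar; verify against the complete file):

```python
# Tokenize the query-passage pairs and read the single-logit relevance scores.
with torch.no_grad():
    inputs = tokenizer(pairs, padding=True, truncation=True, return_tensors='pt', max_length=512)
    scores = model(**inputs, return_dict=True).logits.view(-1).float()
print(scores)
```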
@@ -328,7 +332,7 @@ Please refer to [C_MTEB](https://github.com/FlagOpen/FlagEmbedding/blob/master/C
 - **Reranking**:
 See [C_MTEB](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB/) for the evaluation script.
 
-| Model | T2Reranking | T2RerankingZh2En\* | T2RerankingEn2Zh\* | MmarcoReranking | CMedQAv1 | CMedQAv2 | Avg |
+| Model | T2Reranking | T2RerankingZh2En\* | T2RerankingEn2Zh\* | MMarcoReranking | CMedQAv1 | CMedQAv2 | Avg |
 |:-------------------------------|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|
 | text2vec-base-multilingual | 64.66 | 62.94 | 62.51 | 14.37 | 48.46 | 48.6 | 50.26 |
 | multilingual-e5-small | 65.62 | 60.94 | 56.41 | 29.91 | 67.26 | 66.54 | 57.78 |
@@ -341,13 +345,13 @@ See [C_MTEB](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB/) for
 | [BAAI/bge-reranker-base](https://huggingface.co/BAAI/bge-reranker-base) | 67.28 | 63.95 | 60.45 | 35.46 | 81.26 | 84.1 | 65.42 |
 | [BAAI/bge-reranker-large](https://huggingface.co/BAAI/bge-reranker-large) | 67.6 | 64.03 | 61.44 | 37.16 | 82.15 | 84.18 | 66.09 |
 
-\* : T2RerankingZh2En and T2RerankingEn2Zh are cross-language retrieval task
+\* : T2RerankingZh2En and T2RerankingEn2Zh are cross-language retrieval tasks
 
 ## Train
 
 ### BAAI Embedding
 
-We pre-train the models using retromae and train them on large-scale pairs data using contrastive learning.
+We pre-train the models using [RetroMAE](https://github.com/staoxiao/RetroMAE) and train them on large-scale paired data using contrastive learning.
 **You can fine-tune the embedding model on your data following our [examples](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune)** (a sketch of the expected training-data layout follows this hunk).
 We also provide a [pre-train example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/pretrain).
 Note that the goal of pre-training is to reconstruct the text, and the pre-trained model cannot be used for similarity calculation directly; it needs to be fine-tuned.
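Editor's note: as an illustration of the fine-tuning entry point mentioned above, here is a tiny, hypothetical data-preparation snippet. It assumes the FlagEmbedding fine-tuning examples consume a JSON-lines file whose records carry `query`, `pos`, and `neg` fields; treat that layout as an assumption and confirm it against the linked examples before use.

```python
# Hypothetical sketch: write a JSON-lines training file in the assumed
# query / pos / neg layout. File name and contents are placeholders.
import json

samples = [
    {
        "query": "什么是熊猫?",
        "pos": ["大熊猫是一种生活在中国的熊科动物。"],   # passages relevant to the query
        "neg": ["这是一段与熊猫无关的文本。"],           # hard negatives for contrastive learning
    },
]

with open("toy_finetune_data.jsonl", "w", encoding="utf-8") as f:
    for sample in samples:
        f.write(json.dumps(sample, ensure_ascii=False) + "\n")
```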
@@ -370,6 +374,20 @@ If you have any question or suggestion related to this project, feel free to ope
 You can also email Shitao Xiao (stxiao@baai.ac.cn) and Zheng Liu (liuzheng@baai.ac.cn).
 
 
+## Citation
+
+If you find our work helpful, please cite us:
+```
+@misc{bge_embedding,
+      title={C-Pack: Packaged Resources To Advance General Chinese Embedding},
+      author={Shitao Xiao and Zheng Liu and Peitian Zhang and Niklas Muennighoff},
+      year={2023},
+      eprint={2309.07597},
+      archivePrefix={arXiv},
+      primaryClass={cs.CL}
+}
+```
+
 ## License
 FlagEmbedding is licensed under the [MIT License](https://github.com/FlagOpen/FlagEmbedding/blob/master/LICENSE). The released models can be used for commercial purposes free of charge.